| Unnamed: 0 (int64, 0-15.9k) | cleaned_code (string, length 67-124k, nullable) | cleaned_prompt (string, length 168-30.3k, nullable) |
|---|---|---|
11,600
|
<ASSISTANT_TASK:>
Python Code:
from openhunt.mordorutils import *
spark = get_spark()
sd_file = "https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/lateral_movement/host/covenant_dcom_iertutil_dll_hijack.zip"
registerMordorSQLTable(spark, sd_file, "sdTable")
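# Analytic I: Security event 5145 (network share object access) where a non-machine account
# (SubjectUserName not ending in $) requests write access (AccessMask 0x2) to a path ending
# in Internet Explorer\iertutil.dll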
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, ShareName, SubjectUserName, SubjectLogonId, IpAddress, IpPort, RelativeTargetName
FROM sdTable
WHERE LOWER(Channel) = "security"
AND EventID = 5145
AND RelativeTargetName LIKE '%Internet Explorer\\\iertutil.dll'
AND NOT SubjectUserName LIKE '%$'
AND AccessMask = '0x2'
'''
)
df.show(10,False)
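# Analytic II: join Security 5145 share-access events with Sysmon EventID 11 (file create)
# events where the System process wrote iertutil.dll, matching on the file name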
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, ShareName, SubjectUserName, SubjectLogonId, IpAddress, IpPort, RelativeTargetName
FROM sdTable b
INNER JOIN (
SELECT LOWER(REVERSE(SPLIT(TargetFilename, '\'))[0]) as TargetFilename
FROM sdTable
WHERE Channel = 'Microsoft-Windows-Sysmon/Operational'
AND Image = 'System'
AND EventID = 11
AND TargetFilename LIKE '%Internet Explorer\\\iertutil.dll'
) a
ON LOWER(REVERSE(SPLIT(RelativeTargetName, '\'))[0]) = a.TargetFilename
WHERE LOWER(b.Channel) = 'security'
AND b.EventID = 5145
AND b.AccessMask = '0x2'
'''
)
df.show(10,False)
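# Analytic III: Security 5145 share-access events joined with Sysmon 11 file-create events
# for iertutil.dll written by the System process (same query as Analytic II)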
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, ShareName, SubjectUserName, SubjectLogonId, IpAddress, IpPort, RelativeTargetName
FROM sdTable b
INNER JOIN (
SELECT LOWER(REVERSE(SPLIT(TargetFilename, '\'))[0]) as TargetFilename
FROM sdTable
WHERE Channel = 'Microsoft-Windows-Sysmon/Operational'
AND Image = 'System'
AND EventID = 11
AND TargetFilename LIKE '%Internet Explorer\\\iertutil.dll'
) a
ON LOWER(REVERSE(SPLIT(RelativeTargetName, '\'))[0]) = a.TargetFilename
WHERE LOWER(b.Channel) = 'security'
AND b.EventID = 5145
AND b.AccessMask = '0x2'
'''
)
df.show(10,False)
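# Analytic IV: additionally require a Sysmon EventID 7 (image load) of iertutil.dll by
# iexplore.exe, linking the loaded DLL to the file created by System and to the remote share write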
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, ShareName, SubjectUserName, SubjectLogonId, IpAddress, IpPort, RelativeTargetName
FROM sdTable d
INNER JOIN (
SELECT LOWER(REVERSE(SPLIT(TargetFilename, '\'))[0]) as TargetFilename
FROM sdTable b
INNER JOIN (
SELECT ImageLoaded
FROM sdTable
WHERE Channel = 'Microsoft-Windows-Sysmon/Operational'
AND EventID = 7
AND LOWER(Image) LIKE '%iexplore.exe'
AND ImageLoaded LIKE '%Internet Explorer\\\iertutil.dll'
) a
ON b.TargetFilename = a.ImageLoaded
WHERE b.Channel = 'Microsoft-Windows-Sysmon/Operational'
AND b.Image = 'System'
AND b.EventID = 11
) c
ON LOWER(REVERSE(SPLIT(RelativeTargetName, '\'))[0]) = c.TargetFilename
WHERE LOWER(d.Channel) = 'security'
AND d.EventID = 5145
AND d.AccessMask = '0x2'
'''
)
df.show(10,False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download & Process Security Dataset
Step2: Analytic I
Step3: Analytic II
Step4: Analytic III
Step5: Analytic IV
|
11,601
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Denis A. Engemann <denis.engemann@gmail.com>
# Mainak Jas <mainak.jas@telecom-paristech.fr>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname, condition='Left Auditory',
baseline=(None, 0))
# plot with bads
evoked.plot(exclude=[], picks=('grad', 'eeg'))
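# Interpolate bad channels (also works on Raw and Epochs objects); reset_bads=False keeps them marked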
evoked_interp = evoked.copy().interpolate_bads(reset_bads=False)
evoked_interp.plot(exclude=[], picks=('grad', 'eeg'))
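# Minimum-norm ('MNE') interpolation can also be used for the EEG channels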
evoked_interp_mne = evoked.copy().interpolate_bads(
reset_bads=False, method=dict(eeg='MNE'), verbose=True)
evoked_interp_mne.plot(exclude=[], picks=('grad', 'eeg'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Compute interpolation (also works with Raw and Epochs objects)
Step2: You can also use minimum-norm for EEG as well as MEG
|
11,602
|
<ASSISTANT_TASK:>
Python Code:
import h2o
import imp
from h2o.estimators.kmeans import H2OKMeansEstimator
# Start a local instance of the H2O engine.
h2o.init();
iris = h2o.import_file(path="https://github.com/h2oai/h2o-3/raw/master/h2o-r/h2o-package/inst/extdata/iris_wheader.csv")
iris.describe()
try:
imp.find_module('pandas')
can_pandas = True
import pandas as pd
except:
can_pandas = False
try:
imp.find_module('seaborn')
can_seaborn = True
import seaborn as sns
except:
can_seaborn = False
%matplotlib inline
if can_seaborn:
sns.set()
if can_seaborn:
sns.set_context("notebook")
sns.pairplot(iris.as_data_frame(), vars=["sepal_len", "sepal_wid", "petal_len", "petal_wid"], hue="class");
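# Fit a k-means model for each candidate number of clusters from 2 to 12 (random initialization, fixed seed)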
results = [H2OKMeansEstimator(k=clusters, init="Random", seed=2, standardize=True) for clusters in range(2,13)]
for estimator in results:
estimator.train(x=iris.col_names[0:-1], training_frame = iris)
import math as math
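# Summarize a fitted model: total within-cluster sum of squares plus AIC/BIC-style penalties on model size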
def diagnostics_from_clusteringmodel(model):
total_within_sumofsquares = model.tot_withinss()
number_of_clusters = len(model.centers()[0])
number_of_dimensions = len(model.centers())
number_of_rows = sum(model.size())
aic = total_within_sumofsquares + 2 * number_of_dimensions * number_of_clusters
bic = total_within_sumofsquares + math.log(number_of_rows) * number_of_dimensions * number_of_clusters
return {'Clusters':number_of_clusters,
'Total Within SS':total_within_sumofsquares,
'AIC':aic,
'BIC':bic}
if can_pandas:
diagnostics = pd.DataFrame( [diagnostics_from_clusteringmodel(model) for model in results])
diagnostics.set_index('Clusters', inplace=True)
if can_pandas:
diagnostics.plot(kind='line');
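# Hedged sketch (not in the original notebook): read off the cluster count that minimizes AIC and BIC
# from the diagnostics table; assumes pandas is available and 'Clusters' is the index as set above.
if can_pandas:
    print("AIC is minimized at", diagnostics['AIC'].idxmin(), "clusters")
    print("BIC is minimized at", diagnostics['BIC'].idxmin(), "clusters")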
clusters = 4
predicted = results[clusters-2].predict(iris)
iris["Predicted"] = predicted["predict"].asfactor()
if can_seaborn:
sns.pairplot(iris.as_data_frame(), vars=["sepal_len", "sepal_wid", "petal_len", "petal_wid"], hue="Predicted");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The next step of using H2O is to parse and load data into H2O's in-memory columnar compressed storage. Today we will be using the Iris flower data set.
Step2: H2O provides convenient commands to understand the H2OFrame object, the data structure for data that will be used by H2O's machine learning algorithms. Because H2O is often used for very large datasets and in a cluster computing configuration, information about how much the data is compressed in memory and how it is distributed across the H2O nodes is provided, along with standard summary statistics on the data in the H2OFrame.
Step3: The iris data set is labeled into three classes; there are four measurements that were taken for each iris. While we will not be using the labeled data for clustering, it does provide us a convenient comparison and visualization of the data as it was provided. In this example I use Seaborn for the visualization of the data.
Step4: The next step is to model the data using H2O's kmeans algorithm. We will do this across a range of cluster options and collect each H2O model object as an element in an array. In this example the initial position of the cluster centers is selected at random and the random number seed is set for reproducibility. Because H2O is designed for high performance it is quick and easy to explore many different hyper-parameter settings during modeling to find the model that best suits your needs.
Step5: There are three diagnostics that will be demonstrated to help with determining the number of clusters
Step6: From the plot below, to me, it is difficult to find a 'knee' in the rate of decrease of the total within-cluster sum of squares. It might be at 4 clusters, or it might be at 7. AIC is minimized at 7 clusters, and BIC is minimized at 4 clusters.
Step7: For demonstration purposes, I will select the number of clusters to be 4. I will use the H2O model for 4 clusters created previously and use it to assign cluster membership to each of the original data points. This predicted cluster assignment is then added to the original iris data frame as a new vector (mostly to make plotting easy).
Step8: Finally, I will plot the predicted cluster membership using the same layout as on the original data earlier in the notebook.
|
11,603
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv("07-hw-animals.csv")
df
df.columns.values
df.head(3)
df.sort_values(by='length', ascending = False).head(3)
df['animal'].value_counts()
df['animal'] == 'dog'
df[df['animal'] == 'dog']
df[df['length'] > 40]
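# 1 cm is 0.393701 inches; multiply lengths in cm by this factor to convert to inches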
cm_in_inch = 0.393701
df['length_inches'] = df['length'] * cm_in_inch
df
cats = df[df['animal'] == 'cat']
cats
dogs = df[df['animal'] == 'dog']
dogs
cats[cats['length_inches']> 12]
#Using the normal dataframe
df[(df['animal'] == 'cat') & (df['length_inches'] > 12)]
cats['length'].describe()['mean']
dogs['length'].describe()['mean']
animals = df.groupby(['animal'])
animals['length'].mean()
dogs['length'].hist()
plt.style.use('ggplot')
dogs['length'].hist()
df.plot(kind='barh', x='name', y='length')
cats.sort_values(by='length').plot(kind='barh', x='name', y='length')
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('richpeople.csv', encoding='latin-1')
df.head(10)
richpeople = df[df['year'] == 2014]
richpeople.columns
richpeople.sort_values(by='networthusbillion', ascending=False).head(10)
richpeople.sort_values(by='networthusbillion').head(10)
print("The average networth of billionaires in US billion is", richpeople['networthusbillion'].mean())
richpeople.groupby('gender')['networthusbillion'].mean()
richpeople['citizenship'].value_counts()
richpeople['industry'].value_counts()
print("On average billionaires are", richpeople['age'].mean(), "years old.")
selfmade = richpeople[richpeople['selfmade'] == 'self-made']
print("Selfmade billionaires are about", selfmade['age'].mean(), "years old.")
non_selfmade = richpeople[richpeople['selfmade'] != 'self-made']
print("Non-selfmade billionaires are on average", non_selfmade['age'].mean(), "years old.")
richpeople.sort_values(by='age', ascending = True).head(3)
richpeople.sort_values(by='age', ascending = False).head(3)
plt.style.use('ggplot')
richpeople['age'].hist()
richpeople.plot(kind='scatter', x = 'age', y='networthusbillion', figsize=(10,10), alpha=0.3)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Set all graphics from matplotlib to display inline
Step2: 3. Read the csv in (it should be UTF-8 already so you don't have to worry about encoding), save it with the proper boring name
Step3: 4. Display the names of the columns in the csv
Step4: 5. Display the first 3 animals.
Step5: 6. Sort the animals to see the 3 longest animals.
Step6: 7. What are the counts of the different values of the "animal" column?
Step7: 8. Only select the dogs.
Step8: 9. Display all of the animals that are greater than 40 cm.
Step9: 10. 'length' is the animal's length in cm. Create a new column called inches that is the length in inches.
Step10: 11. Save the cats to a separate variable called "cats." Save the dogs to a separate variable called "dogs."
Step11: 12. Display all of the animals that are cats and above 12 inches long. First do it using the "cats" variable, then do it using your normal dataframe
Step12: 13. What's the mean length of a cat?
Step13: 14. What's the mean length of a dog?
Step14: 15. Use groupby to accomplish both of the above tasks at once.
Step15: 16. Make a histogram of the length of dogs.
Step16: 17. Change your graphing style to be something else (anything else!)
Step17: 18. Make a horizontal bar graph of the length of the animals, with their name as the label
Step18: 19. Make a sorted horizontal bar graph of the cats, with the larger cats on top.
Step19: Part 2
Step20: 1) Who are the top 10 richest billionaires?
Step21: 2) Who are the top 10 poorest billionaires?
Step22: 3) What's the average wealth of a billionaire? Male? Female?
Step23: 4) What country are most billionaires from?
Step24: 4) What are the most common industries for billionaires to come from?
Step25: 5) How old are billionaires? How old are billionaires self made vs. non self made?
Step26: 6) Who are the youngest billionaires?
Step27: 7) Who are the oldest?
Step28: 8) Age distribution - maybe make a graph about it
Step29: 9) Maybe plot their net worth vs age (scatterplot)
|
11,604
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
def convert_source_sentence(sentence):
return [source_vocab_to_int[w] for w in sentence.split(" ") if w!=""]
def convert_target_sentence(sentence):
return [target_vocab_to_int[w] for w in sentence.split(" ") if w!=""]+[target_vocab_to_int['<EOS>']]
return [convert_source_sentence(sentence) for sentence in source_text.split("\n")],\
[convert_target_sentence(sentence) for sentence in target_text.split("\n")]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
helper.preprocess_and_save_data
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
"""
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
"""
input = tf.placeholder(tf.int32,[None,None],name="input")
targets = tf.placeholder(tf.int32,[None,None],name="targets")
learning_rate = tf.placeholder(tf.float32,name="learning_rate")
keep_probability = tf.placeholder(tf.float32,name="keep_prob")
target_sequence_length = tf.placeholder(tf.int32,[None],name="target_sequence_length")
max_target_len = tf.reduce_max(target_sequence_length)
source_sequence_len = tf.placeholder(tf.int32,[None],name="source_sequence_length")
# TODO: Implement Function
return input, targets, learning_rate, keep_probability, target_sequence_length, max_target_len, source_sequence_len
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
import inspect
inspect.getsourcelines(tests.test_process_encoding_input)
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for encoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
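# Prepend the <GO> id to every target sequence and drop the last token of each sequence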
return(tf.concat([tf.constant([[target_vocab_to_int["<GO>"]]]*batch_size),\
tf.strided_slice(target_data,[0,0],[batch_size,-1],[1,1])],1))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
"""
# TODO: Implement Function
embed = tf.contrib.layers.embed_sequence(rnn_inputs,rnn_size,encoding_embedding_size)
lstm_cell = tf.contrib.rnn.LSTMCell(rnn_size)
lstm_stack = tf.contrib.rnn.DropoutWrapper(\
tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.LSTMCell(rnn_size) for _ in range(num_layers)]),
output_keep_prob=keep_prob)
return tf.nn.dynamic_rnn(lstm_stack,embed,source_sequence_length,dtype=tf.float32)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
"""
training_helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input,
sequence_length=target_sequence_length)
decoder = tf.contrib.seq2seq.BasicDecoder(\
tf.contrib.rnn.DropoutWrapper(dec_cell,output_keep_prob=keep_prob),\
training_helper,encoder_state,output_layer)
final_outputs, final_state = tf.contrib.seq2seq.dynamic_decode(decoder,maximum_iterations=max_summary_length)
return(final_outputs)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
"""
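# Tile the <GO> id across the batch, then greedily decode from the embeddings until <EOS> or the maximum length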
start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens')
embed_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,start_tokens,end_of_sequence_id)
decoder = tf.contrib.seq2seq.BasicDecoder(\
dec_cell,\
embed_helper,encoder_state,output_layer)
final_outputs, final_state = tf.contrib.seq2seq.dynamic_decode(decoder,maximum_iterations=max_target_sequence_length,impute_finished=True)
return final_outputs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
"""
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
lstm_cell = tf.contrib.rnn.LSTMCell(rnn_size)
lstm_stack = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.LSTMCell(rnn_size) for _ in range(num_layers)])
output_layer = Dense(target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
with tf.variable_scope("decode") as scope:
train_output = decoding_layer_train(encoder_state, lstm_stack, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)
scope.reuse_variables()
infer_output = decoding_layer_infer(encoder_state, lstm_stack, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],\
max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob)
return train_output, infer_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
_, encoding_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
enc_embedding_size)
decoder_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
return(decoding_layer(decoder_input, encoding_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 50
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 50
decoding_embedding_size = 50
# Learning Rate
learning_rate = 0.003
# Dropout Keep Probability
keep_probability = 0.5
display_step = 10
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
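# Build the seq2seq model; the source batch is reversed along its last (time) axis before encoding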
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
"""Pad sentences with <PAD> so that each sentence of a batch has the same length"""
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
"""Batch targets, sources, and the lengths of their sentences together"""
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
"""Calculate accuracy"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
return [vocab_to_int.get(w,vocab_to_int["<UNK>"]) for w in sentence.lower().split(" ")]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoder Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step40: Batch and pad the source and target sequences
Step43: Train
Step45: Save Parameters
Step47: Checkpoint
Step50: Sentence to Sequence
Step52: Translate
|
11,605
|
<ASSISTANT_TASK:>
Python Code:
root_directory = 'D:/github/w_vattenstatus/ekostat_calculator'#"../" #os.getcwd()
workspace_directory = root_directory + '/workspaces'
resource_directory = root_directory + '/resources'
#alias = 'lena'
user_id = 'test_user' # perhaps this should be the off_line user?
# workspace_alias = 'lena_indicator' # kustzonsmodellen_3daydata
workspace_alias = 'kustzonsmodellen_3daydata'
# ## Initiate EventHandler
print(root_directory)
paths = {'user_id': user_id,
'workspace_directory': root_directory + '/workspaces',
'resource_directory': root_directory + '/resources',
'log_directory': 'D:/github' + '/log',
'test_data_directory': 'D:/github' + '/test_data',
'cache_directory': 'D:/github/w_vattenstatus/cache'}
t0 = time.time()
ekos = EventHandler(**paths)
#request = ekos.test_requests['request_workspace_list']
#response = ekos.request_workspace_list(request)
#ekos.write_test_response('request_workspace_list', response)
print('-'*50)
print('Time for request: {}'.format(time.time()-t0))
###############################################################################################################################
# ### Make a new workspace
# ekos.copy_workspace(source_uuid='default_workspace', target_alias='kustzonsmodellen_3daydata')
# ### See existing workspaces and choose workspace name to load
ekos.print_workspaces()
workspace_uuid = ekos.get_unique_id_for_alias(workspace_alias = workspace_alias) #'kuszonsmodellen' lena_indicator
print(workspace_uuid)
workspace_alias = ekos.get_alias_for_unique_id(workspace_uuid = workspace_uuid)
###############################################################################################################################
# ### Load existing workspace
ekos.load_workspace(unique_id = workspace_uuid)
###############################################################################################################################
# ### import data
# ekos.import_default_data(workspace_alias = workspace_alias)
###############################################################################################################################
# ### Load all data in workspace
# #### if there is old data that you want to remove
ekos.get_workspace(workspace_uuid = workspace_uuid).delete_alldata_export()
ekos.get_workspace(workspace_uuid = workspace_uuid).delete_all_export_data()
###############################################################################################################################
# #### to just load existing data in workspace
ekos.load_data(workspace_uuid = workspace_uuid)
###############################################################################################################################
# ### check workspace data length
w = ekos.get_workspace(workspace_uuid = workspace_uuid)
len(w.data_handler.get_all_column_data_df())
###############################################################################################################################
# ### see subsets in data
for subset_uuid in w.get_subset_list():
print('uuid {} alias {}'.format(subset_uuid, w.uuid_mapping.get_alias(unique_id=subset_uuid)))
###############################################################################################################################
# # Step 0
print(w.data_handler.all_data.columns)
###############################################################################################################################
# ### Apply first data filter
w.apply_data_filter(step = 0) # This sets the first level of data filter in the IndexHandler
###############################################################################################################################
# # Step 1
# ### make new subset
# w.copy_subset(source_uuid='default_subset', target_alias='test_kustzon')
###############################################################################################################################
# ### Choose subset name to load
subset_alias = 'test_kustzon'
# subset_alias = 'period_2007-2012_refvalues_2013'
# subset_alias = 'test_subset'
subset_uuid = ekos.get_unique_id_for_alias(workspace_alias = workspace_alias, subset_alias = subset_alias)
print('subset_alias', subset_alias, 'subset_uuid', subset_uuid)
# #### year filter
w.set_data_filter(subset = subset_uuid, step=1,
filter_type='include_list',
filter_name='MYEAR',
data=[2007,2008,2009,2010,2011,2012])#['2011', '2012', '2013']) #, 2014, 2015, 2016
###############################################################################################################################
# #### waterbody filter
w.set_data_filter(subset = subset_uuid, step=1,
filter_type='include_list',
filter_name='viss_eu_cd', data = []) #'SE584340-174401', 'SE581700-113000', 'SE654470-222700', 'SE633000-195000', 'SE625180-181655'
# data=['SE584340-174401', 'SE581700-113000', 'SE654470-222700', 'SE633000-195000', 'SE625180-181655'])
# wb with no data for din 'SE591400-182320'
f1 = w.get_data_filter_object(subset = subset_uuid, step=1)
print(f1.include_list_filter)
print('subset_alias:', subset_alias, '\nsubset uuid:', subset_uuid)
f1 = w.get_data_filter_object(subset = subset_uuid, step=1)
print(f1.include_list_filter)
###############################################################################################################################
# ## Apply step 1 datafilter to subset
w.apply_data_filter(subset = subset_uuid, step = 1)
filtered_data = w.get_filtered_data(step = 1, subset = subset_uuid)
print(filtered_data['VISS_EU_CD'].unique())
filtered_data[['AMON','NTRA','DIN','CPHL_INTEG_CALC','DEPH']].head()
### Load indicator settings filter
w.get_step_object(step = 2, subset = subset_uuid).load_indicator_settings_filters()
###############################################################################################################################
### set available indicators
w.get_available_indicators(subset= subset_uuid, step=2)
###############################################################################################################################
# ### choose indicators
#list(zip(typeA_list, df_step1.WATER_TYPE_AREA.unique()))
# indicator_list = ['oxygen','din_winter','ntot_summer', 'ntot_winter', 'dip_winter', 'ptot_summer', 'ptot_winter','bqi', 'biov', 'chl', 'secchi']
# indicator_list = ['din_winter','ntot_summer', 'ntot_winter', 'dip_winter', 'ptot_summer', 'ptot_winter']
#indicator_list = ['biov', 'chl']
# indicator_list = ['bqi', 'biov', 'chl', 'secchi']
#indicator_list = ['bqi', 'secchi'] + ['biov', 'chl'] + ['din_winter']
# indicator_list = ['din_winter','ntot_summer']
# indicator_list = ['indicator_' + indicator for indicator in indicator_list]
indicator_list = w.available_indicators
###############################################################################################################################
# ### Apply indicator data filter
print('apply indicator data filter to {}'.format(indicator_list))
for indicator in indicator_list:
w.apply_indicator_data_filter(step = 2,
subset = subset_uuid,
indicator = indicator)#,
# water_body_list = test_wb)
#print(w.mapping_objects['water_body'][wb])
#print('*************************************')
#df = w.get_filtered_data(subset = subset_uuid, step = 'step_2', water_body = 'SE625180-181655', indicator = 'indicator_din_winter').dropna(subset = ['DIN'])
# ### Set up indicator objects
print('indicator set up to {}'.format(indicator_list))
w.get_step_object(step = 3, subset = subset_uuid).indicator_setup(indicator_list = indicator_list)
###############################################################################################################################
# ### CALCULATE STATUS
print('CALCULATE STATUS to {}'.format(indicator_list))
w.get_step_object(step = 3, subset = subset_uuid).calculate_status(indicator_list = indicator_list)
###############################################################################################################################
# ### CALCULATE QUALITY ELEMENTS
w.get_step_object(step = 3, subset = subset_uuid).calculate_quality_element(quality_element = 'nutrients')
# w.get_step_object(step = 3, subset = subset_uuid).calculate_quality_element(quality_element = 'phytoplankton')
# w.get_step_object(step = 3, subset = subset_uuid).calculate_quality_element(quality_element = 'bottomfauna')
# w.get_step_object(step = 3, subset = subset_uuid).calculate_quality_element(quality_element = 'oxygen')
# w.get_step_object(step = 3, subset = subset_uuid).calculate_quality_element(quality_element = 'secchi')
# w.get_step_object(step = 3, subset = subset_uuid).calculate_quality_element(subset_unique_id = subset_uuid, quality_element = 'Phytoplankton')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set subset filters
Step2: #########################################################################################################################
Step 2
Step3: #########################################################################################################################
Step 3
|
11,606
|
<ASSISTANT_TASK:>
Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
! pip3 install -U google-cloud-storage $USER_FLAG
if os.environ["IS_TESTING"]:
! pip3 install --upgrade tensorflow $USER_FLAG
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
REGION = "us-central1" # @param {type: "string"}
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
! gsutil mb -l $REGION $BUCKET_NAME
! gsutil ls -al $BUCKET_NAME
import google.cloud.aiplatform as aip
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
IMPORT_FILE = "gs://cloud-ml-tables-data/bank-marketing.csv"
count = ! gsutil cat $IMPORT_FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $IMPORT_FILE | head
heading = ! gsutil cat $IMPORT_FILE | head -n1
label_column = str(heading).split(",")[-1].split("'")[0]
print("Label Column Name", label_column)
if label_column is None:
raise Exception("label column missing")
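# Create a Vertex AI tabular dataset from the CSV import file in Cloud Storage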
dataset = aip.TabularDataset.create(
display_name="Bank Marketing" + "_" + TIMESTAMP, gcs_source=[IMPORT_FILE]
)
print(dataset.resource_name)
dag = aip.AutoMLTabularTrainingJob(
display_name="bank_" + TIMESTAMP,
optimization_prediction_type="classification",
optimization_objective="minimize-log-loss",
)
print(dag)
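# Run the AutoML training pipeline with a 60/20/20 train/validation/test split and an 8000 milli-node-hour budget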
model = dag.run(
dataset=dataset,
model_display_name="bank_" + TIMESTAMP,
training_fraction_split=0.6,
validation_fraction_split=0.2,
test_fraction_split=0.2,
budget_milli_node_hours=8000,
disable_early_stopping=False,
target_column=label_column,
)
# Get model resource ID
models = aip.Model.list(filter="display_name=bank_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
! gsutil cat $IMPORT_FILE | head -n 1 > tmp.csv
! gsutil cat $IMPORT_FILE | tail -n 10 >> tmp.csv
! cut -d, -f1-16 tmp.csv > batch.csv
gcs_input_uri = BUCKET_NAME + "/test.csv"
! gsutil cp batch.csv $gcs_input_uri
batch_predict_job = model.batch_predict(
job_display_name="bank_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
instances_format="csv",
predictions_format="csv",
generate_explanation=True,
sync=False,
)
print(batch_predict_job)
batch_predict_job.wait()
import tensorflow as tf
bp_iter_outputs = batch_predict_job.iter_outputs()
explanation_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("explanation"):
explanation_results.append(blob.name)
tags = list()
for explanation_result in explanation_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{explanation_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
print(line)
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the AutoML or Pipeline trainig job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
# Delete the custom trainig job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Step3: Before you begin
Step4: Region
Step5: Timestamp
Step6: Authenticate your Google Cloud account
Step7: Create a Cloud Storage bucket
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Step11: Initialize Vertex SDK for Python
Step12: Tutorial
Step13: Quick peek at your data
Step14: Create the Dataset
Step15: Create and run training pipeline
Step16: Run the training pipeline
Step17: Review model evaluation scores
Step18: Send a batch prediction request
Step19: Make the batch explanation request
Step20: Wait for completion of batch prediction job
Step21: Get the explanations
Step22: Cleaning up
|
11,607
|
<ASSISTANT_TASK:>
Python Code:
multi_language = app.loc[app['multiple languages'] == 'Y']
sin_language = app.loc[app['multiple languages'] == 'N']
multi_language['overall rating'].plot(kind = "density")
sin_language['overall rating'].plot(kind = "density")
plt.xlabel('Overall Rating')
plt.legend(labels = ['multiple languages','single language'], loc='upper right')
plt.title('Distribution of overall rating among apps with multiple/single languages')
plt.show()
import scipy.stats
multi_language = list(multi_language['overall rating'])
sin_language = list(sin_language['overall rating'])
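# Keep only apps with an overall rating greater than zero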
multiple = []
single = []
for each in multi_language:
if each > 0:
multiple.append(each)
for each in sin_language:
if each > 0:
single.append(each)
print(np.mean(multiple))
print(np.mean(single))
scipy.stats.ttest_ind(multiple, single, equal_var = False)
scipy.stats.f_oneway(multiple, single)
scipy.stats.kruskal(multiple, single)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <p>First, the data set is split into two parts: apps with multiple languages and apps with a single language. Density plots for the two subsets show that the overall rating of apps with multiple languages is generally higher than that of apps with a single language. Formal tests are still needed to confirm this.</p>
Step2: <p>I perform a t test here. We have two samples, apps with multiple languages and apps with a single language, so I test whether their mean overall ratings differ.</p>
Step3: <p>I also perform a one-way ANOVA test here.</p>
|
11,608
|
<ASSISTANT_TASK:>
Python Code:
print(__doc__)
import numpy as np
from skopt import Optimizer
from skopt.space import Real
from joblib import Parallel, delayed
# example objective taken from skopt
from skopt.benchmarks import branin
optimizer = Optimizer(
dimensions=[Real(-5.0, 10.0), Real(0.0, 15.0)],
random_state=1,
base_estimator='gp'
)
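# Ask for 4 candidate points per iteration and evaluate them in parallel with joblib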
for i in range(10):
x = optimizer.ask(n_points=4) # x is a list of n_points points
y = Parallel(n_jobs=4)(delayed(branin)(v) for v in x) # evaluate points in parallel
optimizer.tell(x, y)
# takes ~ 20 sec to get here
print(min(optimizer.yi)) # print the best objective found
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example
|
11,609
|
<ASSISTANT_TASK:>
Python Code:
import re
import tables
import matplotlib.pyplot as plt
import numpy as np
from astropy.time import Time
from astropy.table import Table
import Ska.engarchive.fetch_eng as fetch
from Ska.engarchive import fetch_sci
from Chandra.Time import DateTime
from Ska.Numpy import interpolate
from kadi import events
from sherpa import ui
from Ska.Matplotlib import plot_cxctime
%matplotlib inline
SIM_MM_TO_ARCSEC = 20.493
# Discrete jumps after 2012:001. Note also jumps at:
# '2008:293', # IU-reset
# '2010:151', # IU-reset
# '2011:190', # Safe mode
JUMPS = ['2015:006', # IU-reset
'2015:265', # Safe mode 6
'2016:064', # Safe mode 7
'2017:066', # NSM
'2018:285', # Safe mode 8
]
ltt_bads = events.ltt_bads(pad=(0, 200000))
normal_suns = events.normal_suns(pad=(0, 100000))
safe_suns = events.safe_suns(pad=(0, 86400 * 7))
# Aspect camera CCD temperature trend since 2010
t_ccd = fetch.Msid('aacccdpt', start='2010:001', stat='5min')
t_ccd.remove_intervals(ltt_bads | normal_suns | safe_suns)
plt.figure(figsize=(12, 4.5))
t_ccd.plot()
plt.ylabel('T_ccd (degF)')
plt.title('ACA CCD temperature')
plt.ylim(None, 20)
plt.grid()
# Get aspect solution DY and DZ (apparent SIM offsets via fid light positions)
# which are sampled at 1 ksec intervals and updated daily.
if 'adat' not in globals():
h5 = tables.open_file('/proj/sot/ska/data/aimpoint_mon/aimpoint_asol_values.h5')
adat = h5.root.data[:]
h5.close()
adat.sort(order=['time'])
# Filter bad data when asol DY and DZ are both exactly 0.0 (doesn't happen normally)
bad = (adat['dy'] == 0.0) & (adat['dz'] == 0.0)
adat = adat[~bad]
class AcaDriftModel(object):
"""
Class to encapsulate necessary data and compute the model of ACA
alignment drift. The object created from this class is called
by Sherpa as a function during fitting. This gets directed to
the __call__() method.
"""
YEAR0 = 2016.0 # Reference year for linear offset
def __init__(self, adat, start='2012:001', stop=None):
"""
adat is the raw data array containing aspect solution data
sampled at 1 ksec intervals.
"""
# Get the ACA CCD temperature telemetry
t_ccd = fetch.Msid('aacccdpt', stat='5min', start=start, stop=stop)
# Slice the ASOL data corresponding to available ACA CCD temps
i0, i1 = np.searchsorted(adat['time'], [t_ccd.times[0], t_ccd.times[-1]])
self.asol = adat[i0:i1].copy()
# Convert from mm to arcsec for convenience
self.asol['dy'] *= SIM_MM_TO_ARCSEC
self.asol['dz'] *= SIM_MM_TO_ARCSEC
self.times = self.asol['time']
self.years = Time(self.times, format='cxcsec').decimalyear
self.years_0 = self.years - self.YEAR0
# Resample CCD temp. data to the 1 ksec ASOL time stamps
self.t_ccd = interpolate(t_ccd.vals, t_ccd.times, self.asol['time'], method='linear')
# Get indices corresponding to jump times for later model computation
self.jump_times = Time(JUMPS).cxcsec
self.jump_idxs = np.searchsorted(self.times, self.jump_times)
def __call__(self, pars, years=None, t_ccd=None):
"""
Calculate model prediction for DY or DZ. Params are:
scale : scaling in arcsec / degF
offset : ACA CCD temperature corresponding to DY/Z = 0.0 arcsec
trend : Trend in DY/Z (arcsec / year)
jumpYYYYDDD : discrete jump in arcsec at date YYYY:DDD
"""
# Sherpa passes the parameters as a list
scale, offset, trend = pars[0:3]
jumps = pars[3:]
# Allow for passing in a different value for ACA CCD temperature
if t_ccd is None:
t_ccd = self.t_ccd
# Compute linear part of model
out = (t_ccd - offset) * scale + self.years_0 * trend
# Put in the step function jumps
for jump_idx, jump in zip(self.jump_idxs, jumps):
if jump_idx > 10 and jump_idx < len(out) - 10:
out[jump_idx:] += jump
return out
def fit_aimpoint_aca_temp(axis='dy', start='2012:180', stop=None):
"""Use Sherpa to fit the model parameters"""
# Create the object used to define the Sherpa user model, then
# load as a model and create parameters
aca_drift = AcaDriftModel(adat, start, stop)
ui.load_user_model(aca_drift, 'aca_drift_model')
parnames = ['scale', 'offset', 'trend']
parnames += ['jump{}'.format(re.sub(':', '', x)) for x in JUMPS]
ui.add_user_pars('aca_drift_model', parnames)
# Sherpa automatically puts 'aca_drift_model' into globals, but
# make this explicit so code linters don't complain.
aca_drift_model = globals()['aca_drift_model']
# Get the DY or DZ values and load as Sherpa data
dyz = aca_drift.asol[axis]
ui.load_arrays(1, aca_drift.years, dyz)
# Set the model and fit using Simplex (Nelder-Mead) minimization
ui.set_model(1, aca_drift_model)
ui.set_method('simplex')
ui.fit(1)
return aca_drift, ui.get_fit_results()
def plot_aimpoint_drift(axis, aca_drift, fit_results, start='2010:001', stop=None, plot_t_ccd=False):
"""Plot our results"""
y_start = DateTime(start).frac_year
y_stop = DateTime(stop).frac_year
years = aca_drift.years
ok = (years > y_start) & (years < y_stop)
years = aca_drift.years[ok]
times = aca_drift.times[ok]
# Call model directly with best-fit parameters to get model values
dyz_fit = aca_drift(fit_results.parvals)[ok]
# DY or DZ values from aspect solution
dyz = aca_drift.asol[axis][ok]
dyz_resid = dyz - dyz_fit
if plot_t_ccd:
plt.figure(figsize=(12, 4.5))
plt.subplot(1, 2, 1)
plot_cxctime(times, dyz, label='Data')
plot_cxctime(times, dyz_fit, 'r-', alpha=0.5, label='Fit')
plot_cxctime(times, dyz_resid, 'r-', label='Residual')
plt.title('Fit aspect solution {} to scaled ACA CCD temperature'
.format(axis.upper()))
plt.ylabel('{} (arcsec)'.format(axis.upper()))
plt.grid()
plt.legend(loc='upper left', framealpha=1.0)
if plot_t_ccd:
dat = fetch_sci.Msid('aacccdpt', start, stop, stat='5min')
plt.subplot(1, 2, 2)
dat.plot()
plt.grid()
plt.ylabel('AACCCDPT (degC)')
if isinstance(plot_t_ccd, tuple):
plt.ylim(*plot_t_ccd)
std = dyz_resid.std()
p1, p99 = np.percentile(dyz_resid, [1, 99])
print('Fit residual stddev = {:.2f} arcsec'.format(std))
print('Fit residual 99th - 1st percentile = {:.2f}'.format(p99 - p1))
aca_drift_dy, fit_dy = fit_aimpoint_aca_temp('dy')
plot_aimpoint_drift('dy', aca_drift_dy, fit_dy)
start = '2018:260'
stop = '2018:310'
plot_aimpoint_drift('dy', aca_drift_dy, fit_dy, start=start, stop=stop, plot_t_ccd=(-16, -8))
dyz_fit = aca_drift_dy(fit_dy.parvals, t_ccd=14) # degF = -10 C
plot_cxctime(aca_drift_dy.times, dyz_fit)
plt.title('DY drift model assuming constant ACA temperature')
plt.grid();
aca_drift_dz, fit_dz = fit_aimpoint_aca_temp('dz')
plot_aimpoint_drift('dz', aca_drift_dz, fit_dz)
start = '2018:260'
stop = '2018:310'
plot_aimpoint_drift('dz', aca_drift_dz, fit_dz, start=start, stop=stop, plot_t_ccd=(-16, -8))
dyz_fit = aca_drift_dz(fit_dz.parvals, t_ccd=14) # degF = -10 C
plot_cxctime(aca_drift_dz.times, dyz_fit)
plt.title('DZ drift model assuming constant ACA temperature')
plt.grid();
text =
obsid detector chipx chipy chip_id aca_offset_y aca_offset_z mean_t_ccd mean_date
----- -------- ------- ------- ------- ------------ ------------ ---------- ---------------------
21152 ACIS-S 210.0 520.0 7 -0.9 -22.67 -11.72 2018:307:18:07:54.816
20332 ACIS-I 970.0 975.0 3 -14.27 -21.89 -11.88 2018:308:04:03:46.816
21718 HRC-I 7590.0 7745.0 0 -13.39 -22.8 -11.53 2018:313:03:14:10.816
21955 HRC-S 2195.0 8915.0 2 -12.50 -22.57 -11.53 2018:305:16:28:34.816
obss = Table.read(text, format='ascii.fixed_width_two_line')
import sys
import os
sys.path.insert(0, os.path.join(os.environ['HOME'], 'git', 'chandra_aca'))
import chandra_aca
from chandra_aca import drift
from kadi import events
chandra_aca.test(get_version=True)
for obs in obss:
dwell = events.dwells.filter(obsid=obs['obsid'])[0]
t_ccd = fetch_sci.Msid('aacccdpt', dwell.start, dwell.stop, stat='5min')
mean_t_ccd = np.mean(t_ccd.vals)
offsets = drift.get_aca_offsets(obs['detector'], chip_id=obs['chip_id'],
chipx=obs['chipx'], chipy=obs['chipy'],
time=obs['mean_date'], t_ccd=mean_t_ccd)
print(obs)
print('T_ccd:', mean_t_ccd, ' Delta offsets Y Z:',
'%.2f' % (obs['aca_offset_y'] - offsets[0]),
'%.2f' % (obs['aca_offset_z'] - offsets[1]))
print()
from chandra_aca.tests.test_all import simple_test_aca_drift
dy, dz, times = simple_test_aca_drift()
plt.figure(figsize=(12, 4.5))
plt.subplot(1, 2, 1)
dy_fit = aca_drift_dy(fit_dy.parvals, t_ccd=14) # degF = -10 C
plot_cxctime(aca_drift_dy.times, dy_fit)
plt.title('DY drift model assuming constant ACA temperature')
plt.grid();
plt.subplot(1, 2, 2)
plot_cxctime(times, dy);
plt.grid()
plt.ylabel('DY (arcsec)');
plt.title('DY drift model from chandra_aca');
plt.figure(figsize=(12, 4.5))
plt.subplot(1, 2, 1)
dz_fit = aca_drift_dz(fit_dz.parvals, t_ccd=14) # degF = -10 C
plot_cxctime(aca_drift_dz.times, dz_fit)
plt.title('DZ drift model assuming constant ACA temperature')
plt.grid();
plt.subplot(1, 2, 2)
plot_cxctime(times, dz);
plt.grid()
plt.ylabel('DZ (arcsec)');
plt.title('DZ drift model from chandra_aca');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: Model for aimpoint drift (aka ACA alignment drift) 2018-11
Step6: Fit model coefficients for DY and plot results
Step7: Zoom in around the 2018
Step8: Fid light commanded vs observed angles
Step9: Fit model coefficients for DZ and plot results
Step10: Illustrate model behavior by assuming a constant ACA CCD temperature
Step12: Comparison to current flight model for NOV0518B
Step13: Comparison of local model prediction to implementation in chandra_aca
|
11,610
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
%matplotlib inline
df = pd.read_csv('911.csv')
df.info()
df.head(3)
df['zip'].value_counts().head(5)
df['twp'].value_counts().head(5)
df['title'].nunique()
df['Reason'] = df['title'].apply(lambda title: title.split(':')[0])
df['Reason'].value_counts()
sns.countplot(x='Reason',data=df,palette='viridis')
type(df['timeStamp'].iloc[0])
df['timeStamp'] = pd.to_datetime(df['timeStamp'])
df['Hour'] = df['timeStamp'].apply(lambda time: time.hour)
df['Month'] = df['timeStamp'].apply(lambda time: time.month)
df['Day of Week'] = df['timeStamp'].apply(lambda time: time.dayofweek)
dmap = {0:'Mon',1:'Tue',2:'Wed',3:'Thu',4:'Fri',5:'Sat',6:'Sun'}
df['Day of Week'] = df['Day of Week'].map(dmap)
sns.countplot(x='Day of Week',data=df,hue='Reason',palette='viridis')
# To relocate the legend
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
sns.countplot(x='Month',data=df,hue='Reason',palette='viridis')
# To relocate the legend
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
# It is missing some months! 9,10, and 11 are not there.
byMonth = df.groupby('Month').count()
byMonth.head()
# Could be any column
byMonth['twp'].plot()
sns.lmplot(x='Month',y='twp',data=byMonth.reset_index())
df['Date']=df['timeStamp'].apply(lambda t: t.date())
df.groupby('Date').count()['twp'].plot()
plt.tight_layout()
df[df['Reason']=='Traffic'].groupby('Date').count()['twp'].plot()
plt.title('Traffic')
plt.tight_layout()
df[df['Reason']=='Fire'].groupby('Date').count()['twp'].plot()
plt.title('Fire')
plt.tight_layout()
df[df['Reason']=='EMS'].groupby('Date').count()['twp'].plot()
plt.title('EMS')
plt.tight_layout()
dayHour = df.groupby(by=['Day of Week','Hour']).count()['Reason'].unstack()
dayHour.head()
plt.figure(figsize=(12,6))
sns.heatmap(dayHour,cmap='viridis')
sns.clustermap(dayHour,cmap='viridis')
dayMonth = df.groupby(by=['Day of Week','Month']).count()['Reason'].unstack()
dayMonth.head()
plt.figure(figsize=(12,6))
sns.heatmap(dayMonth,cmap='viridis')
sns.clustermap(dayMonth,cmap='viridis')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import visualization libraries and set %matplotlib inline.
Step2: Read in the csv file as a dataframe called df
Step3: Check the info() of the df
Step4: Check the head of df
Step5: Basic Questions
Step6: What are the top 5 townships (twp) for 911 calls?
Step7: Take a look at the 'title' column, how many unique title codes are there?
Step8: Creating new features
Step9: What is the most common Reason for a 911 call based off of this new column?
Step10: Now use seaborn to create a countplot of 911 calls by Reason.
Step11: Now let us begin to focus on time information. What is the data type of the objects in the timeStamp column?
Step12: You should have seen that these timestamps are still strings. Use pd.to_datetime to convert the column from strings to DateTime objects.
Step13: You can now grab specific attributes from a Datetime object by calling them. For example
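(a minimal sketch, assuming the converted df from the previous step; stamp is an illustrative name):
stamp = df['timeStamp'].iloc[0]
stamp.hour       # hour of the day (0-23)
stamp.month      # month number (1-12)
stamp.dayofweek  # day of the week as an integer, 0 = Monday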
Step14: Notice how the Day of Week is an integer 0-6. Use the .map() with this dictionary to map the actual string names to the day of the week
Step15: Now use seaborn to create a countplot of the Day of Week column with the hue based off of the Reason column.
Step16: Now do the same for Month
Step17: Did you notice something strange about the Plot?
Step18: You should have noticed it was missing some months. Let's see if we can fill in this information by plotting it in another way, possibly a simple line plot that fills in the missing months. In order to do this, we'll need to do some work with pandas...
Step19: Now create a simple plot off of the dataframe indicating the count of calls per month.
Step20: Now see if you can use seaborn's lmplot() to create a linear fit on the number of calls per month. Keep in mind you may need to reset the index to a column.
Step21: Create a new column called 'Date' that contains the date from the timeStamp column. You'll need to use apply along with the .date() method.
Step22: Now groupby this Date column with the count() aggregate and create a plot of counts of 911 calls.
Step23: Now recreate this plot but create 3 separate plots with each plot representing a Reason for the 911 call
Step24: Now let's move on to creating heatmaps with seaborn and our data. We'll first need to restructure the dataframe so that the columns become the Hours and the Index becomes the Day of the Week. There are lots of ways to do this, but I would recommend trying to combine groupby with an unstack method. Reference the solutions if you get stuck on this!
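A minimal sketch of that pattern (mirroring the dayHour cell above; pivot is an illustrative name):
# grouping by two keys gives a MultiIndex; unstack() moves the inner key ('Hour') into the columns
pivot = df.groupby(['Day of Week', 'Hour']).count()['Reason'].unstack()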
Step25: Now create a HeatMap using this new DataFrame.
Step26: Now create a clustermap using this DataFrame.
Step27: Now repeat these same plots and operations, for a DataFrame that shows the Month as the column.
|
11,611
|
<ASSISTANT_TASK:>
Python Code:
import requests

response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&market=US')
Lil_data = response.json()
Lil_data.keys()
Lil_data['artists'].keys()
Lil_artists = Lil_data['artists']['items']
for artist in Lil_artists:
print(artist['name'], artist['popularity'])
Lil_artists = Lil_data['artists']['items']
for artist in Lil_artists:
print(artist['name'], artist['popularity'])
#joining
if len(artist['genres']) == 0:
print("No genres listed")
else:
genres = ", ".join(artist['genres'])
print("Genres: ", genres)
Lil_artists = Lil_data['artists']['items']
Lil_genres_list = []
for genres in Lil_artists:
Lil_genres_list = genres["genres"] + Lil_genres_list
print(Lil_genres_list)
Genre_list = [[x,Lil_genres_list.count(x)] for x in set(Lil_genres_list)]
print(Genre_list)
sorted(Genre_list, key = lambda x: int(x[1]), reverse=True)
Sorted_by_occurences_Genre_list = sorted(Genre_list, key = lambda x: int(x[1]), reverse=True)
print("The most frequent genre of the musicians called Lil is", Sorted_by_occurences_Genre_list[0])
Lil_artists = Lil_data['artists']['items']
for artist in Lil_artists:
if artist['genres'] == []:
print(artist['name'], artist['popularity'], "No genres listed.")
else:
print(artist['name'], artist['popularity'], artist['genres'])
Lil_artists = Lil_data['artists']['items']
#Genres
all_genres = []
#The Loop
for artist in Lil_artists:
#print("All Genres we have heard of:", all_genres)
#print('Current artist has', artist['genres'])
all_genres = all_genres + artist['genres']
print(all_genres)
all_genres.count('dirty south rap')
# your_list
#This shows duplicates
for genre in all_genres:
genre_count = all_genres.count(genre)
print(genre, "shows up", genre_count, "times.")
#Unique list of all genres:
#Unique List = set(list_with_duplicates)
unique_genres = set(all_genres)
for genre in unique_genres:
genre_count = all_genres.count(genre)
print(genre, "shows up", genre_count, "times.")
#There is a library that comes with Python called collections
#Inside of this library is Counter
import collections
from collections import Counter
counts = Counter(all_genres)
counts.most_common(1)
#
print(counts['crunk'])
from collections import Counter
counts = Counter(all_genres)
counts.most_common(1)
response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&market=US')
small_data = response.json()
small_data['artists']
len(small_data['artists'])
print("test")
for artist in Lil_artists:
if artist['popularity'] >= 72 and artist['name'] != 'Lil Wayne':
print(artist['name'])
#Better solution:
most_popular_name = ""
most_popular_score = 0
for artist in Lil_artists:
#print("Comparing", artist['popularity'], 'to', most_popular_score)
if artist['popularity'] > most_popular_score:
print("checking for Lil Wayne")
if artist['name'] == 'Lil Wayne':
print('go away')
else:
#The change you are keeping track of
#a.k.a. what you are keeping track of
print('not Lil Wayne, updating our notebook')
most_popular_name = artist['name']
most_popular_score = artist['popularity']
print(most_popular_name, most_popular_score)
####### This doesn't work
#name = 'Lil Soma'
#target_score = 72
#1 INITIAL CONDITION
#second_best_artists = []
#second_best_artists = [Lil Yachty]
#Aggregation Problem
#When you're looping through a series of serious objects
#and sometimes you want to add one of those objects
#to a different list
#for artist in artists:
# print('Looking at', artist['name'])
#2 COndition
#wehen we want someone on the list
# if artist['popularity'] == 72:
# print('!!! The artist is popularity is 72.')
# second_best_artists.append(second_best_artists)
Lil_data['artists'].keys()
type(artist['followers'])
artist['followers']
Lil_artists = Lil_data['artists']['items']
List_of_Followers = []
for artist in Lil_artists:
List_of_Followers.append(artist['followers']['total'])
print(List_of_Followers)
List_of_Followers.sort(reverse=True)
print(List_of_Followers)
Highest_Number_of_Followers = (List_of_Followers[0])
print(Highest_Number_of_Followers)
for artist in Lil_artists:
if artist['followers']['total'] > List_of_Followers[0] and artist['name'] != 'Lil Wayne':
print(artist['name'], "has more followers than Lil Wayne.")
else:
print("Their are no artists with more followers that Lil Wayne.")
break
for artist in Lil_artists:
if artist['name'] == "Lil' Kim":
print(artist['popularity'])
for artist in Lil_artists:
if artist['popularity'] > 62:
print(artist['name'], artist['popularity'])
for artist in Lil_artists:
print(artist['name'], artist['id'])
response = requests.get('https://api.spotify.com/v1/artists/5einkgXXrjhfYCyac1FANB/top-tracks?country=US')
Lil_Scrappy_data = response.json()
type(Lil_Scrappy_data)
response = requests.get('https://api.spotify.com/v1/artists/5qK5bOC6wLtuLhG5KvU17c/top-tracks?country=US')
Lil_Mama_data = response.json()
type(Lil_Mama_data)
Lil_Scrappy_data.keys()
Lil_Mama_data.keys()
type(Lil_Scrappy_data.keys())
type(Lil_Mama_data.keys())
Scrappy_tracks = Lil_Scrappy_data['tracks']
for tracks in Scrappy_tracks:
print(tracks['name'])
Mama_tracks = Lil_Mama_data['tracks']
for tracks in Mama_tracks:
print(tracks['name'])
explicit_track_scrappy = 0
non_explicit_track_scrappy = 0
unknown_scrappy = 0
for tracks in Scrappy_tracks:
if tracks['explicit'] == True:
explicit_track_scrappy = explicit_track_scrappy + 1
elif tracks['explicit'] == False:
non_explicit_track_scrappy = non_explicit_track_scrappy + 1
else:
unknown_scrappy = unknown_scrappy + 1
explicit_track_pop_total = 0
non_explicit_track_pop_total = 0
for tracks in Scrappy_tracks:
if tracks['explicit'] == True:
explicit_track_pop_total = explicit_track_pop_total + tracks['popularity']
elif tracks['explicit'] == False:
non_explicit_track_pop_total = non_explicit_track_pop_total + tracks['popularity']
explicit_track_duration_total = 0
non_explicit_track_duration_total = 0
for tracks in Scrappy_tracks:
if tracks['explicit'] == True:
explicit_track_duration_total = explicit_track_duration_total + tracks['duration_ms']
elif tracks['explicit'] == False:
non_explicit_track_duration_total = non_explicit_track_duration_total + tracks['duration_ms']
print("The average rating of explicit songs by Lil Scrappy is", round(explicit_track_pop_total / explicit_track_scrappy), ".")
print("The average rating of non-explicit songs by Lil Scrappy is", round(non_explicit_track_pop_total / non_explicit_track_scrappy), ".")
print("The duration of explicit song material of Lil Scrappy is", round(explicit_track_duration_total / 1000), "minutes, and of non explicit material is", round(non_explicit_track_duration_total / 1000), "minutes.")
explicit_track_Mama = 0
non_explicit_track_Mama = 0
unknown = 0
for tracks in Mama_tracks:
if tracks['explicit'] == True:
explicit_track_Mama = explicit_track_Mama + 1
elif tracks['explicit'] == False:
non_explicit_track_Mama = non_explicit_track_Mama + 1
else:
unknown = unknown + 1
explicit_track_pop_total_Mama = 0
non_explicit_track_pop_total_Mama = 0
for tracks in Mama_tracks:
if tracks['explicit'] == True:
explicit_track_pop_total_Mama = explicit_track_pop_total_Mama + tracks['popularity']
elif tracks['explicit'] == False:
non_explicit_track_pop_total_Mama = non_explicit_track_pop_total_Mama + tracks['popularity']
explicit_track_duration_total_Mama = 0
non_explicit_track_duration_total_Mama = 0
for tracks in Mama_tracks:
if tracks['explicit'] == True:
explicit_track_duration_total_Mama = explicit_track_duration_total_Mama + tracks['duration_ms']
elif tracks['explicit'] == False:
non_explicit_track_duration_total_Mama = non_explicit_track_duration_total_Mama + tracks['duration_ms']
print("The average rating of explicit songs by Lil Mama is", round(explicit_track_pop_total_Mama / explicit_track_Mama), ".")
print("The average rating of non-explicit songs by Lil Mama is", round(non_explicit_track_pop_total_Mama / non_explicit_track_Mama), ".")
print("The duration of explicit song material of Lil Mama is", round(explicit_track_duration_total_Mama / 1000), "minutes, and of non explicit material is", round(non_explicit_track_duration_total_Mama / 1000), "minutes.")
response = requests.get('https://api.spotify.com/v1/search?query=Biggie&type=artist&limit=50&market=US')
Biggie_data = response.json()
response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&market=US')
Lil_data = response.json()
Biggie_artists = Biggie_data['artists']['total']
Lil_artists = Lil_data['artists']['total']
print("There are", Biggie_artists, "artists named Biggie on Spotify and", Lil_artists, "named Lil",)
Total_Download_Time_Biggie = Biggie_artists / 50 * 5
Total_Download_Time_Lil = Lil_artists / 50 * 5
print("It would take", round(Total_Download_Time_Biggie), "seconds to download all the Biggie artists and", round(Total_Download_Time_Lil), "seconds to download the Lil artists." )
Lil_artists_popularity = Lil_data['artists']['items']
popularity_total = 0
for popularity in Lil_artists_popularity:
popularity_total = popularity_total + popularity['popularity']
print("The average rating for the top 50 artists called Lil is:", round(popularity_total / 50))
Biggie_artists_popularity = Biggie_data['artists']['items']
Biggie_popularity_total = 0
for popularity2 in Biggie_artists_popularity:
Biggie_popularity_total = Biggie_popularity_total + popularity2['popularity']
print("The average rating for the top 50 artists called Biggie is:", round(Biggie_popularity_total / 49) )
Biggie_popularity = Biggie_data['artists']['items']
for artist in Biggie_popularity:
print(artist['name'], artist['popularity'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1) With "Lil Wayne" and "Lil Kim" there are a lot of "Lil" musicians. Do a search and print a list of 50 that are playable in the USA (or the country of your choice), along with their popularity score.
Step2: 2 a) What genres are most represented in the search results?
Step3: Counting the genres.
Step4: Sorting the genres by occurences.
Step5: 2 b) Edit your previous printout to also display a list of their genres in the format "GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed".
Step6: how to automate all of the results
Step7: 3 a) Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating.
Step8: 3 b) Is it the same artist who has the largest number of followers?
Step9: Creating a list of the popularity values, so we can sort them and say which one is the highest)
Step10: Deciding which one is highest
Step11: 4) Print a list of Lil's that are more popular than Lil' Kim.
Step12: 5) Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks.
Step13: 6 Will the world explode if a musicians swears? Get an average popularity for their explicit songs vs. their non-explicit songs. How many minutes of explicit songs do they have? Non-explicit?
Step14: And this is the same for Lil Mama
Step15: 7 a) Since we're talking about Lils, what about Biggies? How many total "Biggie" artists are there? How many total "Lil"s? If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?
Step16: 8) Out of the top 50 "Lil"s and the top 50 "Biggie"s, who is more popular on average?
|
11,612
|
<ASSISTANT_TASK:>
Python Code:
#Import the libraries we will use
import numpy as np
import pandas as pd
import seaborn as sns
#Show the versions used for each library
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
#Open the csv file with the sample data
datos = pd.read_csv('ensayo1.CSV')
%pylab inline
#Store the file columns we are going to work with in a list
columns = ['Diametro X','Diametro Y', 'RPM TRAC']
#Show a summary of the data obtained
datos[columns].describe()
#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]
datos.ix[:, "Diametro X":"Diametro Y"].plot(figsize=(16,10),ylim=(0.5,3)).hlines([1.85,1.65],0,3500,colors='r')
#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')
datos.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')
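# Filter out samples where either diameter reads below 0.9 mm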
datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]
#datos_filtrados.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')
ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']
ratio.describe()
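# Smooth the ratio with a 50-sample rolling window to expose the trend
# (pd.rolling_mean/pd.rolling_std are the pre-0.18 pandas API; newer pandas uses ratio.rolling(50).mean()/.std())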
rolling_mean = pd.rolling_mean(ratio, 50)
rolling_std = pd.rolling_std(ratio, 50)
rolling_mean.plot(figsize=(12,6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))
Th_u = 1.85
Th_d = 1.65
data_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |
(datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]
data_violations.describe()
data_violations.plot(subplots=True, figsize=(12,12))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plot both diameters and the tractor speed on the same graph
Step2: With this second approach the data have been stabilized. We will try to lower that percentage. As a second approach, we will modify the increments where the diameter falls between $1.80mm$ and $1.70 mm$, in both directions. (cases 3 to 6)
Step3: Data filtering
Step4: X/Y plot
Step5: Analyze the ratio data
Step6: Quality limits
|
11,613
|
<ASSISTANT_TASK:>
Python Code:
get_ipython().magic('load_ext autoreload')
get_ipython().magic('autoreload 2')
from IPython.display import display, clear_output
import glob
import logging
import numpy as np
import os
import cv2
logging.basicConfig(format=
"%(relativeCreated)12d [%(filename)s:%(funcName)20s():%(lineno)s] [%(process)d] %(message)s",
# filename="/tmp/caiman.log",
level=logging.WARNING)
import caiman as cm
from caiman.source_extraction import cnmf as cnmf
from caiman.paths import caiman_datadir
from caiman.utils.utils import download_demo
import matplotlib.pyplot as plt
import bokeh.plotting as bpl
bpl.output_notebook()
fnames=download_demo('blood_vessel_10Hz.mat')
reuse_model = False # set to True to re-use an existing ring model
path_to_model = None # specify a pre-trained model here if needed
gSig = (7, 7) # expected half size of neurons
gnb = 2 # number of background components for OnACID
init_batch = 500 # number of frames for initialization and training
params_dict = {'fnames': fnames,
'var_name_hdf5': 'Y', # name of variable inside mat file where the data is stored
'fr': 10, # frame rate (Hz)
'decay_time': 0.5, # approximate length of transient event in seconds
'gSig': gSig,
'p': 0, # order of AR indicator dynamics
'ring_CNN': True, # SET TO TRUE TO USE RING CNN
'min_SNR': 2.65, # minimum SNR for accepting new components
'SNR_lowest': 0.75, # reject components with SNR below this value
'use_cnn': False, # do not use CNN based test for components
'use_ecc': True, # test eccentricity
'max_ecc': 2.625, # reject components with eccentricity above this value
'rval_thr': 0.70, # correlation threshold for new component inclusion
'rval_lowest': 0.25, # reject components with corr below that value
'ds_factor': 1, # spatial downsampling factor (increases speed but may lose some fine structure)
'nb': gnb,
'motion_correct': False, # Flag for motion correction
'init_batch': init_batch, # number of frames for initialization (presumably from the first file)
'init_method': 'bare',
'normalize': False,
'expected_comps': 1100, # maximum number of expected components used for memory pre-allocation (exaggerate here)
'sniper_mode': False, # flag using a CNN to detect new neurons (o/w space correlation is used)
'dist_shape_update' : True, # flag for updating shapes in a distributed way
'min_num_trial': 5, # number of candidate components per frame
'epochs': 3, # number of total passes over the data
'stop_detection': True, # Run a last epoch without detecting new neurons
'K': 50, # initial number of components
'lr': 6e-4,
'lr_scheduler': [0.9, 6000, 10000],
'pct': 0.01,
'path_to_model': path_to_model, # where the ring CNN model is saved/loaded
'reuse_model': reuse_model # flag for re-using a ring CNN model
}
opts = cnmf.params.CNMFParams(params_dict=params_dict)
run_onacid = True
if run_onacid:
cnm = cnmf.online_cnmf.OnACID(params=opts)
cnm.fit_online()
fld_name = os.path.dirname(cnm.params.ring_CNN['path_to_model'])
res_name_nm = os.path.join(fld_name, 'onacid_results_nm.hdf5')
cnm.save(res_name_nm) # save initial results (without any postprocessing)
else:
fld_name = os.path.dirname(path_to_model)
res_name = os.path.join(fld_name, 'onacid_results.hdf5')
cnm = cnmf.online_cnmf.load_OnlineCNMF(res_name)
cnm.params.data['fnames'] = fnames
ds = 10 # plot every ds frames to make more manageable figures
init_batch = 500
dims, T = cnmf.utilities.get_file_size(fnames, var_name_hdf5='Y')
T = np.array(T).sum()
n_epochs = cnm.params.online['epochs']
T_detect = 1e3*np.hstack((np.zeros(init_batch), cnm.t_detect))
T_shapes = 1e3*np.hstack((np.zeros(init_batch), cnm.t_shapes))
T_online = 1e3*np.hstack((np.zeros(init_batch), cnm.t_online)) - T_detect - T_shapes
plt.figure()
plt.stackplot(np.arange(len(T_detect))[::ds], T_online[::ds], T_detect[::ds], T_shapes[::ds],
colors=['tab:red', 'tab:purple', 'tab:brown'])
plt.legend(labels=['process', 'detect', 'shapes'], loc=2)
plt.title('Processing time allocation')
plt.xlabel('Frame #')
plt.ylabel('Processing time [ms]')
max_val = 80
plt.ylim([0, max_val]);
plt.plot([init_batch, init_batch], [0, max_val], '--k')
for i in range(n_epochs - 1):
plt.plot([(i+1)*T, (i+1)*T], [0, max_val], '--k')
plt.xlim([0, n_epochs*T]);
plt.savefig(os.path.join(fld_name, 'time_per_frame_ds.pdf'), bbox_inches='tight', pad_inches=0)
init_batch = 500
plt.figure()
tc_init = cnm.t_init*np.ones(T*n_epochs)
ds = 10
#tc_mot = np.hstack((np.zeros(init_batch), np.cumsum(T_motion)/1000))
tc_prc = np.cumsum(T_online)/1000#np.hstack((np.zeros(init_batch), ))
tc_det = np.cumsum(T_detect)/1000#np.hstack((np.zeros(init_batch), ))
tc_shp = np.cumsum(T_shapes)/1000#np.hstack((np.zeros(init_batch), ))
plt.stackplot(np.arange(len(tc_init))[::ds], tc_init[::ds], tc_prc[::ds], tc_det[::ds], tc_shp[::ds],
colors=['g', 'tab:red', 'tab:purple', 'tab:brown'])
plt.legend(labels=['initialize', 'process', 'detect', 'shapes'], loc=2)
plt.title('Processing time allocation')
plt.xlabel('Frame #')
plt.ylabel('Processing time [s]')
max_val = (tc_prc[-1] + tc_det[-1] + tc_shp[-1] + cnm.t_init)*1.05
for i in range(n_epochs - 1):
plt.plot([(i+1)*T, (i+1)*T], [0, max_val], '--k')
plt.xlim([0, n_epochs*T]);
plt.ylim([0, max_val])
plt.savefig(os.path.join(fld_name, 'time_cumulative_ds.pdf'), bbox_inches='tight', pad_inches=0)
print('Cost of estimating model and running first epoch: {:.2f}s'.format(tc_prc[T] + tc_det[T] + tc_shp[T] + tc_init[T]))
# first compute background summary images
images = cm.load(fnames, var_name_hdf5='Y', subindices=slice(None, None, 2))
cn_filter, pnr = cm.summary_images.correlation_pnr(images, gSig=3, swap_dim=False)  # change swap_dim if the output looks weird; it is a problem with tifffile
plt.figure(figsize=(15, 7))
plt.subplot(1,2,1); plt.imshow(cn_filter); plt.colorbar()
plt.subplot(1,2,2); plt.imshow(pnr); plt.colorbar()
cnm.estimates.plot_contours_nb(img=cn_filter, idx=cnm.estimates.idx_components, line_color='white', thr=0.3)
cnm.estimates.nb_view_components(img=cn_filter, denoised_color='red')
save_file = True
if save_file:
from caiman.utils.nn_models import create_LN_model
model_LN = create_LN_model(images, shape=opts.data['dims'] + (1,), n_channels=opts.ring_CNN['n_channels'],
width=opts.ring_CNN['width'], use_bias=opts.ring_CNN['use_bias'], gSig=gSig[0],
use_add=opts.ring_CNN['use_add'])
model_LN.load_weights(cnm.params.ring_CNN['path_to_model'])
# Load the data in batches and save them
m = []
saved_files = []
batch_length = 256
for i in range(0, T, batch_length):
images = cm.load(fnames, var_name_hdf5='Y', subindices=slice(i, i + batch_length))
images_filt = np.squeeze(model_LN.predict(np.expand_dims(images, axis=-1)))
temp_file = os.path.join(fld_name, 'pfc_back_removed_' + format(i, '05d') + '.h5')
saved_files.append(temp_file)
m = cm.movie(np.maximum(images - images_filt, 0))
m.save(temp_file)
else:
saved_files = glob.glob(os.path.join(fld_name, 'pfc_back_removed_*'))
saved_files.sort()
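# Memory-map the background-subtracted movie in C order so the components can be evaluated against it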
fname_mmap = cm.save_memmap([saved_files], order='C', border_to_0=0)
Yr, dims, T = cm.load_memmap(fname_mmap)
images_mmap = Yr.T.reshape((T,) + dims, order='F')
cnm.params.merging['merge_thr'] = 0.7
cnm.estimates.c1 = np.zeros(cnm.estimates.A.shape[-1])
cnm.estimates.bl = np.zeros(cnm.estimates.A.shape[-1])
cnm.estimates.neurons_sn = np.zeros(cnm.estimates.A.shape[-1])
cnm.estimates.g = None #np.ones((cnm.estimates.A.shape[-1], 1))*.9
cnm.estimates.merge_components(Yr, cnm.params)
cnm.params.quality
cnm.estimates.evaluate_components(imgs=images_mmap, params=cnm.params)
cnm.estimates.plot_contours_nb(img=cn_filter, idx=cnm.estimates.idx_components, line_color='white')
cnm.estimates.nb_view_components(idx=cnm.estimates.idx_components, img=cn_filter)
cnmfe_results = download_demo('online_vs_offline.npz')
locals().update(np.load(cnmfe_results, allow_pickle=True))
A_patch_good = A_patch_good.item()
estimates_gt = cnmf.estimates.Estimates(A=A_patch_good, C=C_patch_good, dims=dims)
maxthr=0.01
cnm.estimates.A_thr=None
cnm.estimates.threshold_spatial_components(maxthr=maxthr)
estimates_gt.A_thr=None
estimates_gt.threshold_spatial_components(maxthr=maxthr*10)
min_size = np.pi*(gSig[0]/1.5)**2
max_size = np.pi*(gSig[0]*1.5)**2
ntk = cnm.estimates.remove_small_large_neurons(min_size_neuro=min_size, max_size_neuro=2*max_size)
gtk = estimates_gt.remove_small_large_neurons(min_size_neuro=min_size, max_size_neuro=2*max_size)
m1, m2, nm1, nm2, perf = cm.base.rois.register_ROIs(estimates_gt.A_thr[:, estimates_gt.idx_components],
cnm.estimates.A_thr[:, cnm.estimates.idx_components],
dims, align_flag=False, thresh_cost=.7, plot_results=True,
Cn=cn_filter, enclosed_thr=None)[:-1]
for k, v in perf.items():
print(k + ':', '%.4f' % v, end=' ')
res_name = os.path.join(fld_name, 'onacid_results.hdf5')
cnm.save(res_name)
import matplotlib.lines as mlines
lp, hp = np.nanpercentile(cn_filter, [5, 98])
A_onacid = cnm.estimates.A_thr.toarray().copy()
A_onacid /= A_onacid.max(0)
A_TP = estimates_gt.A[:, m1].toarray() #cnm.estimates.A[:, cnm.estimates.idx_components[m2]].toarray()
A_TP = A_TP.reshape(dims + (-1,), order='F').transpose(2,0,1)
A_FN = estimates_gt.A[:, nm1].toarray()
A_FN = A_FN.reshape(dims + (-1,), order='F').transpose(2,0,1)
A_FP = A_onacid[:,cnm.estimates.idx_components[nm2]]
A_FP = A_FP.reshape(dims + (-1,), order='F').transpose(2,0,1)
plt.figure(figsize=(15, 12))
plt.imshow(cn_filter, vmin=lp, vmax=hp, cmap='viridis')
plt.colorbar();
for aa in A_TP:
plt.contour(aa, [0.05], colors='k');
for aa in A_FN:
plt.contour(aa, [0.05], colors='r');
for aa in A_FP:
plt.contour(aa, [0.25], colors='w');
cl = ['k', 'r', 'w']
lb = ['both', 'CNMF-E only', 'ring CNN only']
day = [mlines.Line2D([], [], color=cl[i], label=lb[i]) for i in range(3)]
plt.legend(handles=day, loc=3)
plt.axis('off');
plt.margins(0, 0);
plt.savefig(os.path.join(fld_name, 'ring_CNN_contours_gSig_3.pdf'), bbox_inches='tight', pad_inches=0)
A_rej = cnm.estimates.A[:, cnm.estimates.idx_components_bad].toarray()
A_rej = A_rej.reshape(dims + (-1,), order='F').transpose(2,0,1)
plt.figure(figsize=(15, 15))
plt.imshow(cn_filter, vmin=lp, vmax=hp, cmap='viridis')
plt.title('Rejected Components')
for aa in A_rej:
plt.contour(aa, [0.05], colors='w');
from caiman.utils.nn_models import create_LN_model
model_LN = create_LN_model(images, shape=opts.data['dims'] + (1,), n_channels=opts.ring_CNN['n_channels'],
width=opts.ring_CNN['width'], use_bias=opts.ring_CNN['use_bias'], gSig=gSig[0],
use_add=opts.ring_CNN['use_add'])
model_LN.load_weights(cnm.params.ring_CNN['path_to_model'])
W = model_LN.get_weights()
plt.figure(figsize=(10, 10))
plt.subplot(2,2,1); plt.imshow(np.squeeze(W[0][:,:,:,0])); plt.colorbar(); plt.title('Ring Kernel 1')
plt.subplot(2,2,2); plt.imshow(np.squeeze(W[0][:,:,:,1])); plt.colorbar(); plt.title('Ring Kernel 2')
plt.subplot(2,2,3); plt.imshow(np.squeeze(W[-1][:,:,0])); plt.colorbar(); plt.title('Multiplicative Layer 1')
plt.subplot(2,2,4); plt.imshow(np.squeeze(W[-1][:,:,1])); plt.colorbar(); plt.title('Multiplicative Layer 2');
m1 = cm.load(fnames, var_name_hdf5='Y') # original data
m2 = cm.load(fname_mmap) # background subtracted data
m3 = m1 - m2 # estimated background
m4 = cm.movie(cnm.estimates.A[:,cnm.estimates.idx_components].dot(cnm.estimates.C[cnm.estimates.idx_components])).reshape(dims + (T,)).transpose(2,0,1)
# estimated components
nn = 0.01
mm = 1 - nn/4 # normalize movies by quantiles
m1 = (m1 - np.quantile(m1[:1000], nn))/(np.quantile(m1[:1000], mm) - np.quantile(m1[:1000], nn))
m2 = (m2 - np.quantile(m2[:1000], nn))/(np.quantile(m2[:1000], mm) - np.quantile(m2[:1000], nn))
m3 = (m3 - np.quantile(m3[:1000], nn))/(np.quantile(m3[:1000], mm) - np.quantile(m3[:1000], nn))
m4 = (m4 - np.quantile(m4[:1000], nn))/(np.quantile(m4[:1000], mm) - np.quantile(m4[:1000], nn))
m = cm.concatenate((cm.concatenate((m1.transpose(0,2,1), m3.transpose(0,2,1)), axis=2),
cm.concatenate((m2.transpose(0,2,1), m4), axis=2)), axis=1)
m[:3000].play(magnification=2, q_min=1, plot_text=True,
save_movie=True, movie_name=os.path.join(fld_name, 'movie.avi'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First specify the data file(s) to be analyzed
Step2: Set up some parameters
Step3: Now run the Ring-CNN + CaImAn online algorithm (OnACID).
Step4: Check speed
Step5: Do some initial plotting
Step6: View components
Step7: Load ring model to filter the data
Step8: Merge components
Step9: Evaluate components and compare again
Step10: Compare against CNMF-E results
Step11: Print performance results
Step12: Save the results
Step13: Make some plots
Step14: Show the learned filters
Step15: Make a movie
|
11,614
|
<ASSISTANT_TASK:>
Python Code:
import psi4
import forte
import forte.utils
xyz =
0 1
C -1.9565506735 0.4146729724 0.0000000000
H -0.8865506735 0.4146729724 0.0000000000
H -2.3132134555 1.1088535618 -0.7319870007
H -2.3132183114 0.7015020975 0.9671697106
H -2.3132196063 -0.5663349614 -0.2351822830
symmetry c1
E_scf, wfn = forte.utils.psi4_scf(xyz, 'sto-3g', 'rhf', functional = 'hf')
from forte import forte_options
options = forte.forte_options
mos_spaces = {'RESTRICTED_DOCC' : [5], 'ACTIVE' : [0]}
nmopi = wfn.nmopi()
point_group = wfn.molecule().point_group().symbol()
mo_space_info = forte.make_mo_space_info_from_map(nmopi,point_group,mos_spaces,[])
scf_info = forte.SCFInfo(wfn)
ints = forte.make_ints_from_psi4(wfn, options, mo_space_info)
localizer = forte.Localize(forte.forte_options, ints, mo_space_info)
localizer.set_orbital_space(['RESTRICTED_DOCC'])
localizer.compute_transformation()
Ua = localizer.get_Ua()
Ca = wfn.Ca()
Ca_local = psi4.core.doublet(Ca,Ua,False,False)
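# wfn.Ca() returns a pointer to psi4's orbital coefficient matrix, so copy() writes the localized orbitals back in place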
wfn.Ca().copy(Ca_local)
# make cube files
# forte.utils.psi4_cubeprop(wfn,path='cubes',nocc=5,nvir=0, load=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: We will start by generating SCF orbitals for methane via psi4 using the forte.utils.psi4_scf function
Step3: Next we start forte, setup the MOSpaceInfo object specifying the number of orbitals, and start the integral object
Step4: Localizing the orbitals
Step5: Once the Localize object is created, we specify which orbital space we want to be localized and compute the unitary transformation
Step6: From the localizer we can then extract the unitary transformation matrix that correspond to the orbital localizaition. Here we get the alpha part
Step7: We are now ready to read the MOs from psi4 and transform them by computing the product $\mathbf{C}' = \mathbf{C} \mathbf{U}$. We then place the orbitals back into psi4 by calling the copy function on wfn.Ca(). We have to do this because this function returns a smart pointer to the matrix that holds $\mathbf{C}$. If we assigned Ca_local via wfn.Ca() = Ca_local we would not change the orbitals in psi4.
Step8: Lastly, we can optionally generate cube files for all the occupied orbitals and visualize them. The resulting orbitals consist of a core orbital (which cannot be seen) and four localized C-H $\sigma$ bond orbitals
|
11,615
|
<ASSISTANT_TASK:>
Python Code:
import base64
import datetime
import logging
import os
import json
import pandas as pd
import time
import sys
import grpc
import google.auth
import numpy as np
import tensorflow.io as tf_io
from google.cloud import bigquery
from typing import List, Optional, Text, Tuple
ANN_GRPC_ENDPOINT_STUB = 'ann_grpc'
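# make the locally generated gRPC stubs (match_pb2, match_pb2_grpc) importable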
if ANN_GRPC_ENDPOINT_STUB not in sys.path:
sys.path.append(ANN_GRPC_ENDPOINT_STUB)
import ann_grpc.match_pb2_grpc as match_pb2_grpc
import ann_grpc.match_pb2 as match_pb2
PROJECT_ID = 'jk-mlops-dev' # <-CHANGE THIS
PROJECT_NUMBER = '895222332033' # <-CHANGE THIS
BQ_DATASET_NAME = 'song_embeddings' # <- CHANGE THIS
BQ_LOCATION = 'US' # <- CHANGE THIS
DATA_LOCATION = 'gs://jk-ann-staging/embeddings' # <-CHANGE THIS
VPC_NAME = 'default' # <-CHANGE THIS
EMBEDDINGS_TABLE = 'item_embeddings'
REGION = 'us-central1'
MATCH_SERVICE_PORT = 10000
client = bigquery.Client(project=PROJECT_ID, location=BQ_LOCATION)
query = f
SELECT COUNT(*) embedding_count
FROM {BQ_DATASET_NAME}.item_embeddings;
query_job = client.query(query)
query_job.to_dataframe()
file_name_pattern = 'embedding-*.json'
destination_uri = f'{DATA_LOCATION}/{file_name_pattern}'
table_id = 'item_embeddings'
destination_format = 'NEWLINE_DELIMITED_JSON'
dataset_ref = bigquery.DatasetReference(PROJECT_ID, BQ_DATASET_NAME)
table_ref = dataset_ref.table(table_id)
job_config = bigquery.job.ExtractJobConfig()
job_config.destination_format = bigquery.DestinationFormat.NEWLINE_DELIMITED_JSON
extract_job = client.extract_table(
table_ref,
destination_uris=destination_uri,
job_config=job_config,
#location=BQ_LOCATION,
)
extract_job.result()
! gsutil ls {DATA_LOCATION}
import datetime
import logging
import json
import time
import google.auth
class ANNClient(object):
Base ANN Service client.
def __init__(self, project_id, project_number, region):
credentials, _ = google.auth.default()
self.authed_session = google.auth.transport.requests.AuthorizedSession(credentials)
self.ann_endpoint = f'{region}-aiplatform.googleapis.com'
self.ann_parent = f'https://{self.ann_endpoint}/v1alpha1/projects/{project_id}/locations/{region}'
self.project_id = project_id
self.project_number = project_number
self.region = region
def wait_for_completion(self, operation_id, message, sleep_time):
Waits for a completion of a long running operation.
api_url = f'{self.ann_parent}/operations/{operation_id}'
start_time = datetime.datetime.utcnow()
while True:
response = self.authed_session.get(api_url)
if response.status_code != 200:
raise RuntimeError(response.json())
if 'done' in response.json().keys():
logging.info('Operation completed!')
break
elapsed_time = datetime.datetime.utcnow() - start_time
logging.info('{}. Elapsed time since start: {}.'.format(
message, str(elapsed_time)))
time.sleep(sleep_time)
return response.json()['response']
class IndexClient(ANNClient):
Encapsulates a subset of control plane APIs
that manage ANN indexes.
def __init__(self, project_id, project_number, region):
super().__init__(project_id, project_number, region)
def create_index(self, display_name, description, metadata):
Creates an ANN Index.
api_url = f'{self.ann_parent}/indexes'
request_body = {
'display_name': display_name,
'description': description,
'metadata': metadata
}
response = self.authed_session.post(api_url, data=json.dumps(request_body))
if response.status_code != 200:
raise RuntimeError(response.text)
operation_id = response.json()['name'].split('/')[-1]
return operation_id
def list_indexes(self, display_name=None):
Lists all indexes with a given display name or
all indexes if the display_name is not provided.
if display_name:
api_url = f'{self.ann_parent}/indexes?filter=display_name="{display_name}"'
else:
api_url = f'{self.ann_parent}/indexes'
response = self.authed_session.get(api_url).json()
return response['indexes'] if response else []
def delete_index(self, index_id):
Deletes an ANN index.
api_url = f'{self.ann_parent}/indexes/{index_id}'
response = self.authed_session.delete(api_url)
if response.status_code != 200:
raise RuntimeError(response.text)
class IndexDeploymentClient(ANNClient):
Encapsulates a subset of control plane APIs
that manage ANN endpoints and deployments.
def __init__(self, project_id, project_number, region):
super().__init__(project_id, project_number, region)
def create_endpoint(self, display_name, vpc_name):
Creates an ANN endpoint.
api_url = f'{self.ann_parent}/indexEndpoints'
network_name = f'projects/{self.project_number}/global/networks/{vpc_name}'
request_body = {
'display_name': display_name,
'network': network_name
}
response = self.authed_session.post(api_url, data=json.dumps(request_body))
if response.status_code != 200:
raise RuntimeError(response.text)
operation_id = response.json()['name'].split('/')[-1]
return operation_id
def list_endpoints(self, display_name=None):
Lists all ANN endpoints with a given display name or
all endpoints in the project if the display_name is not provided.
if display_name:
api_url = f'{self.ann_parent}/indexEndpoints?filter=display_name="{display_name}"'
else:
api_url = f'{self.ann_parent}/indexEndpoints'
response = self.authed_session.get(api_url).json()
return response['indexEndpoints'] if response else []
def delete_endpoint(self, endpoint_id):
Deletes an ANN endpoint.
api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}'
response = self.authed_session.delete(api_url)
if response.status_code != 200:
raise RuntimeError(response.text)
return response.json()
def create_deployment(self, display_name, deployment_id, endpoint_id, index_id):
Deploys an ANN index to an endpoint.
api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}:deployIndex'
index_name = f'projects/{self.project_number}/locations/{self.region}/indexes/{index_id}'
request_body = {
'deployed_index': {
'id': deployment_id,
'index': index_name,
'display_name': display_name
}
}
response = self.authed_session.post(api_url, data=json.dumps(request_body))
if response.status_code != 200:
raise RuntimeError(response.text)
operation_id = response.json()['name'].split('/')[-1]
return operation_id
def get_deployment_grpc_ip(self, endpoint_id, deployment_id):
Returns a private IP address for a gRPC interface to
an Index deployment.
api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}'
response = self.authed_session.get(api_url)
if response.status_code != 200:
raise RuntimeError(response.text)
endpoint_ip = None
if 'deployedIndexes' in response.json().keys():
for deployment in response.json()['deployedIndexes']:
if deployment['id'] == deployment_id:
endpoint_ip = deployment['privateEndpoints']['matchGrpcAddress']
return endpoint_ip
def delete_deployment(self, endpoint_id, deployment_id):
Undeployes an index from an endpoint.
api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}:undeployIndex'
request_body = {
'deployed_index_id': deployment_id
}
response = self.authed_session.post(api_url, data=json.dumps(request_body))
if response.status_code != 200:
raise RuntimeError(response.text)
return response
index_client = IndexClient(PROJECT_ID, PROJECT_NUMBER, REGION)
deployment_client = IndexDeploymentClient(PROJECT_ID, PROJECT_NUMBER, REGION)
indexes = index_client.list_indexes()
if not indexes:
print('There are not any indexes registered with the service')
for index in indexes:
print(index['name'])
index_display_name = 'Song embeddings'
index_description = 'Song embeddings created BQML Matrix Factorization model'
index_metadata = {
'contents_delta_uri': DATA_LOCATION,
'config': {
'dimensions': 50,
'approximate_neighbors_count': 50,
'distance_measure_type': 'DOT_PRODUCT_DISTANCE',
'feature_norm_type': 'UNIT_L2_NORM',
'tree_ah_config': {
'child_node_count': 1000,
'max_leaves_to_search': 100
}
}
}
logging.getLogger().setLevel(logging.INFO)
operation_id = index_client.create_index(index_display_name,
index_description,
index_metadata)
response = index_client.wait_for_completion(operation_id, 'Creating index', 20)
print(response)
indexes = index_client.list_indexes(index_display_name)
for index in indexes:
print(index['name'])
if indexes:
index_id = index['name'].split('/')[-1]
print(f'Index: {index_id} will be used for deployment')
else:
print('No indexes available for deployment')
endpoints = deployment_client.list_endpoints()
if not endpoints:
print('There are not any endpoints registered with the service')
for endpoint in endpoints:
print(endpoint['name'])
deployment_display_name = 'Song embeddings endpoint'
operation_id = deployment_client.create_endpoint(deployment_display_name, VPC_NAME)
response = index_client.wait_for_completion(operation_id, 'Waiting for endpoint', 10)
print(response)
endpoints = deployment_client.list_endpoints(deployment_display_name)
for endpoint in endpoints:
print(endpoint['name'])
if endpoints:
endpoint_id = endpoint['name'].split('/')[-1]
print(f'Endpoint: {endpoint_id} will be used for deployment')
else:
print('No endpoints available for deployment')
deployment_display_name = 'Song embeddings deployed index'
deployed_index_id = 'songs_embeddings_deployed_index'
operation_id = deployment_client.create_deployment(deployment_display_name,
deployed_index_id,
endpoint_id,
index_id)
response = index_client.wait_for_completion(operation_id, 'Waiting for deployment', 10)
print(response)
deployed_index_ip = deployment_client.get_deployment_grpc_ip(endpoint_id, deployed_index_id)
endpoint = f'{deployed_index_ip}:{MATCH_SERVICE_PORT}'
print(f'gRPC endpoint for the: {deployed_index_id} deployment is: {endpoint}')
class MatchService(object):
This is a wrapper around Online Querying gRPC interface.
def __init__(self, endpoint, deployed_index_id):
self.endpoint = endpoint
self.deployed_index_id = deployed_index_id
def single_match(
self,
embedding: List[float],
num_neighbors: int) -> List[Tuple[str, float]]:
Requests a match for a single embedding.
match_request = match_pb2.MatchRequest(deployed_index_id=self.deployed_index_id,
float_val=embedding,
num_neighbors=num_neighbors)
with grpc.insecure_channel(self.endpoint) as channel:
stub = match_pb2_grpc.MatchServiceStub(channel)
response = stub.Match(match_request)
return [(neighbor.id, neighbor.distance) for neighbor in response.neighbor]
def batch_match(
self,
embeddings: List[List[float]],
num_neighbors: int) -> List[List[Tuple[str, float]]]:
Requests matches ofr a list of embeddings.
match_requests = [
match_pb2.MatchRequest(deployed_index_id=self.deployed_index_id,
float_val=embedding,
num_neighbors=num_neighbors)
for embedding in embeddings]
batches_per_index = [
match_pb2.BatchMatchRequest.BatchMatchRequestPerIndex(
deployed_index_id=self.deployed_index_id,
requests=match_requests)]
batch_match_request = match_pb2.BatchMatchRequest(
requests=batches_per_index)
with grpc.insecure_channel(self.endpoint) as channel:
stub = match_pb2_grpc.MatchServiceStub(channel)
response = stub.BatchMatch(batch_match_request)
matches = []
for batch_per_index in response.responses:
for match in batch_per_index.responses:
matches.append(
[(neighbor.id, neighbor.distance) for neighbor in match.neighbor])
return matches
match_service = MatchService(endpoint, deployed_index_id)
%%bigquery df_embeddings
SELECT id, embedding
FROM `recommendations.item_embeddings`
LIMIT 10
sample_embeddings = [list(embedding) for embedding in df_embeddings['embedding']]
sample_embeddings[0]
%%time
single_match = match_service.single_match(sample_embeddings[0], 10)
single_match
%%time
batch_match = match_service.batch_match(sample_embeddings[0:5], 3)
batch_match
for endpoint in deployment_client.list_endpoints():
endpoint_id = endpoint['name'].split('/')[-1]
if 'deployedIndexes' in endpoint.keys():
for deployment in endpoint['deployedIndexes']:
print(' Deleting index deployment: {} in the endpoint: {} '.format(deployment['id'], endpoint_id))
deployment_client.delete_deployment(endpoint_id, deployment['id'])
print('Deleting endpoint: {}'.format(endpoint['name']))
deployment_client.delete_endpoint(endpoint_id)
for index in index_client.list_indexes():
index_id = index['name'].split('/')[-1]
print('Deleting index: {}'.format(index['name']))
index_client.delete_index(index_id)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the experimental release, the Online Querying API of the ANN service is exposed through the gRPC interface. The ann_grpc folder contains the gRPC client stub used to interface with the API.
Step2: Configure GCP environment
Step4: Exporting the embeddings
Step5: Export the embeddings
Step6: Inspect the extracted files.
Step20: Creating an ANN index deployment
Step21: Create an ANN index
Step22: Configure and create a new index based on the exported embeddings
Step23: Verify that the index was created
Step24: Create the index deployment
Step25: Create an index endpoint
Step26: Verify that the endpoint was created
Step27: Deploy the index to the endpoint
Step28: Deploy the index
Step29: Querying the ANN service
Step33: Create a helper wrapper around the Match Service gRPC API.
Step34: Prepare sample data
Step35: Run a single match query
Step36: Run a batch match query
Step37: Clean up
Step38: Delete indexes
|
11,616
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Test Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.shuffle = shuffle
def batches(self, batch_size):
if self.shuffle:
idx = np.arange(len(dataset.train_x))
np.random.shuffle(idx)
self.train_x = self.train_x[idx]
self.train_y = self.train_y[idx]
n_batches = len(self.train_y)//batch_size
for ii in range(0, len(self.train_y), batch_size):
x = self.train_x[ii:ii+batch_size]
y = self.train_y[ii:ii+batch_size]
yield self.scaler(x), y  # y holds digit labels, not images, so leave it unscaled
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
x1 = tf.layers.dense(z, 4*4*512)
# Reshape it to start the convolutional stack
x1 = tf.reshape(x1, (-1, 4, 4, 512))
x1 = tf.layers.batch_normalization(x1, training=training)
x1 = tf.maximum(alpha * x1, x1)
# 4x4x512 now
x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same')
x2 = tf.layers.batch_normalization(x2, training=training)
x2 = tf.maximum(alpha * x2, x2)
# 8x8x256 now
x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same')
x3 = tf.layers.batch_normalization(x3, training=training)
x3 = tf.maximum(alpha * x3, x3)
# 16x16x128 now
# Output layer
logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same')
# 32x32x3 now
out = tf.tanh(logits)
return out
def discriminator(x, reuse=False, alpha=0.2):
with tf.variable_scope('discriminator', reuse=reuse):
# Input layer is 32x32x3
x1 = tf.layers.conv2d(x, 64, 5, strides=2, padding='same')
relu1 = tf.maximum(alpha * x1, x1)
# 16x16x64
x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same')
bn2 = tf.layers.batch_normalization(x2, training=True)
relu2 = tf.maximum(alpha * bn2, bn2)
# 8x8x128
x3 = tf.layers.conv2d(relu2, 256, 5, strides=2, padding='same')
bn3 = tf.layers.batch_normalization(x3, training=True)
relu3 = tf.maximum(alpha * bn3, bn3)
# 4x4x256
# Flatten it
flat = tf.reshape(relu3, (-1, 4*4*256))
logits = tf.layers.dense(flat, 1)
out = tf.sigmoid(logits)
return out, logits
def model_loss(input_real, input_z, output_dim, alpha=0.2):
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
g_model = generator(input_z, output_dim, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
def model_opt(d_loss, g_loss, learning_rate, beta1):
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
# Get weights and bias to update
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
class GAN:
def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.input_real, self.input_z = model_inputs(real_size, z_size)
        self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,
                                              real_size[2], alpha=alpha)
        self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box-forced')
im = ax.imshow(img, aspect='equal')
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.uniform(-1, 1, size=(72, z_size))
samples, losses = [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in dataset.batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})
_ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})
if steps % print_every == 0:
                        # Every print_every steps, get the losses and print them out
train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})
train_loss_g = net.g_loss.eval({net.input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % show_every == 0:
gen_samples = sess.run(
generator(net.input_z, 3, reuse=True, training=False),
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 6, 12, figsize=figsize)
plt.show()
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return losses, samples
real_size = (32,32,3)
z_size = 100
learning_rate = 0.0002
batch_size = 128
epochs = 25
alpha = 0.2
beta1 = 0.5
# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
_ = view_samples(-1, samples, 6, 12, figsize=(10,5))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the data
Step2: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
Step3: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
Step4: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
Step5: Network Inputs
Step6: Generator
Step7: Discriminator
Step9: Model Loss
Step11: Optimizers
Step12: Building the model
Step13: Here is a function for displaying generated images.
Step14: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt (a sketch follows this list).
Step15: Hyperparameters
|
11,617
|
<ASSISTANT_TASK:>
Python Code:
from stingray.utils import create_window
from scipy.fftpack import fft, fftshift, fftfreq
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
N = 100
window = create_window(N)
plt.plot(window)
plt.title("Uniform window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window, nfft) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Uniform window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
N = 100
window = create_window(N, window_type='parzen')
plt.plot(window)
plt.title("Parzen window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window,nfft ) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Parzen window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
N = 50
window = create_window(N, window_type='hamming')
plt.plot(window)
plt.title("Hamming window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window,nfft ) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Hamming window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
N = 50
window = create_window(N, window_type='hanning')
plt.plot(window)
plt.title("Hanning window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window,nfft ) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Hanning window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
N = 50
window = create_window(N, window_type='triangular')
plt.plot(window)
plt.title("Triangular window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window,nfft ) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Triangular window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
N = 50
window = create_window(N, window_type='welch')
plt.plot(window)
plt.title("Welch window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window,nfft ) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Welch window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
N = 50
window = create_window(N, window_type='blackmann')
plt.plot(window)
plt.title("Blackmann window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window,nfft ) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Blackmann window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
N = 50
window = create_window(N, window_type='flat-top')
plt.plot(window)
plt.title("Flat-top window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window,nfft ) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Flat-top window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The create_window function in stingray.utils takes two parameters: the number of samples N and the window_type (default 'uniform'). A reusable plotting helper is sketched after this list.
Step2: Parzen Window
Step3: Hamming Window
Step4: Hanning Window
Step5: Triangular Window
Step6: Welch Window
Step7: Blackmann's Window
Step8: Flat Top Window
|
11,618
|
<ASSISTANT_TASK:>
Python Code:
# importing code modules
import json
import ijson
from ijson import items
import pprint
from tabulate import tabulate
import matplotlib.pyplot as plt
import re
import csv
import sys
import codecs
import nltk
import nltk.collocations
import collections
import statistics
from nltk.metrics.spearman import *
from nltk.collocations import *
from nltk.stem import WordNetLemmatizer
# This is a function for reading the contents of files
def read_file(filename):
"Read the contents of FILENAME and return as a string."
infile = codecs.open(filename, 'r', 'utf-8')
contents = infile.read()
infile.close()
return contents
# loading the JSON file
filename = "../scrapy/hearing_result6.json"
# loading the stopwords file
stopwords = read_file('cornellStopWords.txt')
customStopwords = stopwords.split()
# reads the file and assigns the keys and values to a Python dictionary structure
with open(filename, 'r') as f:
objects = ijson.items(f, 'item')
file = list(objects)
# checks to see how many records we have
print(len(file))
# commenting this out to make the github notebook more readable.
# prints all content in a single record. Changing the number shows a different record
file[0]
# iterates through each record in the file
for row in file:
# prints the title of each record and its url
print(row['title'], ":", row['url'])
# joins each record's list of text items into a single string
joined_text = []
for row in file:
joined_text.append(' '.join(row['text']))
# shows the text. Changing the number displays a different record...
# ...changing/removing the second number limits/expands the text shown.
print(joined_text[5][:750])
# splits the text string in each record into a list of separate words
token_joined = []
for words in joined_text:
# splits the text into a list of words
text = words.split()
# makes all words lowercase
clean = [w.lower() for w in text if w.isalpha()]
# applies stopword removal
text = [w for w in clean if w not in customStopwords]
token_joined.append(text)
#for title,word in zip(file,token_joined):
# print(title['title'],"guarantee:", word.count('guarantee'), "guarantees:", \
# word.count('guarantees'), "guaranteed:", word.count('guaranteed'))
for title,word in zip(file,token_joined):
print(title['title'],"service:", word.count('service'),"services:", word.count('services'))
# splits the text from the record into a list of individual words
words = joined_text[0].split()
#assigns NLTK functionality to the text
text = nltk.Text(words)
# prints a concordance output for the selected word (shown in green)
print(text.concordance('services', lines=25))
#creates a new file that can be written by the print queue
fileconcord = codecs.open('April11_service_concord.txt', 'w', 'utf-8')
#makes a copy of the empty print queue, so that we can return to it at the end of the function
tmpout = sys.stdout
#stores the text in the print queue
sys.stdout = fileconcord
#generates and prints the concordance, the number pertains to the total number of bytes per line
text.concordance("service", 79, sys.maxsize)
#closes the file
fileconcord.close()
#returns the print queue to an empty state
sys.stdout = tmpout
# shows the text list for a given record. Changing the first number displays a...
# ...different record, changing/removing the second number limits/expands the text shown
print(token_joined[5][:50])
# creates a variable for the lemmatizing function
wnl = WordNetLemmatizer()
# lemmatizes all of the verbs
lemm = []
for record in token_joined:
for word in record:
lemm.append(wnl.lemmatize(word, 'v'))
'''
lemm = []
for word in token_joined[13]:
lemm.append(wnl.lemmatize(word, 'v'))
'''
# lemmatizes all of the nouns
lems = []
for word in lemm:
lems.append(wnl.lemmatize(word, 'n'))
# just making sure the lemmatizer has worked
#print("guarantee:", lems.count('guarantee'), "guarantees:", \
# lems.count('guarantees'), "guaranteed:", lems.count('guaranteed'))
print("service:", lems.count('service'), lems.count('services'))
# counting the number of words in each record
for name, each in zip(file,token_joined):
print(name['title'], ":",len(each), "words")
docfreq = []
for words in token_joined:
docfreq.append(nltk.FreqDist(words))
for name, words in zip(file, docfreq):
print(name['title'], ":", words.most_common(5))
# prints the 10 most common bigrams
colText = nltk.Text(lems)
colText.collocations(10)
# creates a list of bigrams (ngrams of 2), printing the first 5
colBigrams = list(nltk.ngrams(colText, 2))
colBigrams[:5]
# error checking. There should be one less bigram than total words
print("Number of words:", len(lems))
print("Number of bigrams:", len(colBigrams))
# frequency plot with stopwords removed
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 10.0)
fd = nltk.FreqDist(colText)
fd.plot(25)
# loads bigram code from NLTK
bigram_measures = nltk.collocations.BigramAssocMeasures()
# bigrams with a window size of 2 words
finder = BigramCollocationFinder.from_words(lems, window_size = 2)
# ngrams with 'word of interest' as a member
word_filter = lambda *w: 'service' not in w
# only bigrams that contain the 'word of interest'
finder.apply_ngram_filter(word_filter)
# filter results based on statistical test
# calulates the raw frequency as an actual number and percentage of total words
act = finder.ngram_fd.items()
raw = finder.score_ngrams(bigram_measures.raw_freq)
# log-likelihood ratio
log = finder.score_ngrams(bigram_measures.likelihood_ratio)
# prints list of results.
print(tabulate(log, headers = ["Collocate", "Log-Likelihood"], floatfmt=".3f", \
numalign="left"))
# prints list of results.
print(tabulate(act, headers = ["Collocate", "Actual"], floatfmt=".3f", \
numalign="left"))
with open('digital-literacy_collocate_Act.csv','w') as f:
w = csv.writer(f)
w.writerows(act)
##################################################################
############### sorts list of log-likelihood scores ##############
##################################################################
# group bigrams by first and second word in bigram
prefix_keys = collections.defaultdict(list)
for key, l in log:
# first word
prefix_keys[key[0]].append((key[1], l))
# second word
prefix_keys[key[1]].append((key[0], l))
# sort bigrams by strongest association
for key in prefix_keys:
prefix_keys[key].sort(key = lambda x: -x[1])
# prints top 80 results
logkeys = prefix_keys['service'][:80]
from tabulate import tabulate
print(tabulate(logkeys, headers = ["Collocate", "Log-Likelihood"], floatfmt=".3f", \
numalign="left"))
with open('service_collocate_Log.csv','w') as f:
w = csv.writer(f)
w.writerows(logkeys)
# working on a regex to split text by speaker
diced = []
for words in joined_text:
    diced.append(re.split(r'(\d+(\s)\w+[A-Z](\s|.\s)\w+[A-Z]:\s)', words))
print(diced[8])
init_names = []
for words in joined_text:
init_names.append(set(re.findall('[A-Z]{3,}', words)))
print(init_names)
with open('initialNames.csv','w') as f:
w = csv.writer(f)
w.writerows(init_names)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading the File
Step2: A bit of error checking here to confirm the number of records in the file. We should have 14.
Step3: Changing the number in the code below will print a different record from the file. Please remember that in coding, numbered lists begin at 0.
Step4: Here is a bit more error checking to confirm the record titles and their urls.
Step5: And a bit more processing to make the text more readable. It's printed below.
Step6: Text Analysis Processing
Step7: Since a word of interest is guarantee, here is a list of how many times that word (and its variations) appear in each record.
Step8: Concordance
Step9: Below is what the text looks like after the initial processing, without punctuation, numbers, or stopwords.
Step10: Lemmatization
Step11: Here we are checking to make sure the lemmatizer has worked. Now the word guarantee only appears in one form.
Step12: Word Frequency
Step13: Here we will count the five most common words in each record.
Step14: These are the 10 most common word pairs in the text.
Step15: Error checking to make sure the code is processing the text properly.
Step16: More error checking.
Step17: Below is a frequency plot showing the occurence of the 25 most frequent words.
Step18: Collocations
Step19: Research (Dunning, 1993) shows that the log-likelihood ratio is among the most reliable association measures for sparse data.
Step20: Here is an example of words appearing twice. Below are both instances of the ngram 'quality'. The first instance appears before 'guarantee' and the second occurs after.
Step21: Here is a list showing only the collocates for the word guarantee. Again, watch for duplicate words below.
Step22:
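A minimal sketch of the two-pass lemmatization from Step10 (verbs first, then nouns), assuming NLTK with the WordNet data installed:
from nltk.stem import WordNetLemmatizer

wnl = WordNetLemmatizer()
words = ['guarantees', 'guaranteed', 'services']
lemmas = [wnl.lemmatize(wnl.lemmatize(w, 'v'), 'n') for w in words]
print(lemmas)  # expected: ['guarantee', 'guarantee', 'service']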
|
11,619
|
<ASSISTANT_TASK:>
Python Code:
machine = dict(
name="PM-4-130",
lfe=0.1,
poles=4,
outer_diam=0.13,
bore_diam=0.07,
inner_diam=0.015,
airgap=0.0015,
stator=dict(
num_slots=12,
rlength=1.0,
statorRotor3=dict(
slot_height=0.02,
slot_h1=0.002,
slot_h2=0.004,
slot_r1=0.0,
slot_r2=0.0,
wedge_width1=0.0,
wedge_width2=0.0,
middle_line=0,
tooth_width=0.009,
slot_top_sh=0,
slot_width=0.003)
),
magnet=dict(
magnetSector=dict(
magn_num=1,
magn_width_pct=0.6,
magn_height=0.005,
magn_shape=0.02,
bridge_height=0,
magn_type=2,
condshaft_r=0.02,
magn_ori=1,
magn_rfe=0.0,
bridge_width=0,
magn_len=1)
),
windings=dict(
num_phases=3,
num_wires=20,
coil_span=3.0,
num_layers=1)
)
simulation = dict(
speed=5000.0 / 60,
calculationMode="pm_sym_fast",
magn_temp=20.0,
wind_temp=60,
current=28.284,
period_frac=6,
angl_i_up=0.0)
decision_vars = [
{"steps": 5, "bounds": [3e-3, 8e-3],
"name": "stator.statorRotor3.slot_width",
"label": "Slot Width/m"},
{"steps": 5, "bounds": [0.72, 0.85],
"name": "magnet.magnetSector.magn_width_pct",
"label": "Rel. Magnet Width"},
{"steps": 5, "bounds": [0.024, 0.0335],
"name": "magnet.magnetSector.magn_shape",
"label": "Magnet Shape/m"}
]
objective_vars = [
{"name": "machine.torque",
"label": "Load Torque/Nm"},
{"name": "torque[0].ripple",
"label": "Cogging Torque/Nm"},
{"name": "torque[-1].ripple",
"label": "Torque Ripple/Nm"}
]
parvardef = {
"objective_vars": objective_vars,
"population_size": 20,
"decision_vars": decision_vars
}
import logging
logging.basicConfig(level=logging.INFO,
format='%(asctime)s %(message)s')
from femagtools.multiproc import Engine
engine = Engine()
import pathlib
workdir = pathlib.Path.home() / 'parvar2'
workdir.mkdir(parents=True, exist_ok=True)
import femagtools.parstudy
parvar = femagtools.parstudy.Grid(workdir)
results = parvar(parvardef, machine, simulation, engine)
import numpy as np
x = results['x']
f = results['f']
# print header
print(' '.join(['{:15}'.format(s)
for s in [d['label']
for d in parvardef['decision_vars']] +
[o['label']
for o in parvardef['objective_vars']]]))
print()
# print values in table format
for l in np.vstack((x, f)).T:
print(' '.join(['{:15.4f}'.format(x) for x in l]))
parvardef['objective_vars'][0]['sign']=-1
import femagtools.moproblem
import femagtools.moo
size = np.shape(f)[1]
prob = femagtools.moproblem.FemagMoProblem(parvardef['decision_vars'],
parvardef['objective_vars'])
pop = femagtools.moo.Population(prob, size)
signs = [o.get('sign', 1)
for o in parvardef['objective_vars']]
pop.populate(np.array(x).T, np.array(f), signs)
px = pop.get_ranked_decisions()
po = pop.get_ranked_objectives(signs)
#
fp = dict()
xp = dict()
for k in po:
#print("k {} len {}".format(k, len(pareto[k])))
fp[k] = np.array(po[k]).T
xp[k] = np.array(px[k]).T
for k in xp:
xp[k] = [xp[k][0]*1e3, xp[k][1], xp[k][2]*1e3]
np.concatenate((np.array(xp[0]), fp[0])).T
import matplotlib.pyplot as pl
import matplotlib.colors
import matplotlib.cm
import mpl_toolkits.mplot3d as mpl
cm = pl.get_cmap('jet')
cNorm = matplotlib.colors.Normalize(vmin=0, vmax=max(fp.keys()))
scalarMap = matplotlib.cm.ScalarMappable(norm=cNorm, cmap=cm)
fig = pl.figure()
ax = fig.add_subplot(111, projection='3d')
for k in fp:
ax.scatter(fp[k][0], fp[k][1], fp[k][2], color=scalarMap.to_rgba(k))
ax.plot(fp[0][0], fp[0][1], fp[0][2],
color='red', linewidth=3, label='Pareto Front')
ax.set_xlabel(parvardef['objective_vars'][0]['label'])
ax.set_ylabel(parvardef['objective_vars'][1]['label'])
ax.set_zlabel(parvardef['objective_vars'][2]['label'])
pl.legend()
pl.show()
pl.plot(fp[0][0], fp[0][2], 'o')
pl.grid()
import femagtools.opt
import femagtools.docker
workdir = pathlib.Path.home() / 'opti'
parvardef['population_size'] = 48
parvardef['decision_vars'][0]['desc'] = 'Slot width/mm'
parvardef['decision_vars'][1]['desc'] = 'rel. Magn. width'
parvardef['decision_vars'][2]['desc'] = 'Magn. Shape/mm'
parvardef['objective_vars'][0]['desc'] = 'Load Torque/Nm'
parvardef['objective_vars'][1]['desc'] = 'Cogging Torque/Nm'
parvardef['objective_vars'][2]['desc'] = 'Torque Ripple/Nm'
engine = femagtools.docker.Engine(num_threads=9, port=5555)
opt = femagtools.opt.Optimizer(workdir, magnetizingCurves=dict(), magnetMat=dict())
num_generations = 6
results = opt.optimize(num_generations, parvardef, machine, simulation, engine)
pl.plot([t for t in results['f'][0] if t>0],
[p for p in results['f'][2] if p>0], 'o')
pl.xlabel(parvardef['objective_vars'][0]['desc'])
pl.ylabel(parvardef['objective_vars'][2]['desc'])
pl.grid()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use a pm_sym_fast calculation at a rotor speed of 5000 1/min
Step2: Define the variation parameters with their ranges and number of steps
Step3: Define the objective parameters
Step4: Combine the objective and variation parameters and set the population size, which is in this case the upper limit of the bucket size (i.e. the number of parallel calculations)
Step5: Use logging to get a feedback during the calculation
Step6: Setup up an engine the drives the calculation. Here we chose a multi-core calculation
Step7: Define the working directory
Step8: Start the parameter variation with the total 125 FE calculations (timed on a Linux i7 laptop with 8 cores)
Step9: Print the results in table form
Step10: The next step will be the creation of the pareto sets. Before doing that we must define the optimization criteria: we want all values to be minimized, with the exception of the torque (a sign-flip sketch follows this list)
Step11: Now we are ready to create the pareto sets
Step12: Show the results of the pareto front
Step13: Display a 3D scatter plot with all results and the pareto front
Step14: With multi objective optimization
|
11,620
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import pandas as pd
import shutil
print(tf.__version__)
!gsutil cp gs://cloud-training-demos/taxifare/small/*.csv .
!ls -l *.csv
df_train = pd.read_csv(filepath_or_buffer = "./taxi-train.csv")
df_valid = pd.read_csv(filepath_or_buffer = "./taxi-valid.csv")
df_test = pd.read_csv(filepath_or_buffer = "./taxi-test.csv")
CSV_COLUMN_NAMES = list(df_train)
print(CSV_COLUMN_NAMES)
FEATURE_NAMES = CSV_COLUMN_NAMES[1:] # all but first column
LABEL_NAME = CSV_COLUMN_NAMES[0] # first column
feature_columns = [tf.feature_column.numeric_column(key = k) for k in FEATURE_NAMES]
feature_columns
def train_input_fn(df, batch_size = 128):
#1. Convert dataframe into correct (features,label) format for Estimator API
dataset = tf.data.Dataset.from_tensor_slices(tensors = (dict(df[FEATURE_NAMES]), df[LABEL_NAME]))
# Note:
# If we returned now, the Dataset would iterate over the data once
# in a fixed order, and only produce a single element at a time.
#2. Shuffle, repeat, and batch the examples.
dataset = dataset.shuffle(buffer_size = 1000).repeat(count = None).batch(batch_size = batch_size)
return dataset
def eval_input_fn(df, batch_size = 128):
#1. Convert dataframe into correct (features,label) format for Estimator API
dataset = tf.data.Dataset.from_tensor_slices(tensors = (dict(df[FEATURE_NAMES]), df[LABEL_NAME]))
#2.Batch the examples.
dataset = dataset.batch(batch_size = batch_size)
return dataset
def predict_input_fn(df, batch_size = 128):
#1. Convert dataframe into correct (features) format for Estimator API
dataset = tf.data.Dataset.from_tensor_slices(tensors = dict(df[FEATURE_NAMES])) # no label
#2.Batch the examples.
dataset = dataset.batch(batch_size = batch_size)
return dataset
OUTDIR = "taxi_trained"
model = tf.estimator.LinearRegressor(
feature_columns = feature_columns,
model_dir = OUTDIR,
config = tf.estimator.RunConfig(tf_random_seed = 1) # for reproducibility
)
%%time
tf.logging.set_verbosity(tf.logging.INFO) # so loss is printed during training
shutil.rmtree(path = OUTDIR, ignore_errors = True) # start fresh each time
model.train(
input_fn = lambda: train_input_fn(df = df_train),
steps = 500)
def print_rmse(model, df):
metrics = model.evaluate(input_fn = lambda: eval_input_fn(df))
print("RMSE on dataset = {}".format(metrics["average_loss"]**.5))
print_rmse(model = model, df = df_valid)
predictions = model.predict(input_fn = lambda: predict_input_fn(df = df_test[:10]))
for items in predictions:
print(items)
%%time
tf.logging.set_verbosity(tf.logging.INFO)
shutil.rmtree(path = OUTDIR, ignore_errors = True)
model = tf.estimator.DNNRegressor(
hidden_units = [10,10], # specify neural architecture
feature_columns = feature_columns,
model_dir = OUTDIR,
config = tf.estimator.RunConfig(tf_random_seed = 1)
)
model.train(
input_fn = lambda: train_input_fn(df = df_train),
steps = 500)
print_rmse(model = model, df = df_valid)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load raw data
Step2: Because the files are small we can load them into in-memory Pandas dataframes.
Step3: Create feature columns
Step4: Define input function
Step5: Choose Estimator
Step6: Train
Step7: Evaluate
Step8: RMSE of 9.43 is worse than our rules based benchmark (RMSE of $7.70). However given that we haven't done any feature engineering or hyperparameter tuning, and we're training on a small dataset using a simple linear model, we shouldn't yet expect good performance.
Step9: Further evidence of the primitiveness of our model, it predicts similar amounts for every trip!
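A sketch that makes the Step9 observation easy to eyeball, assuming the model and input functions defined above (canned tf.estimator regressors yield dicts keyed by 'predictions'):
preds = [p['predictions'][0]
         for p in model.predict(input_fn=lambda: predict_input_fn(df=df_test[:10]))]
print(preds)  # expect ten very similar fare amounts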
|
11,621
|
<ASSISTANT_TASK:>
Python Code:
class Person:
# Constructor
def __init__(self, name, age):
self.name = name
self.age = age
def __str__(self):
return 'name = {}\nage = {}'.format(self.name,self.age)
# Inherited or Sub class
class Employee(Person):
def __init__(self, name, age, employee_id):
Person.__init__(self, name, age) # Referring Base class
# Can also be done by super(Employee, self).__init__(name, age)
self.employee_id = employee_id
# Overriding implied code reusability
def __str__(self):
return Person.__str__(self) + '\nemployee id = {}'.format(self.employee_id)
s = Person('Kiran',18)
print(s)
e = Employee('Ramesh',18,48)
print(e)
class Base1:
def some_method(self):
print('Base1')
class Base2:
def some_method(self):
print('Base2')
class Derived1(Base1,Base2):
pass
class Derived2(Base2,Base1):
pass
d1 = Derived1()
d2 = Derived2()
d1.some_method()
d2.some_method()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Multiple inheritance
Step2: Note how the pass statement is used to leave the class body empty (otherwise a SyntaxError would be raised). Since Derived1 and Derived2 are empty, they inherit the methods from their base classes
Step3: Now what will be the result of invoking some_method on d1 and d2? ... Does a name clash occur? ... Let's see (the MRO sketch below settles it)
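Python's Method Resolution Order answers Step3 before we even run it — the base listed first in the class statement wins:
print(Derived1.__mro__)  # Derived1 -> Base1 -> Base2 -> object, so Base1's method runs
print(Derived2.__mro__)  # Derived2 -> Base2 -> Base1 -> object, so Base2's method runs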
|
11,622
|
<ASSISTANT_TASK:>
Python Code:
PATH = '/cellar/users/agross/TCGA_Code/Methlation/'
cd $PATH
import NotebookImport
from Setup.Imports import *
epic = pd.read_csv(PATH + 'data/EPIC_ITALY/detectionP.csv',
index_col=0)
pData = pd.read_csv(PATH + 'data/EPIC_ITALY/pData.csv',
dtype='str', index_col=0)
epic.columns = epic.columns.map(lambda s: '_'.join(s.split('_')[1:]))
epic = epic.replace(0, nan)
epic = epic.stack()
hannum = pd.read_csv(PATH + 'data/Hannum/detectionP.csv',
index_col=0)
pData = pd.read_csv(PATH + 'data/Hannum/pData.csv',
dtype='str', index_col=0)
hannum.columns = hannum.columns.map(lambda s: pData.Sample_Name[s])
hannum = hannum.replace(0, nan)
hannum = hannum.stack()
ucsd = pd.read_csv(PATH + 'data/UCSD_Methylation/detectionP.csv',
index_col=0)
p = pd.read_csv(PATH + 'data/UCSD_Methylation/pData.csv',
index_col=0)
ucsd.columns = p.Sample_Name
ucsd = ucsd.replace(0, nan)
ucsd = ucsd.stack()
detection_p = pd.concat([ucsd, hannum, epic])
detection_p = detection_p.reset_index()
detection_p.to_hdf(HDFS_DIR + 'dx_methylation.h5', 'detection_p')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Epic Data
Step2: Hannum
Step3: UCSD
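Each cohort above uses the same reshape; a toy sketch of what replace(0, nan).stack() produces:
import numpy as np
import pandas as pd

toy = pd.DataFrame([[0.01, 0.0], [0.2, 0.03]],
                   index=['cg01', 'cg02'], columns=['s1', 's2'])
long = toy.replace(0, np.nan).stack()  # zero p-values become NaN and are dropped
print(long)  # MultiIndex of (probe, sample) -> detection p-value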
|
11,623
|
<ASSISTANT_TASK:>
Python Code:
import bali
fileReader = bali.FileReader()
fileReader.taught
fileReader.transcribed
fp = bali.FileParser()
fp.taught
firstPattern = fp.taught[0]
print(firstPattern)
firstPattern.title
firstPattern.drumPattern
firstPattern.gongPattern
firstPattern.beatLength()
firstPattern.strokes
for taughtPattern in fp.taught:
if taughtPattern.beatLength() == 4:
print(taughtPattern.title, " ::: ", taughtPattern.beatLength())
print(taughtPattern.gongPattern)
print(taughtPattern.drumPattern)
print("")
for taughtPattern in fp.taught:
print(taughtPattern.getStrokeByBeat(4), taughtPattern.title)
total = 0
for pattern in fp.taught:
total += len(pattern.strokes)
print(total)
lanang = [p for p in fp.taught if 'lanang' in p.title.lower()]
lanang
from music21 import *
ld = text.LanguageDetector()
ld
ld.trigrams
english = ld.trigrams['en']
english.lut
english.lut['be']
ld.trigrams['fr'].lut['be']
ld.mostLikelyLanguage("Das geht so gut heute!")
other = [p for p in fp.taught if 'lanang' not in p.title.lower() and 'wadon' not in p.title.lower()]
other
for p in fp.taught:
print(p.drumType, p.gongPattern)
print(p.drumType, p.drumPattern)
for i in [0.25, 0.75, 1.25, 1.75, 2.25, 2.75, 3.25, 3.75]:
for tp in fp.taught:
if tp.drumType != 'wadon':
continue
print(tp.getStrokeByBeat(i), end=' ')
print()
patt = fp.taught[9]
patt
patt.strokes
beat = 0
for stroke in patt.strokes:
print(beat, stroke)
beat = beat + 0.25
5 is 5.0
patt
subdivisionSearch = (0, 2)
strokeSearch = ('o', 'l')
for patt in fp.taught:
totalOff = 0
totalAll = 0
if patt.drumType != 'wadon':
continue
for b, s in patt.iterateStrokes():
if b == 0:
continue
if ((b*4) % 4) in subdivisionSearch and s in strokeSearch:
totalOff += 1
if s in strokeSearch:
totalAll += 1
if totalAll > 0:
perc = int(100*(totalOff/totalAll))
else:
perc = 0
print(totalAll, perc, " -- ", patt.title)
for patt in fp.taught:
if patt.title == 'Pak Dewa Wadon 0a with Dag delay':
print(patt.gongPattern)
print(patt.drumPattern)
print(patt.beatLength())
subdivisionSearch = (0, 2)
strokeSearch = ('o', 'l')
for patt in fp.taught:
totalOff = 0
totalAll = 0
if patt.drumType != 'lanang':
continue
for b, s in patt.iterateStrokes():
if b == 0:
continue
previousStrokeBeat = b - 0.25
if previousStrokeBeat >= 0:
previousStroke = patt.getStrokeByBeat(previousStrokeBeat)
if previousStroke == s:
continue
nextStrokeBeat = b + 0.25
if nextStrokeBeat <= patt.beatLength():
nextStroke = patt.getStrokeByBeat(nextStrokeBeat)
if nextStroke == s:
continue
if ((b*4) % 4) in subdivisionSearch and s in strokeSearch:
totalOff += 1
if s in strokeSearch:
totalAll += 1
if totalAll > 0:
perc = int(100*(totalOff/totalAll))
else:
perc = 0
print(totalAll, perc, " -- ", patt.title)
import re
re.match(r'(Pak\s\w+)\s', 'Pak Cok Lanang 7').group(1)
for patt in fp.taught:
print(patt.teacher, '--', patt.title)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we make a FileReader
Step2: More useful Object
Step3: Now we have all the taught patterns! Yay!
Step4: How many strokes total are there in the whole taught set?
Step5: Create a list of all the taught patterns that contain "lanang"
Step6: Find percentage of strokes that are on a particular beat subdivision in patterns for a given drum
Step7: Find the same as above, but eliminate all double strokes.
|
11,624
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'niwa', 'sandbox-3', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
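Every property cell above follows the same fill-in pattern; a sketch of completing one (the value is a hypothetical placeholder, not a real model name):
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
DOC.set_value("Example-ChemModel-1")  # hypothetical illustration only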
|
11,625
|
<ASSISTANT_TASK:>
Python Code:
import json
import re
with open('../catalogs/json/ecoinvent_3.2_undefined_xlsx.json') as fp:
ei32 = json.load(fp)
def search_tags(entity, search):
    """This function searches through all the 'tags' (semantic content) of a data set
    and returns True if the search expression is found. Case insensitive."""
all_tags = '; '.join([str(x) for x in entity['tags'].values()])
return bool(re.search(search, all_tags, flags=re.IGNORECASE))
beets = [flow for flow in ei32['flows'] if search_tags(flow,'beet')]
len(beets)
[b['tags']['Name'] for b in beets]
beet_processes = [x['process'] for x in ei32['exchanges'] if x['flow'] in beets]
len(beet_processes)
beet_refs = [b['dataSetReference'] for b in beets]
beet_processes = [x['process'] for x in ei32['exchanges'] if x['flow'] in beet_refs]
len(beet_processes)
len(set(beet_processes))
[p for p in ei32['processes'] if p['dataSetReference'] == beet_processes[0]]
import pandas as pd
p_list = [p for p in ei32['processes'] if p['dataSetReference'] in set(beet_processes)]
P = pd.DataFrame([p['tags'] for p in p_list],
index=[p['dataSetReference'] for p in p_list])
P
P.to_csv('beet_processes.csv')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The object of this exercise is to find the UUIDs for processes dealing with sugar beet production.
Step2: 11 beet-related flows
Step3: That didn't work... it's because 'beets' is a list of full flow objects, while exchange entries only store references to flow objects
Step4: This can include duplicates... wrapping the result in set() forces the entries to be unique (set members); a toy sketch of this matching-and-deduplication idea follows these steps
Step5: Use pandas to print the processes as a table
Step6: p_list is a list of full process records; we want to view all the semantic content of these records (the tags)
Step7: Write this info to a CSV file in the current directory
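For reference, here is a minimal standalone sketch of the reference-matching and de-duplication idea from Steps 3 and 4, using hypothetical toy records in place of the ecoinvent file:
flows = [{'dataSetReference': 'uuid-1', 'tags': {'Name': 'sugar beet'}}]
exchanges = [{'flow': 'uuid-1', 'process': 'proc-A'},
             {'flow': 'uuid-1', 'process': 'proc-A'}]
refs = [f['dataSetReference'] for f in flows]
procs = {x['process'] for x in exchanges if x['flow'] in refs}  # set drops the duplicate
print(procs)  # {'proc-A'}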
|
11,626
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
# df_norm is assumed to hold the standardized data prepared in earlier cells
pca = PCA(n_components=2)
res = pca.fit_transform(df_norm)
res
# Singular values
pca.singular_values_.round(2)
# Eigenvalues
pca.explained_variance_.round(2)
# Eigenvalues/eigenvalues.sum()
pca.explained_variance_ratio_.round(2)
# Eigenvectors
pca.components_
plt.bar(['PC1', 'PC2'], pca.explained_variance_ratio_)
k = 1
df_reduced = np.dot(pca.components_[:k], df_norm.T)
plt.scatter(df_reduced[0], np.ones_like(df_reduced[0]))
cov = np.cov(df_norm.T)
cov
(df_norm.T.dot(df_norm))/(len(df_norm)-1)
eigenvalues, eigenvectors = np.linalg.eig(cov)
# np.linalg.eig returns eigenvectors as columns: v[:, i] pairs
# with the eigenvalue w[i].
# Transpose so that row i holds the eigenvector for eigenvalue i.
eigenvectors = eigenvectors.T
print("Eigenvalues (explained variance):\n", eigenvalues, "\n")
print("Eigenvectors (components):\n", eigenvectors)
print("Explained variance ratio:\n", eigenvalues/eigenvalues.sum())
rsort_eigenvalues_idx = eigenvalues.argsort()[::-1]
rsort_eigenvalues_idx
eigenvalues[rsort_eigenvalues_idx]/eigenvalues.sum()
rsort_eigenvectors = eigenvectors[rsort_eigenvalues_idx]
rsort_eigenvectors
k = 1
df_reduced = np.dot(rsort_eigenvectors[:k], df_norm.T)
df_reduced
plt.scatter(df_reduced[0], np.ones_like(df_reduced[0]))
U, s, V = np.linalg.svd(df_norm, full_matrices=False)
# Left singular vectors
U
# Singular values
s.round(2)
# Right singular vectors = eigenvectors
V
# Eigenvalues
n_sample = len(df_norm)
(s**2/(n_sample-1)).round(2)
# Transformed data
k = 2
U[:, :k]*s[:k]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: From scratch with eigenvalues
Step2: Compute eigenvalues & eigenvectors
Step3: Sort eigenvectors by DESC eigenvalues
Step4: PC1 is a principal component that captures 92% of the data variance (a ratio of 0.92), using a combination of a and b (-0.71⋅a - 0.71⋅b). That means a 1-D graph using just PC1 would be a good approximation of the 2-D graph, since it accounts for 92% of the variation in the data. This can be used to identify clusters of data.
Step5: From scratch using Singular values (a cross-check sketch comparing all three routes follows these steps)
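As a sanity check on the three routes above, here is a small self-contained sketch (toy data standing in for df_norm) confirming that sklearn's PCA, the eigen-decomposition of the covariance matrix, and the SVD all yield the same explained-variance ratios:
import numpy as np
from sklearn.decomposition import PCA
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2)) @ np.array([[2.0, 0.5], [0.5, 1.0]])
X = X - X.mean(axis=0)                              # center, as PCA does internally
evr_pca = PCA(n_components=2).fit(X).explained_variance_ratio_
w, _ = np.linalg.eigh(np.cov(X.T))                  # eigenvalues in ascending order
evr_eig = w[::-1] / w.sum()
s = np.linalg.svd(X, compute_uv=False)              # singular values, descending
evr_svd = s**2 / (s**2).sum()
print(np.allclose(evr_pca, evr_eig), np.allclose(evr_pca, evr_svd))  # True True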
|
11,627
|
<ASSISTANT_TASK:>
Python Code:
import wishbone
# Plotting and miscellaneous imports
import os
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
# Load sample data
scdata = wishbone.wb.SCData.from_csv(os.path.expanduser('~/.wishbone/data/sample_scseq_data.csv'),
data_type='sc-seq', normalize=True)
scdata
scdata.run_pca()
scdata
fig, ax = scdata.plot_pca_variance_explained(ylim=(0, 0.1), n_components=30)
NO_CMPNTS = 5
scdata.run_tsne(n_components=NO_CMPNTS, perplexity=30)
fig, ax = scdata.plot_tsne()
fig = plt.figure(figsize=[5, 4])
scdata.plot_tsne_by_cell_sizes(fig=fig)
fig, ax = scdata.plot_gene_expression(genes = ['CD34', 'GATA2', 'GATA1', 'MPO'])
# Run diffusion maps
scdata.run_diffusion_map()
fig, ax = scdata.plot_diffusion_components()
scdata.run_diffusion_map_correlations()
fig, ax = scdata.plot_gene_component_correlations()
scdata.data.columns = scdata.data.columns.str.upper()
scdata.run_gsea(output_stem=os.path.expanduser('~/.wishbone/gsea/mouse_marrow'))
reports = scdata.run_gsea(output_stem= os.path.expanduser('~/.wishbone/gsea/mouse_marrow'),
gmt_file=('mouse', 'gofat.bp.v1.0.gmt.txt'))
!open ~/.wishbone/gsea/
# Component 1 enrichments
reports[1]['neg']
# Component 2 enrichments
reports[2]['pos']
# Wishbone class
wb = wishbone.wb.Wishbone(scdata)
wb.run_wishbone(start_cell='W30258', components_list=[1, 2], num_waypoints=150)
wb
fig, ax = wb.plot_wishbone_on_tsne()
vals, fig, ax = wb.plot_marker_trajectory(['CD34', 'GATA1', 'GATA2', 'MPO']);
wb.plot_marker_heatmap(vals)
wb.plot_marker_heatmap(vals, trajectory_range=[0.1, 0.6])
wb.plot_derivatives(vals)
wb.plot_derivatives(vals, trajectory_range=[0.3, 0.6])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A sample RNA-seq CSV file is installed at ~/.wishbone/data/sample_scseq_data.csv. This sample data will be used to demonstrate the capabilities of the Wishbone package. It is derived from <a href="https
Step2: This will create an object of the type wishbone.wb.SCData, which is the base class for the analysis. The data can either be single cell RNA-seq or mass cytometry, as specified by setting the data_type parameter to sc-seq or masscyt respectively. The normalize parameter is used for correcting for library size among cells.
Step3: This shows that the data matrix contains 4423 cells and 2312 genes along with the different properties of the wishbone.wb.SCData class.
Step4: Each of the analysis function updates the scdata object. As shown below, the pca property of the scdata object is now changed to True compared to <a href="#pca">False</a> when the object was created
Step5: The results of PCA, i.e., the fraction of variance explained by each component, can be visualized using the function plot_pca_variance_explained. Use the ylim and n_components parameters to set the y-axis limits and visualize variance explained by n_components respectively. Typically, most of the variance is explained by the first few components (no more than 15; see the sketch after these steps for one way to make the elbow choice concrete).
Step6: From this, choose the appropriate number of components using the elbow method. While tSNE visualization is sensitive to the number of components chosen, downstream results are robust to this parameter.
Step7: perplexity is set to 30 by default. It will be reduced automatically to 15 if the number of cells is less than 100.
Step8: Gene expression can be visualized on tSNE maps using the plot_gene_expression function. The genes parameter is an string iterable of genes, which are a subset of the expression of column names. The below function plots the expression of HSC gene CD34, myeloid gene MPO and erythroid precursor genes GATA2 and GATA1.
Step9: <h4> Diffusion maps </h4>
Step10: Note that component 0 is the trivial component and does not encode any information about the data
Step11: The function plot_gene_component_correlations shows the distribution of correlations along each component.
Step12: The enrichments can be determined using the run_gsea function. This function needs the prefix for generating GSEA reports and a gmt file representing the different gene sets. The following invocation of the function shows the supported set of gmt files
Step13: Since this is mouse data, the gmt_file parameter can be set to ('mouse', 'gofat.bp.v1.0.gmt.txt')
Step14: The detailed reports can be found at ~/.wishbone/gsea/
Step15: run_gsea function also returns the top enrichment gene sets along each component. GSEA determines enrichments that are either positively or negatively correlated with the gene component correlations. In this datasets, components 1 and 2 show relevant enrichments and are used for running Wishbone. Please see Selection of diffusion components for single cell RNA-seq section of the Supplementary Methods for more details.
Step16: <h4> Saving SCData object </h4>
Step17: Wishbone objects contain the SCData object along with the identified trajectory, branch associations and waypoints
Step18: <a id="wishbone2"></a><h3> Visualizing Wishbone results </h3>
Step19: Gene expression trends along the Wishbone trajectory can be visualized using the plot_marker_trajectory function. This function also returns the smoothed trends along with the matplotlib fig, ax handler objects.
Step20: The marker trends can be visualized as heatmaps in a given trajectory range using the following functions
Step21: The change in marker trends along the trajectory or derivatives can be visualized using these functions
|
11,628
|
<ASSISTANT_TASK:>
Python Code:
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "ensembles"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
heads_proba = 0.51
coin_tosses = (np.random.rand(10000, 10) < heads_proba).astype(np.int32)
cumulative_heads_ratio = np.cumsum(coin_tosses, axis=0) / np.arange(1, 10001).reshape(-1, 1)
plt.figure(figsize=(8,3.5))
plt.plot(cumulative_heads_ratio)
plt.plot([0, 10000], [0.51, 0.51], "k--", linewidth=2, label="51%")
plt.plot([0, 10000], [0.5, 0.5], "k-", label="50%")
plt.xlabel("Number of coin tosses")
plt.ylabel("Heads ratio")
plt.legend(loc="lower right")
plt.axis([0, 10000, 0.42, 0.58])
save_fig("law_of_large_numbers_plot")
plt.show()
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=500, noise=0.30, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
log_clf = LogisticRegression(solver="liblinear", random_state=42)
rnd_clf = RandomForestClassifier(n_estimators=10, random_state=42)
svm_clf = SVC(gamma="auto", random_state=42)
voting_clf = VotingClassifier(
estimators=[('lr', log_clf), ('rf', rnd_clf), ('svc', svm_clf)],
voting='hard')
voting_clf.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
for clf in (log_clf, rnd_clf, svm_clf, voting_clf):
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
log_clf = LogisticRegression(solver="liblinear", random_state=42)
rnd_clf = RandomForestClassifier(n_estimators=10, random_state=42)
svm_clf = SVC(gamma="auto", probability=True, random_state=42)
voting_clf = VotingClassifier(
estimators=[('lr', log_clf), ('rf', rnd_clf), ('svc', svm_clf)],
voting='soft')
voting_clf.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
for clf in (log_clf, rnd_clf, svm_clf, voting_clf):
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
bag_clf = BaggingClassifier(
DecisionTreeClassifier(random_state=42), n_estimators=500,
max_samples=100, bootstrap=True, n_jobs=-1, random_state=42)
bag_clf.fit(X_train, y_train)
y_pred = bag_clf.predict(X_test)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, y_pred))
tree_clf = DecisionTreeClassifier(random_state=42)
tree_clf.fit(X_train, y_train)
y_pred_tree = tree_clf.predict(X_test)
print(accuracy_score(y_test, y_pred_tree))
from matplotlib.colors import ListedColormap
def plot_decision_boundary(clf, X, y, axes=[-1.5, 2.5, -1, 1.5], alpha=0.5, contour=True):
x1s = np.linspace(axes[0], axes[1], 100)
x2s = np.linspace(axes[2], axes[3], 100)
x1, x2 = np.meshgrid(x1s, x2s)
X_new = np.c_[x1.ravel(), x2.ravel()]
y_pred = clf.predict(X_new).reshape(x1.shape)
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x1, x2, y_pred, alpha=0.3, cmap=custom_cmap)
if contour:
custom_cmap2 = ListedColormap(['#7d7d58','#4c4c7f','#507d50'])
plt.contour(x1, x2, y_pred, cmap=custom_cmap2, alpha=0.8)
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", alpha=alpha)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", alpha=alpha)
plt.axis(axes)
plt.xlabel(r"$x_1$", fontsize=18)
plt.ylabel(r"$x_2$", fontsize=18, rotation=0)
plt.figure(figsize=(11,4))
plt.subplot(121)
plot_decision_boundary(tree_clf, X, y)
plt.title("Decision Tree", fontsize=14)
plt.subplot(122)
plot_decision_boundary(bag_clf, X, y)
plt.title("Decision Trees with Bagging", fontsize=14)
save_fig("decision_tree_without_and_with_bagging_plot")
plt.show()
bag_clf = BaggingClassifier(
DecisionTreeClassifier(splitter="random", max_leaf_nodes=16, random_state=42),
n_estimators=500, max_samples=1.0, bootstrap=True, n_jobs=-1, random_state=42)
bag_clf.fit(X_train, y_train)
y_pred = bag_clf.predict(X_test)
from sklearn.ensemble import RandomForestClassifier
rnd_clf = RandomForestClassifier(n_estimators=500, max_leaf_nodes=16, n_jobs=-1, random_state=42)
rnd_clf.fit(X_train, y_train)
y_pred_rf = rnd_clf.predict(X_test)
np.sum(y_pred == y_pred_rf) / len(y_pred) # almost identical predictions
from sklearn.datasets import load_iris
iris = load_iris()
rnd_clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=42)
rnd_clf.fit(iris["data"], iris["target"])
for name, score in zip(iris["feature_names"], rnd_clf.feature_importances_):
print(name, score)
rnd_clf.feature_importances_
plt.figure(figsize=(6, 4))
for i in range(15):
tree_clf = DecisionTreeClassifier(max_leaf_nodes=16, random_state=42 + i)
indices_with_replacement = np.random.randint(0, len(X_train), len(X_train))
tree_clf.fit(X[indices_with_replacement], y[indices_with_replacement])
plot_decision_boundary(tree_clf, X, y, axes=[-1.5, 2.5, -1, 1.5], alpha=0.02, contour=False)
plt.show()
bag_clf = BaggingClassifier(
DecisionTreeClassifier(random_state=42), n_estimators=500,
bootstrap=True, n_jobs=-1, oob_score=True, random_state=40)
bag_clf.fit(X_train, y_train)
bag_clf.oob_score_
bag_clf.oob_decision_function_
from sklearn.metrics import accuracy_score
y_pred = bag_clf.predict(X_test)
accuracy_score(y_test, y_pred)
try:
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1, as_frame=False)
mnist.target = mnist.target.astype(np.int64)
except ImportError:
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')
rnd_clf = RandomForestClassifier(n_estimators=10, random_state=42)
rnd_clf.fit(mnist["data"], mnist["target"])
def plot_digit(data):
image = data.reshape(28, 28)
plt.imshow(image, cmap = mpl.cm.hot,
interpolation="nearest")
plt.axis("off")
plot_digit(rnd_clf.feature_importances_)
cbar = plt.colorbar(ticks=[rnd_clf.feature_importances_.min(), rnd_clf.feature_importances_.max()])
cbar.ax.set_yticklabels(['Not important', 'Very important'])
save_fig("mnist_feature_importance_plot")
plt.show()
from sklearn.ensemble import AdaBoostClassifier
ada_clf = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=1), n_estimators=200,
algorithm="SAMME.R", learning_rate=0.5, random_state=42)
ada_clf.fit(X_train, y_train)
plot_decision_boundary(ada_clf, X, y)
m = len(X_train)
plt.figure(figsize=(11, 4))
for subplot, learning_rate in ((121, 1), (122, 0.5)):
sample_weights = np.ones(m)
plt.subplot(subplot)
for i in range(5):
svm_clf = SVC(kernel="rbf", C=0.05, gamma="auto", random_state=42)
svm_clf.fit(X_train, y_train, sample_weight=sample_weights)
y_pred = svm_clf.predict(X_train)
sample_weights[y_pred != y_train] *= (1 + learning_rate)
plot_decision_boundary(svm_clf, X, y, alpha=0.2)
plt.title("learning_rate = {}".format(learning_rate), fontsize=16)
if subplot == 121:
plt.text(-0.7, -0.65, "1", fontsize=14)
plt.text(-0.6, -0.10, "2", fontsize=14)
plt.text(-0.5, 0.10, "3", fontsize=14)
plt.text(-0.4, 0.55, "4", fontsize=14)
plt.text(-0.3, 0.90, "5", fontsize=14)
save_fig("boosting_plot")
plt.show()
list(m for m in dir(ada_clf) if not m.startswith("_") and m.endswith("_"))
np.random.seed(42)
X = np.random.rand(100, 1) - 0.5
y = 3*X[:, 0]**2 + 0.05 * np.random.randn(100)
from sklearn.tree import DecisionTreeRegressor
tree_reg1 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg1.fit(X, y)
y2 = y - tree_reg1.predict(X)
tree_reg2 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg2.fit(X, y2)
y3 = y2 - tree_reg2.predict(X)
tree_reg3 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg3.fit(X, y3)
X_new = np.array([[0.8]])
y_pred = sum(tree.predict(X_new) for tree in (tree_reg1, tree_reg2, tree_reg3))
y_pred
def plot_predictions(regressors, X, y, axes, label=None, style="r-", data_style="b.", data_label=None):
x1 = np.linspace(axes[0], axes[1], 500)
y_pred = sum(regressor.predict(x1.reshape(-1, 1)) for regressor in regressors)
plt.plot(X[:, 0], y, data_style, label=data_label)
plt.plot(x1, y_pred, style, linewidth=2, label=label)
if label or data_label:
plt.legend(loc="upper center", fontsize=16)
plt.axis(axes)
plt.figure(figsize=(11,11))
plt.subplot(321)
plot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h_1(x_1)$", style="g-", data_label="Training set")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.title("Residuals and tree predictions", fontsize=16)
plt.subplot(322)
plot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1)$", data_label="Training set")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.title("Ensemble predictions", fontsize=16)
plt.subplot(323)
plot_predictions([tree_reg2], X, y2, axes=[-0.5, 0.5, -0.5, 0.5], label="$h_2(x_1)$", style="g-", data_style="k+", data_label="Residuals")
plt.ylabel("$y - h_1(x_1)$", fontsize=16)
plt.subplot(324)
plot_predictions([tree_reg1, tree_reg2], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1) + h_2(x_1)$")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.subplot(325)
plot_predictions([tree_reg3], X, y3, axes=[-0.5, 0.5, -0.5, 0.5], label="$h_3(x_1)$", style="g-", data_style="k+")
plt.ylabel("$y - h_1(x_1) - h_2(x_1)$", fontsize=16)
plt.xlabel("$x_1$", fontsize=16)
plt.subplot(326)
plot_predictions([tree_reg1, tree_reg2, tree_reg3], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1) + h_2(x_1) + h_3(x_1)$")
plt.xlabel("$x_1$", fontsize=16)
plt.ylabel("$y$", fontsize=16, rotation=0)
save_fig("gradient_boosting_plot")
plt.show()
from sklearn.ensemble import GradientBoostingRegressor
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=3, learning_rate=1.0, random_state=42)
gbrt.fit(X, y)
gbrt_slow = GradientBoostingRegressor(max_depth=2, n_estimators=200, learning_rate=0.1, random_state=42)
gbrt_slow.fit(X, y)
plt.figure(figsize=(11,4))
plt.subplot(121)
plot_predictions([gbrt], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="Ensemble predictions")
plt.title("learning_rate={}, n_estimators={}".format(gbrt.learning_rate, gbrt.n_estimators), fontsize=14)
plt.subplot(122)
plot_predictions([gbrt_slow], X, y, axes=[-0.5, 0.5, -0.1, 0.8])
plt.title("learning_rate={}, n_estimators={}".format(gbrt_slow.learning_rate, gbrt_slow.n_estimators), fontsize=14)
save_fig("gbrt_learning_rate_plot")
plt.show()
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=49)
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=120, random_state=42)
gbrt.fit(X_train, y_train)
errors = [mean_squared_error(y_val, y_pred)
for y_pred in gbrt.staged_predict(X_val)]
bst_n_estimators = np.argmin(errors) + 1
gbrt_best = GradientBoostingRegressor(max_depth=2, n_estimators=bst_n_estimators, random_state=42)
gbrt_best.fit(X_train, y_train)
min_error = np.min(errors)
plt.figure(figsize=(11, 4))
plt.subplot(121)
plt.plot(errors, "b.-")
plt.plot([bst_n_estimators, bst_n_estimators], [0, min_error], "k--")
plt.plot([0, 120], [min_error, min_error], "k--")
plt.plot(bst_n_estimators, min_error, "ko")
plt.text(bst_n_estimators, min_error*1.2, "Minimum", ha="center", fontsize=14)
plt.axis([0, 120, 0, 0.01])
plt.xlabel("Number of trees")
plt.title("Validation error", fontsize=14)
plt.subplot(122)
plot_predictions([gbrt_best], X, y, axes=[-0.5, 0.5, -0.1, 0.8])
plt.title("Best model (%d trees)" % bst_n_estimators, fontsize=14)
save_fig("early_stopping_gbrt_plot")
plt.show()
gbrt = GradientBoostingRegressor(max_depth=2, warm_start=True, random_state=42)
min_val_error = float("inf")
error_going_up = 0
for n_estimators in range(1, 120):
gbrt.n_estimators = n_estimators
gbrt.fit(X_train, y_train)
y_pred = gbrt.predict(X_val)
val_error = mean_squared_error(y_val, y_pred)
if val_error < min_val_error:
min_val_error = val_error
error_going_up = 0
else:
error_going_up += 1
if error_going_up == 5:
break # early stopping
print(gbrt.n_estimators)
print("Minimum validation MSE:", min_val_error)
try:
import xgboost
except ImportError as ex:
print("Error: the xgboost library is not installed.")
xgboost = None
if xgboost is not None: # not shown in the book
xgb_reg = xgboost.XGBRegressor(random_state=42)
xgb_reg.fit(X_train, y_train)
y_pred = xgb_reg.predict(X_val)
val_error = mean_squared_error(y_val, y_pred)
print("Validation MSE:", val_error)
if xgboost is not None: # not shown in the book
xgb_reg.fit(X_train, y_train,
eval_set=[(X_val, y_val)], early_stopping_rounds=2)
y_pred = xgb_reg.predict(X_val)
val_error = mean_squared_error(y_val, y_pred)
print("Validation MSE:", val_error)
%timeit xgboost.XGBRegressor().fit(X_train, y_train) if xgboost is not None else None
%timeit GradientBoostingRegressor().fit(X_train, y_train)
from sklearn.model_selection import train_test_split
X_train_val, X_test, y_train_val, y_test = train_test_split(
mnist.data, mnist.target, test_size=10000, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
X_train_val, y_train_val, test_size=10000, random_state=42)
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier
random_forest_clf = RandomForestClassifier(n_estimators=10, random_state=42)
extra_trees_clf = ExtraTreesClassifier(n_estimators=10, random_state=42)
svm_clf = LinearSVC(random_state=42)
mlp_clf = MLPClassifier(random_state=42)
estimators = [random_forest_clf, extra_trees_clf, svm_clf, mlp_clf]
for estimator in estimators:
print("Training the", estimator)
estimator.fit(X_train, y_train)
[estimator.score(X_val, y_val) for estimator in estimators]
from sklearn.ensemble import VotingClassifier
named_estimators = [
("random_forest_clf", random_forest_clf),
("extra_trees_clf", extra_trees_clf),
("svm_clf", svm_clf),
("mlp_clf", mlp_clf),
]
voting_clf = VotingClassifier(named_estimators)
voting_clf.fit(X_train, y_train)
voting_clf.score(X_val, y_val)
[estimator.score(X_val, y_val) for estimator in voting_clf.estimators_]
voting_clf.set_params(svm_clf=None)
voting_clf.estimators
voting_clf.estimators_
del voting_clf.estimators_[2]
voting_clf.score(X_val, y_val)
voting_clf.voting = "soft"
voting_clf.score(X_val, y_val)
voting_clf.score(X_test, y_test)
[estimator.score(X_test, y_test) for estimator in voting_clf.estimators_]
X_val_predictions = np.empty((len(X_val), len(estimators)), dtype=np.float32)
for index, estimator in enumerate(estimators):
X_val_predictions[:, index] = estimator.predict(X_val)
X_val_predictions
rnd_forest_blender = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=42)
rnd_forest_blender.fit(X_val_predictions, y_val)
rnd_forest_blender.oob_score_
X_test_predictions = np.empty((len(X_test), len(estimators)), dtype=np.float32)
for index, estimator in enumerate(estimators):
X_test_predictions[:, index] = estimator.predict(X_test)
y_pred = rnd_forest_blender.predict(X_test_predictions)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Voting classifiers
Step2: Warning
Step3: Bagging ensembles
Step4: Random Forests
Step5: Out-of-Bag evaluation
Step6: Feature importance
Step7: AdaBoost
Step8: Gradient Boosting
Step9: Gradient Boosting with Early stopping
Step10: Using XGBoost
Step11: Exercise solutions
Step12: Exercise
Step13: The linear SVM is far outperformed by the other classifiers. However, let's keep it for now since it may improve the voting classifier's performance.
Step14: Let's remove the SVM to see if performance improves. It is possible to remove an estimator by setting it to None using set_params() like this
Step15: This updated the list of estimators
Step16: However, it did not update the list of trained estimators
Step17: So we can either fit the VotingClassifier again, or just remove the SVM from the list of trained estimators
Step18: Now let's evaluate the VotingClassifier again
Step19: A bit better! The SVM was hurting performance. Now let's try using a soft voting classifier. We do not actually need to retrain the classifier, we can just set voting to "soft" (a toy sketch contrasting hard and soft voting follows these steps)
Step20: That's a significant improvement, and it's much better than each of the individual classifiers.
Step21: The voting classifier reduced the error rate from about 4.0% for our best model (the MLPClassifier) to just 3.1%. That's about 22.5% fewer errors, not bad!
Step22: You could fine-tune this blender or try other types of blenders (e.g., an MLPClassifier), then select the best one using cross-validation, as always.
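To make the hard-versus-soft distinction from Steps 19 and 20 concrete, here is a toy sketch with made-up class probabilities: soft voting averages the probabilities before taking the argmax, so one confident classifier can outweigh two unsure ones.
import numpy as np
probas = np.array([[0.90, 0.10],     # classifier 1: very confident in class 0
                   [0.40, 0.60],     # classifier 2: mildly prefers class 1
                   [0.45, 0.55]])    # classifier 3: mildly prefers class 1
print(np.bincount(probas.argmax(axis=1)).argmax())  # 1 -- hard (majority) vote
print(probas.mean(axis=0).argmax())                 # 0 -- soft (averaged) vote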
|
11,629
|
<ASSISTANT_TASK:>
Python Code:
import sympy as sym
sym.init_printing()
t, l = sym.symbols('t lambda')
y = sym.Function('y')(t)
dydt = y.diff(t)
expr = sym.Eq(dydt, -l*y)
expr
sym.dsolve(expr)
import numpy as np
def euler_fw(rhs, y0, tout, params):
y0 = np.atleast_1d(np.asarray(y0, dtype=np.float64))
dydt = np.empty_like(y0)
yout = np.zeros((len(tout), len(y0)))
yout[0] = y0
t_old = tout[0]
for i, t in enumerate(tout[1:], 1):
dydt[:] = rhs(yout[i-1], t, *params)
h = t - t_old
yout[i] = yout[i-1] + dydt*h
t_old = t
return yout
def rhs(y, t, decay_constant):
return -decay_constant*y # the rate does not depend on time ("t")
tout = np.linspace(0, 2e9, 100)
y0 = 3
params = (1.78e-9,) # 1 parameter, decay constant of tritium
yout = euler_fw(rhs, y0, tout, params)
import matplotlib.pyplot as plt
%matplotlib inline
def my_plot(tout, yout, params, xlbl='time / a.u.', ylabel=None, analytic=None):
fig, axes = plt.subplots(1, 2 if analytic else 1, figsize=(14, 4))
axes = np.atleast_1d(axes)
for i in range(yout.shape[1]):
axes[0].plot(tout, yout[:, i], label='y%d' % i)
if ylabel:
axes[0].set_ylabel(ylabel)
for ax in axes:
ax.set_xlabel(xlbl)
if analytic:
axes[0].plot(tout, analytic(tout, yout, params), '--')
axes[1].plot(tout, yout[:, 0] - yout[0]*np.exp(-params[0]*(tout-tout[0])))
if ylabel:
axes[1].set_ylabel('Error in ' + ylabel)
def analytic(tout, yout, params):
return yout[0, 0]*np.exp(-params[0]*tout)
my_plot(tout, yout, params, analytic=analytic, ylabel='number density / a.u.')
from scipy.integrate import odeint
yout, info = odeint(rhs, y0, tout, params, full_output=True)
my_plot(tout, yout, params, analytic=analytic)
print("Number of function evaluations: %d" % info['nfe'][-1])
def vdp(y, t, mu):
return [
y[1],
mu*(1-y[0]**2)*y[1] - y[0]
]
tout = np.linspace(0, 200, 1024)
y_init, params = [1, 0], (17,)
y_euler = euler_fw(vdp, y_init, tout, params) # never mind the warnings emitted here...
my_plot(tout, y_euler, params)
y_odeint, info = odeint(vdp, y_init, tout, params, full_output=True)
print("Number of function evaluations: %d, number of Jacobian evaluations: %d" % (info['nfe'][-1], info['nje'][-1]))
my_plot(tout, y_odeint, params)
help(odeint) # just skip to "Dfun"
%load_ext scipy2017codegen.exercise
%exercise exercise_jac_func.py
J_func(y_init, tout[0], params[0])
y_odeint, info = odeint(vdp, y_init, tout, params, full_output=True, Dfun=J_func)
my_plot(tout, y_odeint, params)
print("Number of function evaluations: %d, number of Jacobian evaluations: %d" % (info['nfe'][-1], info['nje'][-1]))
y = y0, y1 = sym.symbols('y0 y1')
mu = sym.symbols('mu')
J = sym.Matrix(vdp(y, None, mu)).jacobian(y)
J_func = sym.lambdify((y, t, mu), J)
J
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now, pretend for a while that this function lacked an analytic solution. We could then integrate this equation numerically from an initial state for a predetermined amount of time by discretizing the time into a series of small steps.
Step2: Applying this function to our model problem
Step3: and plotting the solution & the numerical error using matplotlib
Step4: We see that 100 points gave us accuracy sufficient for plotting.
Step5: We can see that odeint was able to achieve a much higher precision using fewer function evaluations.
Step6: using "Euler forward"
Step7: That does not look like an oscillator (Euler forward has diverged to values of enormous magnitude); here the advanced treatment by the odeint solver is far superior (a sketch contrasting explicit and implicit Euler on the stiff decay problem follows these steps)
Step8: We see that LSODA has evaluated the Jacobian. But we never gave it an explicit representation of it, so how could it?
Step9: so the signature needs to be
Step10: Use either the %exercise or %load magic to get the exercise / solution respectively (i.e. delete the whole contents of the cell except for the uncommented magic command). Replace ??? with the correct expression.
Step11: So this time the integration needed to evaluate both the ODE system function and its Jacobian fewer times than when using finite difference approximations. The reason for this is that the more accurate the Jacobian is, the better the convergence of the iterative (Newton's) method solving the implicit system of equations.
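To complement the blow-up seen in Step 7, a minimal sketch contrasting explicit and implicit Euler on the stiff linear decay problem y' = -λy; backward Euler, y_{n+1} = y_n/(1 + λh), stays stable for any step size:
lam, h, n_steps = 50.0, 0.1, 20         # explicit Euler is unstable here: |1 - lam*h| = 4 > 1
y_fw, y_bw = 1.0, 1.0
for _ in range(n_steps):
    y_fw = y_fw + h*(-lam*y_fw)         # forward Euler: magnitude grows as 4**n
    y_bw = y_bw / (1.0 + lam*h)         # backward Euler: decays monotonically toward 0
print(y_fw, y_bw)                       # ~1.1e12 vs ~2.7e-16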
|
11,630
|
<ASSISTANT_TASK:>
Python Code:
# conventional way to import pandas
import pandas as pd
# read CSV file directly from a URL and save the results
data = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)
# display the first 5 rows
data.head()
# display the last 5 rows
data.tail()
# check the shape of the DataFrame (rows, columns)
data.shape
# conventional way to import seaborn
import seaborn as sns
# allow plots to appear within the notebook
%matplotlib inline
# visualize the relationship between the features and the response using scatterplots
sns.pairplot(data, x_vars=['TV','Radio','Newspaper'], y_vars='Sales', size=7, aspect=0.7, kind='reg')
# create a Python list of feature names
feature_cols = ['TV', 'Radio', 'Newspaper']
# use the list to select a subset of the original DataFrame
X = data[feature_cols]
# equivalent command to do this in one line
X = data[['TV', 'Radio', 'Newspaper']]
# print the first 5 rows
X.head()
# check the type and shape of X
print(type(X))
print(X.shape)
# select a Series from the DataFrame
y = data['Sales']
# equivalent command that works if there are no spaces in the column name
y = data.Sales
# print the first 5 values
y.head()
# check the type and shape of y
print(type(y))
print(y.shape)
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in modern scikit-learn
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# default split is 75% for training and 25% for testing
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
# import model
from sklearn.linear_model import LinearRegression
# instantiate
linreg = LinearRegression()
# fit the model to the training data (learn the coefficients)
linreg.fit(X_train, y_train)
# print the intercept and coefficients
print(linreg.intercept_)
print(linreg.coef_)
# pair the feature names with the coefficients
list(zip(feature_cols, linreg.coef_))
# make predictions on the testing set
y_pred = linreg.predict(X_test)
# define true and predicted response values
true = [100, 50, 30, 20]
pred = [90, 50, 50, 30]
# calculate MAE by hand
print((10 + 0 + 20 + 10)/4.)
# calculate MAE using scikit-learn
from sklearn import metrics
print(metrics.mean_absolute_error(true, pred))
# calculate MSE by hand
print((10**2 + 0**2 + 20**2 + 10**2)/4.)
# calculate MSE using scikit-learn
print(metrics.mean_squared_error(true, pred))
# calculate RMSE by hand
import numpy as np
print(np.sqrt((10**2 + 0**2 + 20**2 + 10**2)/4.))
# calculate RMSE using scikit-learn
print(np.sqrt(metrics.mean_squared_error(true, pred)))
print(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
# create a Python list of feature names
feature_cols = ['TV', 'Radio']
# use the list to select a subset of the original DataFrame
X = data[feature_cols]
# select a Series from the DataFrame
y = data.Sales
# split into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# fit the model to the training data (learn the coefficients)
linreg.fit(X_train, y_train)
# make predictions on the testing set
y_pred = linreg.predict(X_test)
# compute the RMSE of our predictions
print(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
from IPython.core.display import HTML
def css_styling():
styles = open("styles/custom.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Primary object types
Step2: What are the features?
Step3: Linear regression
Step4: Splitting X and y into training and testing sets
Step5: Linear regression in scikit-learn
Step6: Interpreting model coefficients
Step7: $$y = 2.88 + 0.0466 \times TV + 0.179 \times Radio + 0.00345 \times Newspaper$$
Step8: We need an evaluation metric in order to compare our predictions with the actual values!
Step9: Mean Absolute Error (MAE) is the mean of the absolute value of the errors
Step10: Mean Squared Error (MSE) is the mean of the squared errors
Step11: Root Mean Squared Error (RMSE) is the square root of the mean of the squared errors
Step12: Comparing these metrics
Step13: Feature selection
Step14: The RMSE decreased when we removed Newspaper from the model. (Error is something we want to minimize, so a lower number for RMSE is better.) Thus, it is unlikely that this feature is useful for predicting Sales, and should be removed from the model.
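A closing note on the comparison in Step 12: RMSE penalizes large errors more heavily than MAE because of the squaring. Using the same toy errors as in Steps 9 to 11:
errors = [10, 0, 20, 10]
mae = sum(abs(e) for e in errors) / len(errors)           # 10.0
rmse = (sum(e**2 for e in errors) / len(errors)) ** 0.5   # ~12.25: the single error of 20 dominates
print(mae, rmse)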
|
11,631
|
<ASSISTANT_TASK:>
Python Code:
import pints
import pints.toy as toy
import pints.plot
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats
x = np.linspace(-15, 15, 1000)
y_c = scipy.stats.t.pdf(x, 1, loc=0, scale=1)
y_t = scipy.stats.t.pdf(x, 3, loc=0, scale=1)
y_norm = scipy.stats.norm.pdf(x, 0, 3)
plt.plot(x, y_c, label ='Cauchy(0, 1)')
plt.plot(x, y_t, label ='Student-t(df=3, scale=1)')
plt.plot(x, y_norm, label ='Gaussian(0, 3)')
plt.xlabel('x')
plt.ylabel('Probability density')
plt.legend()
plt.show()
# Load a forward model
model = toy.LogisticModel()
# Create some toy data
real_parameters = [0.015, 500]
times = np.linspace(0, 1000, 100)
signal_values = model.simulate(real_parameters, times)
# Add Cauchy noise
nu = 1
sigma = 10
observed_values_t = signal_values + scipy.stats.t.rvs(df=nu, loc=0, scale=sigma, size=signal_values.shape)
observed_values_norm = signal_values + scipy.stats.norm.rvs(loc=0, scale=sigma, size=signal_values.shape)
real_parameters = np.array(real_parameters + [sigma])
# Plot
fig = plt.figure(figsize=(12, 6))
plt.subplot(121)
plt.plot(times,signal_values,label = 'signal')
plt.plot(times,observed_values_t,label = 'observed')
plt.xlabel('Time')
plt.ylabel('Values')
plt.title('Cauchy errors')
plt.legend()
plt.subplot(122)
plt.plot(times,signal_values,label = 'signal')
plt.plot(times,observed_values_norm,label = 'observed')
plt.xlabel('Time')
plt.ylabel('Values')
plt.title('Gaussian errors')
plt.legend()
plt.show()
# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, observed_values_t)
# Create a log-likelihood function (adds an extra parameter!)
log_likelihood = pints.CauchyLogLikelihood(problem)
# Create a uniform prior over both the parameters and the new noise variable
log_prior = pints.UniformLogPrior(
[0.01, 400, sigma*0.1],
[0.02, 600, sigma*100]
)
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Choose starting points for 3 mcmc chains
xs = [
real_parameters * 1.1,
real_parameters * 0.9,
real_parameters * 1.0,
]
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior, 3, xs, method=pints.HaarioBardenetACMC)
# Add stopping criterion
mcmc.set_max_iterations(2000)
# Start adapting after 1000 iterations
mcmc.set_initial_phase_iterations(250)
# Disable logging mode
mcmc.set_log_to_screen(False)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Show traces and histograms
pints.plot.trace(chains)
# Discard warm up
chains = chains[:, 1000:, :]
# Check convergence and other properties of chains
results = pints.MCMCSummary(chains=chains, time=mcmc.time(), parameter_names=['growth rate', 'capacity', 'sigma'])
print(results)
# Look at distribution in chain 0
pints.plot.pairwise(chains[0], kde=True, ref_parameters=real_parameters)
# Show graphs
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Compare a Cauchy error process with a normal error process for the logistic model.
Step2: Specify a model using a Cauchy error process and use adaptive covariance to fit it to data.
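For a quick numeric feel for the tails plotted in Step 1 (taking unit scales for both distributions), compare the probability of landing more than five scale units from the centre:
import scipy.stats
print(2 * scipy.stats.cauchy.sf(5))  # ~0.13: Cauchy tails stay heavy far from the centre
print(2 * scipy.stats.norm.sf(5))    # ~5.7e-07: Gaussian tails vanish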
|
11,632
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import holoviews as hv
%reload_ext holoviews.ipython
fractal = hv.Image(np.load('mandelbrot.npy'))
((fractal * hv.HLine(y=0)).hist() + fractal.sample(y=0))
%%opts Points [scaling_factor=50] Contours (color='w')
dots = np.linspace(-0.45, 0.45, 19)
hv.HoloMap({y: (fractal * fractal.sample([(i,y) for i in dots]).to.points(['x','y'], 'z') +
fractal.sample(y=y) +
hv.operation.threshold(fractal, level=np.percentile(fractal.sample(y=y).data, 90)) +
hv.operation.contours(fractal, levels=[np.percentile(fractal.sample(y=y).data, 60)]))
for y in np.linspace(-0.3, 0.3, 21)}, kdims=['Y']).collate().cols(2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To use HoloViews, you first wrap your data in a HoloViews component along with optional metadata describing it. It will then display itself automatically on its own or in combination with any other HoloViews component. The separate matplotlib library does the plotting, but none of the data structures depend on the plotting code, so that you can easily create, save, load, and manipulate HoloViews objects from within your own programs. HoloViews objects support arbitrary combination, selection, slicing, sorting, sampling, and animation, to allow you to focus on whatever aspect of your data you wish. Instead of writing or maintaining complex plotting code, just declare what data you want to see, and HoloViews will handle the rest.
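A minimal further illustration of this declarative idea: wrap an array once and it displays itself. This sketch assumes a recent HoloViews, where hv.extension is the activation call (the cell above uses the older %reload_ext form):
import numpy as np
import holoviews as hv
hv.extension('matplotlib')
xs = np.linspace(0, 2*np.pi, 100)
hv.Curve((xs, np.sin(xs)), 'x', 'sin(x)')  # wrap the data once; it knows how to display itself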
|
11,633
|
<ASSISTANT_TASK:>
Python Code:
# List all directories and sub-directories
!find ./Convolutional_Neural_Networks/dataset -maxdepth 5 -type d
# Importing the Keras libraries and packages
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
# Initialising the CNN
classifier = Sequential()
# Step 1 - Convolution
classifier.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))
# Step 2 - Pooling
classifier.add(MaxPooling2D(pool_size = (2, 2)))
# Adding a second convolutional layer
classifier.add(Conv2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
# Step 3 - Flattening
classifier.add(Flatten())
# Step 4 - Full connection
classifier.add(Dense(units = 128, activation = 'relu'))
classifier.add(Dense(units = 1, activation = 'sigmoid'))
# Compiling the CNN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory('Convolutional_Neural_Networks/dataset/training_set',
target_size = (64, 64),
batch_size = 32,
class_mode = 'binary')
test_set = test_datagen.flow_from_directory('Convolutional_Neural_Networks/dataset/test_set',
target_size = (64, 64),
batch_size = 32,
class_mode = 'binary')
classifier.fit_generator(training_set,
                         steps_per_epoch = 8000,  # note: this counts *batches*, so each epoch
                                                  # sees 8000*32 images; use 8000/batch size to
                                                  # pass over the 8000 training images once
                         epochs = 25,
                         validation_data = test_set,
                         validation_steps = 2000)
import numpy as np
from keras.preprocessing import image
test_image = image.load_img('Convolutional_Neural_Networks/dataset/single_prediction/cat_or_dog_1.jpg',
target_size = (64, 64))
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis = 0)
result = classifier.predict(test_image)
training_set.class_indices
if result[0][0] == 1:
prediction = 'dog'
else:
prediction = 'cat'
print (prediction)
from tensorflow.contrib.keras.api.keras.layers import Dropout
from tensorflow.contrib.keras.api.keras.models import Sequential
from tensorflow.contrib.keras.api.keras.layers import Conv2D
from tensorflow.contrib.keras.api.keras.layers import MaxPooling2D
from tensorflow.contrib.keras.api.keras.layers import Flatten
from tensorflow.contrib.keras.api.keras.layers import Dense
from tensorflow.contrib.keras.api.keras.callbacks import Callback
from tensorflow.contrib.keras.api.keras.preprocessing.image import ImageDataGenerator
from tensorflow.contrib.keras import backend
import os
class LossHistory(Callback):
def __init__(self):
super().__init__()
self.epoch_id = 0
self.losses = ''
def on_epoch_end(self, epoch, logs={}):
self.losses += "Epoch {}: accuracy -> {:.4f}, val_accuracy -> {:.4f}\n"\
.format(str(self.epoch_id), logs.get('acc'), logs.get('val_acc'))
self.epoch_id += 1
def on_train_begin(self, logs={}):
self.losses += 'Training begins...\n'
script_dir = os.path.dirname(__file__)  # assumes this file runs as a script; __file__ is undefined in a notebook
training_set_path = os.path.join(script_dir, '../dataset/training_set')
test_set_path = os.path.join(script_dir, '../dataset/test_set')
# Initialising the CNN
classifier = Sequential()
# Step 1 - Convolution
input_size = (128, 128)
classifier.add(Conv2D(32, (3, 3), input_shape=(*input_size, 3), activation='relu'))
# Step 2 - Pooling
classifier.add(MaxPooling2D(pool_size=(2, 2))) # 2x2 is optimal
# Adding a second convolutional layer
classifier.add(Conv2D(32, (3, 3), activation='relu'))
classifier.add(MaxPooling2D(pool_size=(2, 2)))
# Adding a third convolutional layer
classifier.add(Conv2D(64, (3, 3), activation='relu'))
classifier.add(MaxPooling2D(pool_size=(2, 2)))
# Step 3 - Flattening
classifier.add(Flatten())
# Step 4 - Full connection
classifier.add(Dense(units=64, activation='relu'))
classifier.add(Dropout(0.5))
classifier.add(Dense(units=1, activation='sigmoid'))
# Compiling the CNN
classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Part 2 - Fitting the CNN to the images
batch_size = 32
train_datagen = ImageDataGenerator(rescale=1. / 255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)
training_set = train_datagen.flow_from_directory(training_set_path,
target_size=input_size,
batch_size=batch_size,
class_mode='binary')
test_set = test_datagen.flow_from_directory(test_set_path,
target_size=input_size,
batch_size=batch_size,
class_mode='binary')
# Create a loss history
history = LossHistory()
classifier.fit_generator(training_set,
steps_per_epoch=8000/batch_size,
epochs=90,
validation_data=test_set,
validation_steps=2000/batch_size,
workers=12,
max_q_size=100,
callbacks=[history])
# Save model
model_backup_path = os.path.join(script_dir, '../dataset/cat_or_dogs_model.h5')
classifier.save(model_backup_path)
print("Model saved to", model_backup_path)
# Save loss history to file
loss_history_path = os.path.join(script_dir, '../loss_history.log')
myFile = open(loss_history_path, 'w+')
myFile.write(history.losses)
myFile.close()
backend.clear_session()
print("The model class indices are:", training_set.class_indices)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Building the CNN
Step2: Fitting the CNN to the images
Step3: Making new predictions
Step4: Challenge
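One way to approach the challenge: reload the saved model and classify a new image. This is a hedged sketch assuming plain Keras is available; 'some_image.jpg' is a placeholder and the model path should match wherever the script above saved it.
import numpy as np
from keras.models import load_model
from keras.preprocessing import image
model = load_model('cat_or_dogs_model.h5')                       # path to the model saved above
img = image.load_img('some_image.jpg', target_size=(128, 128))   # placeholder; match input_size
x = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)      # same 1/255 rescaling as training
print('dog' if model.predict(x)[0][0] > 0.5 else 'cat')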
|
11,634
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import hyperspy.api as hs
import pyxem as pxm
import numpy as np
rp = hs.load('./data/08/amorphousSiO2.hspy')
rp.set_signal_type('electron_diffraction')
rp = pxm.signals.ElectronDiffraction1D([[rp.data]])
calibration = 0.00167
rp.set_diffraction_calibration(calibration=calibration)
rp.plot()
rigen = pxm.generators.ReducedIntensityGenerator1D(rp)
elements = ['Si','O']
fracs = [0.333,0.667]
rigen.fit_atomic_scattering(elements,fracs,scattering_factor='lobato',plot_fit=True,iterpath='serpentine')
rigen.set_s_cutoff(s_min=1.5,s_max=4)
rigen.fit_atomic_scattering(elements,fracs,scattering_factor='lobato',plot_fit=True,iterpath='serpentine')
ri = rigen.get_reduced_intensity()
ri.plot()
ri.damp_exponential(b=0.1)
ri.plot()
ri.damp_lorch(s_max=4)
ri.plot()
ri.damp_low_q_region_erfc(offset=4)
ri.plot()
ri = rigen.get_reduced_intensity()
pdfgen = pxm.generators.PDFGenerator1D(ri)
s_min = 0.
s_max = 4.
pdf = pdfgen.get_pdf(s_min=s_min, s_max=s_max, r_max=10)
pdf.plot()
pdf.save('Demo-PDF.hspy')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id='loa'></a>
Step2: For now, the code requires navigation dimensions in the reduced intensity signal, so two size-1 dimensions are created.
Step3: Set the diffraction pattern calibration. Note that pyXem uses a calibration to $s = \frac{1}{d} = 2\frac{\sin{\theta}}{\lambda}$.
Step4: Plot the radial profile
Step5: <a id='ri'></a>
Step6: We then fit an electron scattering factor to the profile. To do this, we need to define a list of elements and their respective atomic fractions.
Step7: Then we will fit a background scattering factor. The scattering factor parametrisation used here is that specified by Lobato and Van Dyck [2]. The plot_fit parameter ensures we check the fitted profile.
Step8: That's clearly a terrible fit! This is because we're trying to fit the beam stop. To avoid this, we restrict the fit to the 'tail end' of the data by specifying a minimum and maximum scattering angle range. This is generally recommended, as electron scattering factors tend not to include inelastic scattering, which means the factors are rarely perfect fits.
Step9: That's clearly much, much better. Always inspect your fit.
Step10: If it seems like the reduced intensity is not oscillating around 0 at high s, you should try fitting with a larger s_min. Generally speaking, this solves the issue.
Step11: Additionally, it is recommended to damp the low s regime. We use an error function to do that
Step12: If the function ends up overdamped, you can simply reacquire the reduced intensity using
Step13: <a id='pdf'></a>
Step14: Specify a minimum and maximum scattering angle. The maximum must be equivalent to the Lorch function s_max if the Lorch function is used to damp; otherwise the Lorch damping can cause artifacts in the PDF (a standalone sketch of the Lorch window follows these steps).
Step15: Finally we get the PDF. r_max specifies the maximum real space distance we want to interpret.
Step16: The PDF can then be saved.
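For reference, the Lorch window applied by damp_lorch is M(s) = sin(πs/s_max)/(πs/s_max); here is a standalone sketch of the function (my own illustration, not pyXem's implementation):
import numpy as np
def lorch_window(s, s_max):
    return np.sinc(s / s_max)   # np.sinc(x) = sin(pi*x)/(pi*x)
print(lorch_window(np.array([0.0, 1.0, 2.0, 4.0]), 4.0))   # 1.0 at s=0, tapering to 0.0 at s=s_max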
|
11,635
|
<ASSISTANT_TASK:>
Python Code:
import bnn
print(bnn.available_params(bnn.NETWORK_CNVW1A1))
classifier = bnn.CnvClassifier(bnn.NETWORK_CNVW1A1, "streetview", bnn.RUNTIME_HW)
print(classifier.classes)
from PIL import Image
import numpy as np
img = Image.open('/home/xilinx/jupyter_notebooks/bnn/pictures/6.png')
img
result_class_idx = classifier.classify_image(img)
print("Inferred number: {0}".format(classifier.class_name(result_class_idx)))
sw_classifier = bnn.CnvClassifier(bnn.NETWORK_CNVW1A1, "streetview", bnn.RUNTIME_SW)
result_class_idx = sw_classifier.classify_image(img)
print("Inferred number: {0}".format(sw_classifier.class_name(result_class_idx)))
from pynq import Xlnk
xlnk = Xlnk();
xlnk.xlnk_reset()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Get classes of dataset
Step2: 3. Open image to be classified
Step3: 4. Launching BNN in hardware
Step4: 5. Launching BNN in software
Step5: 6. Reset the device
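A natural follow-up to Steps 4 and 5 is to time the two runtimes on the same image; a sketch using IPython's %timeit (actual numbers depend on the board):
%timeit classifier.classify_image(img)      # FPGA-accelerated inference
%timeit sw_classifier.classify_image(img)   # pure-software inference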
|
11,636
|
<ASSISTANT_TASK:>
Python Code:
import os
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
raw.crop(tmax=60).load_data()
raw.pick(['EEG 0{:02}'.format(n) for n in range(41, 60)])
# code lines below are commented out because the sample data doesn't have
# earlobe or mastoid channels, so this is just for demonstration purposes:
# use a single channel reference (left earlobe)
# raw.set_eeg_reference(ref_channels=['A1'])
# use average of mastoid channels as reference
# raw.set_eeg_reference(ref_channels=['M1', 'M2'])
# use a bipolar reference (contralateral)
# raw.set_bipolar_reference(anode=['F3'], cathode=['F4'])
raw.plot()
# add new reference channel (all zero)
raw_new_ref = mne.add_reference_channels(raw, ref_channels=['EEG 999'])
raw_new_ref.plot()
# set reference to `EEG 050`
raw_new_ref.set_eeg_reference(ref_channels=['EEG 050'])
raw_new_ref.plot()
# use the average of all channels as reference
raw_avg_ref = raw.copy().set_eeg_reference(ref_channels='average')
raw_avg_ref.plot()
raw.set_eeg_reference('average', projection=True)
print(raw.info['projs'])
for title, proj in zip(['Original', 'Average'], [False, True]):
with mne.viz.use_browser_backend('matplotlib'):
fig = raw.plot(proj=proj, n_channels=len(raw))
# make room for title
fig.subplots_adjust(top=0.9)
fig.suptitle('{} reference'.format(title), size='xx-large', weight='bold')
raw.del_proj() # remove our average reference projector first
sphere = mne.make_sphere_model('auto', 'auto', raw.info)
src = mne.setup_volume_source_space(sphere=sphere, exclude=30., pos=15.)
forward = mne.make_forward_solution(raw.info, trans=None, src=src, bem=sphere)
raw_rest = raw.copy().set_eeg_reference('REST', forward=forward)
for title, _raw in zip(['Original', 'REST (∞)'], [raw, raw_rest]):
with mne.viz.use_browser_backend('matplotlib'):
fig = _raw.plot(n_channels=len(raw), scalings=dict(eeg=5e-5))
# make room for title
fig.subplots_adjust(top=0.9)
fig.suptitle('{} reference'.format(title), size='xx-large', weight='bold')
raw_bip_ref = mne.set_bipolar_reference(raw, anode=['EEG 054'],
cathode=['EEG 055'])
raw_bip_ref.plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Background
Step2: If a scalp electrode was used as reference but was not saved alongside the
Step3: By default,
Step4: .. KEEP THESE BLOCKS SEPARATE SO FIGURES ARE BIG ENOUGH TO READ
Step5: Notice that the new reference (EEG 050) is now flat, while the original
Step6: Creating the average reference as a projector
Step7: Creating the average reference as a projector has a few advantages
Step8: Using an infinite reference (REST)
Step9: Using a bipolar reference
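For intuition about the average reference used in Steps 6 and 7: at each time point it simply subtracts the mean across channels. A toy check with random data:
import numpy as np
data = np.random.randn(5, 1000)                    # 5 channels x 1000 samples
rereferenced = data - data.mean(axis=0)            # subtract the per-sample mean across channels
print(np.allclose(rereferenced.mean(axis=0), 0))   # True: the channel average is the new zero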
|
11,637
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
# DO NOT MODIFY THIS CELL
def fully_connected(prev_layer, num_units):
    """Create a fully connected layer with the given layer as input and the given number of neurons.

    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param num_units: int
        The size of the layer. That is, the number of units, nodes, or neurons.
    :returns Tensor
        A new fully connected layer
    """
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
# DO NOT MODIFY THIS CELL
def conv_layer(prev_layer, layer_depth):
    """Create a convolutional layer with the given layer as input.

    :param prev_layer: Tensor
        The Tensor that acts as input into this layer
    :param layer_depth: int
        We'll set the strides and number of feature maps based on the layer's depth in the network.
        This is *not* a good way to make a CNN, but it helps us create this example with very little code.
    :returns Tensor
        A new convolutional layer
    """
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
# DO NOT MODIFY THIS CELL
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
def fully_connected(prev_layer, num_units, is_training):
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:param is_training: bool or Tensor
Indicates whether or not the network is currently training, which tells the batch normalization
layer whether or not it should update or use its population statistics.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None)
layer = tf.layers.batch_normalization(layer, training=is_training)
layer = tf.nn.relu(layer)
return layer
def conv_layer(prev_layer, layer_depth, is_training):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', use_bias=False, activation=None)
conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
#Placeholder for training Boolean
is_training = tf.placeholder(tf.bool, name="is_training")
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
# Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
def fully_connected(prev_layer, num_units):
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
def conv_layer(prev_layer, layer_depth):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
in_channels = prev_layer.get_shape().as_list()[3]
out_channels = layer_depth*4
weights = tf.Variable(
tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
bias = tf.Variable(tf.zeros(out_channels))
conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer, bias)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
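# The conv_layer above does not yet apply batch normalization (that is the remaining TODO in this
# notebook). Below is only a hedged sketch of how it might be added with the lower-level
# tf.nn.batch_normalization interface - the is_training flag and the gamma/beta/pop_mean/
# pop_variance/decay names are illustrative assumptions, not part of the original code.
def conv_layer_with_tf_nn_batch_norm(prev_layer, layer_depth, is_training, decay=0.99, epsilon=1e-3):
    strides = 2 if layer_depth % 3 == 0 else 1
    in_channels = prev_layer.get_shape().as_list()[3]
    out_channels = layer_depth * 4
    weights = tf.Variable(tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
    conv = tf.nn.conv2d(prev_layer, weights, strides=[1, strides, strides, 1], padding='SAME')
    # Learned scale/offset plus non-trainable running statistics used at inference time
    gamma = tf.Variable(tf.ones([out_channels]))
    beta = tf.Variable(tf.zeros([out_channels]))
    pop_mean = tf.Variable(tf.zeros([out_channels]), trainable=False)
    pop_variance = tf.Variable(tf.ones([out_channels]), trainable=False)
    def batch_norm_training():
        # Per-channel statistics over the batch, height and width dimensions
        batch_mean, batch_variance = tf.nn.moments(conv, [0, 1, 2])
        train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
        train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
        with tf.control_dependencies([train_mean, train_variance]):
            return tf.nn.batch_normalization(conv, batch_mean, batch_variance, beta, gamma, epsilon)
    def batch_norm_inference():
        return tf.nn.batch_normalization(conv, pop_mean, pop_variance, beta, gamma, epsilon)
    normalized = tf.cond(is_training, batch_norm_training, batch_norm_inference)
    return tf.nn.relu(normalized)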
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
Step6: We'll use the following function to create convolutional layers in our network. They are very basic
Step8: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
Step10: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Step12: TODO
Step13: TODO
Step15: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output
Step17: TODO
Step18: TODO
|
11,638
|
<ASSISTANT_TASK:>
Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
%matplotlib inline
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import math
import os
import random
import time
import zipfile
import numpy as np
from six.moves import urllib
from six.moves import xrange # pylint: disable=redefined-builtin
import tensorflow as tf
from sklearn.manifold import TSNE
url = 'http://mattmahoney.net/dc/'
def maybe_download(filename, expected_bytes):
Download a file if not present, and make sure it's the right size.
if not os.path.exists(filename):
filename, _ = urllib.request.urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified %s' % filename)
else:
print(statinfo.st_size)
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
filename = maybe_download('text8.zip', 31344016)
def read_data(filename):
Extract the first file enclosed in a zip file as a list of words
with zipfile.ZipFile(filename) as f:
data = tf.compat.as_str(f.read(f.namelist()[0])).split()
return data
words = read_data(filename)
print('Data size %d' % len(words))
vocabulary_size = 50000
def build_dataset(words):
count = [['UNK', -1]]
count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
if word in dictionary:
index = dictionary[word]
else:
index = 0 # dictionary['UNK']
unk_count = unk_count + 1
data.append(index)
count[0][1] = unk_count
reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reverse_dictionary
data, count, dictionary, reverse_dictionary = build_dataset(words)
print('Most common words (+UNK)', count[:5])
print('Sample data', data[:10], [reverse_dictionary[i] for i in data[:10]])
del words # Hint to reduce memory.
data_index = 0
def generate_batch(batch_size, num_skips, skip_window):
global data_index
assert batch_size % num_skips == 0
assert num_skips <= 2 * skip_window
batch = np.ndarray(shape=(batch_size), dtype=np.int32)
labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
span = 2 * skip_window + 1 # [ skip_window target skip_window ]
buffer = collections.deque(maxlen=span)
for _ in range(span):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
for i in range(batch_size // num_skips):
target = skip_window # target label at the center of the buffer
targets_to_avoid = [skip_window]
for j in range(num_skips):
while target in targets_to_avoid:
target = random.randint(0, span - 1)
targets_to_avoid.append(target)
batch[i * num_skips + j] = buffer[skip_window]
labels[i * num_skips + j, 0] = buffer[target]
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return batch, labels
print('data:', [reverse_dictionary[di] for di in data[:8]])
for num_skips, skip_window in [(2, 1), (4, 2)]:
data_index = 0
batch, labels = generate_batch(batch_size=8, num_skips=num_skips, skip_window=skip_window)
print('\nwith num_skips = %d and skip_window = %d:' % (num_skips, skip_window))
print(' batch:', [reverse_dictionary[bi] for bi in batch])
print(' labels:', [reverse_dictionary[li] for li in labels.reshape(8)])
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
num_skips = 2 # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. Here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.random.choice(valid_window, valid_size, replace=False)
num_sampled = 64 # Number of negative examples to sample.
graph = tf.Graph()
with graph.as_default():
# Input data.
train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# Ops and variables pinned to the CPU because of missing GPU implementation
with tf.device('/cpu:0'):
# Look up embeddings for inputs.
embeddings = tf.Variable(
tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
embed = tf.nn.embedding_lookup(embeddings, train_inputs)
# Construct the variables for the NCE loss
nce_weights = tf.Variable(
tf.truncated_normal([vocabulary_size, embedding_size],
stddev=1.0 / math.sqrt(embedding_size)))
nce_biases = tf.Variable(tf.zeros([vocabulary_size]))
# Compute the average NCE loss for the batch.
# tf.nce_loss automatically draws a new sample of the negative labels each
# time we evaluate the loss.
loss = tf.reduce_mean(
tf.nn.nce_loss(nce_weights, nce_biases, embed, train_labels,
num_sampled, vocabulary_size))
# Construct the SGD optimizer using a learning rate of 1.0.
optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)
# Compute the cosine similarity between minibatch examples and all embeddings.
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
normalized_embeddings = embeddings / norm
valid_embeddings = tf.nn.embedding_lookup(
normalized_embeddings, valid_dataset)
similarity = tf.matmul(
valid_embeddings, normalized_embeddings, transpose_b=True)
# Define info to be used by the SummaryWriter. This will let TensorBoard
# plot loss values during the training process.
loss_summary = tf.scalar_summary("loss", loss)
train_summary_op = tf.merge_summary([loss_summary])
# Add variable initializer.
init = tf.initialize_all_variables()
print("finished building graph.")
# Begin training.
num_steps = 100001
with tf.Session(graph=graph) as session:
# We must initialize all variables before we use them.
init.run()
print("Initialized")
# Directory in which to write summary information.
# You can point TensorBoard to this directory via:
# $ tensorboard --logdir=/tmp/word2vec_basic/summaries
# Tensorflow assumes this directory already exists, so we need to create it.
timestamp = str(int(time.time()))
if not os.path.exists(os.path.join("/tmp/word2vec_basic",
"summaries", timestamp)):
os.makedirs(os.path.join("/tmp/word2vec_basic", "summaries", timestamp))
# Create the SummaryWriter
train_summary_writer = tf.train.SummaryWriter(
os.path.join(
"/tmp/word2vec_basic", "summaries", timestamp), session.graph)
average_loss = 0
for step in xrange(num_steps):
batch_inputs, batch_labels = generate_batch(
batch_size, num_skips, skip_window)
feed_dict = {train_inputs: batch_inputs, train_labels: batch_labels}
# We perform one update step by evaluating the optimizer op (including it
# in the list of returned values for session.run()
# Also evaluate the training summary op.
_, loss_val, tsummary = session.run(
[optimizer, loss, train_summary_op],
feed_dict=feed_dict)
average_loss += loss_val
# Write the evaluated summary info to the SummaryWriter. This info will
# then show up in the TensorBoard events.
train_summary_writer.add_summary(tsummary, step)
if step % 2000 == 0:
if step > 0:
average_loss /= 2000
# The average loss is an estimate of the loss over the last 2000 batches.
print("Average loss at step ", step, ": ", average_loss)
average_loss = 0
# Note that this is expensive (~20% slowdown if computed every 500 steps)
if step % 10000 == 0:
sim = similarity.eval()
for i in xrange(valid_size):
valid_word = reverse_dictionary[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k + 1]
log_str = "Nearest to %s:" % valid_word
for k in xrange(top_k):
close_word = reverse_dictionary[nearest[k]]
log_str = "%s %s," % (log_str, close_word)
print(log_str)
final_embeddings = normalized_embeddings.eval()
print("finished training.")
# Visualize the embeddings.
def plot_with_labels(low_dim_embs, labels, filename='tsne.png'):
assert low_dim_embs.shape[0] >= len(labels), "More labels than embeddings"
plt.figure(figsize=(18, 18)) # in inches
for i, label in enumerate(labels):
x, y = low_dim_embs[i, :]
plt.scatter(x, y)
plt.annotate(label,
xy=(x, y),
xytext=(5, 2),
textcoords='offset points',
ha='right',
va='bottom')
plt.savefig(filename)
try:
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
plot_only = 500
low_dim_embs = tsne.fit_transform(final_embeddings[:plot_only, :])
labels = [reverse_dictionary[i] for i in xrange(plot_only)]
plot_with_labels(low_dim_embs, labels)
except ImportError:
print("Please install sklearn and matplotlib to visualize embeddings.")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Download the data from the source website if necessary.
Step4: Read the data into a string.
Step5: Build the dictionary and replace rare words with UNK token.
Step6: Function to generate a training batch for the skip-gram model.
Step7: Build and train a skip-gram model.
|
11,639
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from pymatgen.core import Composition, Element
from pymatgen.ext.matproj import MPRester
from pymatgen.io.vasp import Vasprun
from pymatgen.phasediagram.maker import PhaseDiagram, CompoundPhaseDiagram
from pymatgen.phasediagram.analyzer import PDAnalyzer
from pymatgen.phasediagram.plotter import PDPlotter
from pymatgen.entries.computed_entries import ComputedEntry
from pymatgen.entries.compatibility import MaterialsProjectCompatibility
from pymatgen.util.plotting_utils import get_publication_quality_plot
import json
import re
import palettable
import matplotlib as mpl
vasprun = Vasprun("aimd_data/vasprun.xml.relax2.gz")
# include structure so proper correction can be applied for oxides and sulfides
entry = vasprun.get_computed_entry(inc_structure=True)
rester = MPRester()
mp_entries = rester.get_entries_in_chemsys(["Li", "P", "S", "Cl"])
with open("aimd_data/lpo_entries.json") as f:
lpo_data = json.load(f)
lpo_entries = [ComputedEntry.from_dict(d) for d in lpo_data]
compatibility = MaterialsProjectCompatibility()
entry = compatibility.process_entry(entry)
entries = compatibility.process_entries([entry] + mp_entries + lpo_entries)
pd = PhaseDiagram(entries)
plotter = PDPlotter(pd)
plotter.show()
cpd = CompoundPhaseDiagram(entries,
[Composition("P2S5"), Composition("Li2S"), Composition("LiCl")])
cplotter = PDPlotter(cpd, show_unstable=True)
cplotter.show()
analyzer = PDAnalyzer(pd)
ehull = analyzer.get_e_above_hull(entry)
print("The energy above hull of Li6PS5Cl is %.3f eV/atom." % ehull)
li_entries = [e for e in entries if e.composition.reduced_formula == "Li"]
uli0 = min(li_entries, key=lambda e: e.energy_per_atom).energy_per_atom
el_profile = analyzer.get_element_profile(Element("Li"), entry.composition)
for i, d in enumerate(el_profile):
voltage = -(d["chempot"] - uli0)
print("Voltage: %s V" % voltage)
print(d["reaction"])
print("")
# Some matplotlib settings to improve the look of the plot.
mpl.rcParams['axes.linewidth']=3
mpl.rcParams['lines.markeredgewidth']=4
mpl.rcParams['lines.linewidth']=3
mpl.rcParams['lines.markersize']=15
mpl.rcParams['xtick.major.width']=3
mpl.rcParams['xtick.major.size']=8
mpl.rcParams['xtick.minor.width']=3
mpl.rcParams['xtick.minor.size']=4
mpl.rcParams['ytick.major.width']=3
mpl.rcParams['ytick.major.size']=8
mpl.rcParams['ytick.minor.width']=3
mpl.rcParams['ytick.minor.size']=4
# Plot of Li uptake per formula unit (f.u.) of Li6PS5Cl against voltage vs Li/Li+.
colors = palettable.colorbrewer.qualitative.Set1_9.mpl_colors
plt = get_publication_quality_plot(12, 8)
for i, d in enumerate(el_profile):
v = - (d["chempot"] - uli0)
if i != 0:
plt.plot([x2, x2], [y1, d["evolution"] / 4.0], 'k', linewidth=3)
x1 = v
y1 = d["evolution"] / 4.0
if i != len(el_profile) - 1:
x2 = - (el_profile[i + 1]["chempot"] - uli0)
else:
x2 = 5.0
if i in [0, 4, 5, 7]:
products = [re.sub(r"(\d+)", r"$_{\1}$", p.reduced_formula)
for p in d["reaction"].products if p.reduced_formula != "Li"]
plt.annotate(", ".join(products), xy=(v + 0.05, y1 + 0.3),
fontsize=24, color=colors[0])
plt.plot([x1, x2], [y1, y1], color=colors[0], linewidth=5)
else:
plt.plot([x1, x2], [y1, y1], 'k', linewidth=3)
plt.xlim((0, 4.0))
plt.ylim((-6, 10))
plt.xlabel("Voltage vs Li/Li$^+$ (V)")
plt.ylabel("Li uptake per f.u.")
plt.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preparation
Step2: To construct the phase diagram, we need all entries in the Li-P-S-Cl chemical space. We will use the MPRester class to obtain these entries from the Materials Project via the Materials API.
Step3: In addition to all the MP entries, here we also load the computed entries of O/S substituted Li-P-O tenary compounds.
Step4: Next, we need to combine all the entries and postprocess them using MaterialsProjectCompatibility. This postprocessing step corrects the energies to account for well-known DFT errors, e.g., in the sulfur binding energy.
Step5: Phase diagram construction
Step6: We may observe from the above phase diagram that Li6PS5Cl is not a stable phase (red nodes) in the calculated 0K phase diagram.
Step7: Calculating $E_{\rm hull}$ of Li6PS5Cl
Step8: Electrochemical Stability
Step9: The PDAnalyzer class provides a quick way to plot the phase diagram at a particular composition (e.g., Li6PS5Cl) as a function of lithium chemical potential called get_element_profile.
Step10: This element profile can be plotted as a Li evolution versus voltage using matplotlib as follows.
|
11,640
|
<ASSISTANT_TASK:>
Python Code:
# Import the IO module
import menpo.io as mio
# Import Matplotlib so we can plot subplots
import matplotlib.pyplot as plt
# Import a couple of interesting images that are landmarked!
takeo = mio.import_builtin_asset('takeo.ppm')
takeo = takeo.as_masked()
lenna = mio.import_builtin_asset('lenna.png')
lenna = lenna.as_masked()
%matplotlib inline
takeo = takeo.crop_to_landmarks()
takeo = takeo.constrain_mask_to_landmarks()
plt.subplot(121)
takeo.view_landmarks();
plt.subplot(122)
takeo.mask.view();
%matplotlib inline
lenna = lenna.crop_to_landmarks()
lenna = lenna.constrain_mask_to_landmarks()
plt.subplot(121)
lenna.view_landmarks();
plt.subplot(122)
lenna.mask.view();
from menpo.transform import ThinPlateSplines, PiecewiseAffine
tps_lenna_to_takeo = ThinPlateSplines(lenna.landmarks['LJSON'].lms, takeo.landmarks['PTS'].lms)
pwa_lenna_to_takeo = PiecewiseAffine(lenna.landmarks['LJSON'].lms, takeo.landmarks['PTS'].lms)
tps_takeo_to_lenna = ThinPlateSplines(takeo.landmarks['PTS'].lms, lenna.landmarks['LJSON'].lms)
pwa_takeo_to_lenna = PiecewiseAffine(takeo.landmarks['PTS'].lms, lenna.landmarks['LJSON'].lms)
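# Hedged aside (not part of the original walkthrough): a fitted transform maps points from its
# source shape to its target shape, so it can be applied to a PointCloud as well as to an image.
# The .apply(...) call below is the assumed menpo API for doing that.
mapped_landmarks = tps_lenna_to_takeo.apply(lenna.landmarks['LJSON'].lms)
print(mapped_landmarks.points[:3]) # first few of Lenna's landmarks expressed in Takeo's frame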
warped_takeo_to_lenna_pwa = takeo.as_unmasked(copy=False).warp_to_mask(lenna.mask, pwa_lenna_to_takeo)
warped_takeo_to_lenna_tps = takeo.as_unmasked(copy=False).warp_to_mask(lenna.mask, tps_lenna_to_takeo)
%matplotlib inline
# Takeo to Lenna with PWA
warped_takeo_to_lenna_pwa.view();
import numpy as np
np.nanmax(warped_takeo_to_lenna_pwa.pixels) + 1
warped_takeo_to_lenna_pwa.pixels[0,1,1]
%matplotlib inline
# Takeo to Lenna with TPS
warped_takeo_to_lenna_tps.view();
warped_lenna_to_takeo_pwa = lenna.as_unmasked(copy=False).warp_to_mask(takeo.mask, pwa_takeo_to_lenna)
warped_lenna_to_takeo_tps = lenna.as_unmasked(copy=False).warp_to_mask(takeo.mask, pwa_takeo_to_lenna)
%matplotlib inline
# Lenna to Takeo with PWA
warped_lenna_to_takeo_pwa.view();
%matplotlib inline
# Lenna to Takeo with TPS
warped_lenna_to_takeo_tps.view();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now, given a landmarked image, it is simple to create a reference template by constraining the image's mask to lie within the boundary of the landmarks. For example
Step2: Different landmark sets will obviously produce different shaped masks!
Step3: Commonly used parametric warps
Step4: We can then see what it would look like if we warped Takeo's face into the space of Lenna's! Notice that the output image has the same shape as the mask of Lenna. This is because Lenna is defining the reference frame. Also notice that you achieve different results depending on what Transform was used! PWA is a local discrete approximation, whilst TPS is global. Therefore, you are likely to get quite different results in extreme cases!
Step5: The parameters to the warp function are very simple
Step6: Naturally, we can also perform the warp in the opposite direction!
|
11,641
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Christopher Holdgraf <choldgraf@berkeley.edu>
#
# License: BSD (3-clause)
from scipy.ndimage import imread
import numpy as np
from matplotlib import pyplot as plt
from os import path as op
import mne
from mne.viz import ClickableImage, add_background_image # noqa
from mne.channels import generate_2d_layout # noqa
print(__doc__)
# Set parameters and paths
plt.rcParams['image.cmap'] = 'gray'
im_path = op.join(op.dirname(mne.__file__), 'data', 'image', 'mni_brain.gif')
# We've already clicked and exported
layout_path = op.join(op.dirname(mne.__file__), 'data', 'image')
layout_name = 'custom_layout.lout'
im = imread(im_path)
plt.imshow(im)
This code opens the image so you can click on it. Commented out
because we've stored the clicks as a layout file already.
# The click coordinates are stored as a list of tuples
# click = ClickableImage(im)
# click.plot_clicks()
# coords = click.coords
# Generate a layout from our clicks and normalize by the image
# lt = generate_2d_layout(np.vstack(coords), bg_image=im)
# lt.save(layout_path + layout_name)  # To save if we want
# We've already got the layout, load it
lt = mne.channels.read_layout(layout_name, path=layout_path, scale=False)
# Create some fake data
nchans = len(lt.pos)
nepochs = 50
sr = 1000
nsec = 5
events = np.arange(nepochs).reshape([-1, 1])
events = np.hstack([events, np.zeros([nepochs, 2], dtype=int)])
data = np.random.randn(nepochs, nchans, sr * nsec)
info = mne.create_info(nchans, sr, ch_types='eeg')
epochs = mne.EpochsArray(data, info, events)
evoked = epochs.average()
# Using the native plot_topo function with the image plotted in the background
f = evoked.plot_topo(layout=lt, fig_background=im)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Load data and click
|
11,642
|
<ASSISTANT_TASK:>
Python Code:
# sequence_to_sequence_implementation course assignment was used a lot to finish this hw
# A live help person strongly suggested I work through it again. --- 10000% correct. this was vital
### AKA the UDACITY seq2seq assignment, /deep-learning/seq2seq/sequence_to_sequence_implementation.ipynb
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (10, 110)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
# I couldn't remember what eos stood for (too many acronyms to remember) so I googled it
#https://www.tensorflow.org/tutorials/seq2seq
# end-of-sentence (eos)
# asked a live support about this. He / she directed me to https://github.com/nicolas-ivanov/tf_seq2seq_chatbot/issues/15
#
# Ok, setup the stuff that is known to be needed first
source_id_text = []
target_id_text = []
end_of_seq = target_vocab_to_int['<EOS>'] # had "eos" at first and it gave an error. Changing to EOS. ## Update: doesn't fix, / issue is something else.
#look at data strcuture
#print("================")
#print(source_text)
#print("================")
#source_id_text = enumerate(source_text.split('\n'))
#source_id_text = for tacos in (source_text.split('\n'))
#source_id_text = source_text.split('\n')
#print(source_id_text)
#print(np.)
print("================")
source_id_textsen = source_text.split('\n')
target_id_textsen = target_text.split('\n')
#for sentence in (source_id_textsen):
# for word in sentence.split():
# I think this is OK. default *should be spaces*
#print("test:"+word)
#source_id_text = word
#source_id_text = source_vocab_to_int[word]
# source_id_text.append([source_vocab_to_int[word]])
#print(len(source_id_text))
#for sentence in (target_id_textsen):
# for word in sentence.split():
# #pass
# #target_id_text = target_vocab_to_int[word]
# target_id_text.append(target_vocab_to_int[word])
# target_id_text.append(end_of_seq)
#### WHY AM I STILL GETTING 60 something and an error saying it should just be four values in
# source_id_text
#How did I just break this.... It jus t worked
# for sentence in (source_id_textsen):
# source_id_text = [[source_vocab_to_int[word] for word in sentence.split()]]
# for sentence in (target_id_textsen):
# target_id_text = [[target_vocab_to_int[word] for word in sentence.split()] + [end_of_seq]]
# Live help said the following is the same. Added here for future reference if a similar problem is encountered after the course.
source_id_text = [[source_vocab_to_int[word] for word in seq.split()] for seq in source_text.split('\n')]
target_id_text = [[target_vocab_to_int[word] for word in seq.split()] + [end_of_seq] for seq in target_text.split('\n')]
return source_id_text, target_id_text
# do an enummeration for
print("================")
return (source_id_text, target_id_text) #None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
#https://www.tensorflow.org/api_docs/python/tf/placeholder
#float32 issue at end of project, changing things to int32 where possible???
Input = tf.placeholder(dtype=tf.int32,shape=[None,None],name="input")
Target = tf.placeholder(dtype=tf.int32,shape=[None,None],name="target")
lr = tf.placeholder(dtype=tf.float32,name="lr")
target_seq_length = tf.placeholder(dtype=tf.int32,name="target_sequence_length")
kp = tf.placeholder(dtype=tf.float32,name="keep_prob")
#maxseq = tf.placeholder(dtype.float32,name='max_target_len')
maxseq = tf.reduce_max(target_seq_length,name='max_target_len')
sourceseqlen = tf.placeholder(dtype=tf.int32,shape=[None],name='source_sequence_length')
# done: Implement Function
return Input, Target, lr, kp, target_seq_length, maxseq, sourceseqlen
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
### From the UDACITYclass assignment:
##########################################
# Process the input we'll feed to the decoder
#def process_decoder_input(target_data, vocab_to_int, batch_size):
# '''Remove the last word id from each batch and concat the <GO> to the begining of each batch'''
# ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
# dec_input = tf.concat([tf.fill([batch_size, 1], vocab_to_int['<GO>']), ending], 1)##
#return dec_input#
###udacity/hw/deep-learning/seq2seq/sequence_to_sequence_implementation.ipynb
#####################################
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# done: Implement Function
# this is to be sliced just like one would do with numpy
# to do that, https://www.tensorflow.org/api_docs/python/tf/strided_slice is used.
# ref to verify this is the right func: https://stackoverflow.com/questions/41380126/what-does-tf-strided-slice-do
#strided_slice(
#input_,
# begin,
#end,
#strides=None,
#begin_mask=0,
#end_mask=0,
#ellipsis_mask=0,
#new_axis_mask=0,
#shrink_axis_mask=0,
#var=None,
#name=None
#
#)
#ret = tf.strided_slice(input_=target_data,begin=[0],end=[batch_size],)
# FROM UDACITY seq2seq assignment
#ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
#dec_input = tf.concat([tf.fill([batch_size, 1], vocab_to_int['<GO>']), ending], 1)
#return dec_input
ret = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
#ret =tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), target_data], 1)
ret =tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ret], 1)
return ret #None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
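# Small demo (not one of the graded cells) of what the decoder input looks like, assuming a toy
# vocabulary where '<GO>' maps to 1; the other ids are made up for the illustration.
with tf.Graph().as_default(), tf.Session() as _demo_sess:
    _demo_targets = tf.constant([[10, 11, 12, 3],
                                 [20, 21, 22, 3]]) # 3 standing in for an '<EOS>' id
    _demo_dec_input = process_decoder_input(_demo_targets, {'<GO>': 1}, 2)
    # The last id of every sequence is dropped and the '<GO>' id is prepended:
    # [[ 1 10 11 12]
    #  [ 1 20 21 22]]
    print(_demo_sess.run(_demo_dec_input))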
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
# done: Implement Function
##################
##
## This is similar to 2.1 Encoder of the UDACITY seq2seq hw
## (tutorial reference, kept commented out so it doesn't break the real implementation below)
#def encoding_layer(input_data, rnn_size, num_layers,
#                   source_sequence_length, source_vocab_size,
#                   encoding_embedding_size):
#    # Encoder embedding
#    enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size)
#    # RNN cell
#    def make_cell(rnn_size):
#        enc_cell = tf.contrib.rnn.LSTMCell(rnn_size,
#                                           initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
#        return enc_cell
#    enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
#    enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32)
#    return enc_output, enc_state
#
##
##########
# the respective documents for this cell are:
#https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence
#https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell
#https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper
#https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn
#rrnoutput=
#rrnstate=
#embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, rnn_size, encoding_embedding_size)
embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
#tf.contrib.layers.embed_sequence()
def make_cell(rnn_size):
#https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper
enc_cell = tf.contrib.rnn.LSTMCell(rnn_size,initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) # I had AN INSANE AMOUNT OF ERRORS BECAUSE I ACCIDENTALLY EDITED THIS LINE TO HAVE PROB INSTEAD OF THE DROPOUT. >.> no good error codes
enc_cell = tf.contrib.rnn.DropoutWrapper(enc_cell,output_keep_prob=keep_prob)
# Not sure which one. Probably not input. EIther output or state..
#input_keep_prob: unit Tensor or float between 0 and 1, input keep probability; if it is constant and 1, no input dropout will be added.
#output_keep_prob: unit Tensor or float between 0 and 1, output keep probability; if it is constant and 1, no output dropout will be added.
#state_keep_prob: unit Tensor or float between 0 and 1, output keep probability; if it is constant and 1, no output dropout will be added. State dropout is performed on the output states of the cell.
return enc_cell
enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, embed_input, sequence_length=source_sequence_length, dtype=tf.float32)
return enc_output, enc_state
#return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
######
# SUPEEEER TRICKY UDACITY!
# I spent half a day trying to figure out why I had cryptic errors - turns out only
# Tensorflow 1.1 can run this.
# not 1.0 . Not 1.2.
# wasting my time near the submission deadline even though my code is OK.
# Used the UDACITY sequence_to_sequence_implementation as reference for this
# did find operation (ctrl+f) fro "rainingHelper"
# Found decoding_layer(...) function which seems to address this cell's requirements
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
# done: Implement Function
#from seq 2 seq (tutorial reference, kept commented out - it used enc_state and an
#unfinished dynamic_decode call, which broke this function):
#    # Helper for the training process. Used by BasicDecoder to read inputs.
#    training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
#                                                         sequence_length=target_sequence_length,
#                                                         time_major=False)
#    # Basic decoder
#    training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
#                                                        training_helper,
#                                                        enc_state,
#                                                        output_layer)
#    # Perform dynamic decoding using the decoder
#    training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder,
# Helper for the training process. Used by BasicDecoder to read inputs.
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
sequence_length=target_sequence_length,
time_major=False)
#encoder_state ... NameError: name 'enc_state' is not defined
# Basic decoder
training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
training_helper,
encoder_state,
output_layer)
# Perform dynamic decoding using the decoder
#NameError: name 'max_target_sequence_length' is not defined ... same deal
training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True, maximum_iterations=max_summary_length)
#ValueError: too many values to unpack (expected 2)
#training_decoder_output = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True, maximum_iterations=max_summary_length)
return training_decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
#########################
#
# Searched course tutorial Seq2seq again, same function as last code cell
#
# See below (tutorial reference, kept commented out - at module level it refers to undefined
# names and ends in a bare `return`):
#with tf.variable_scope("decode", reuse=True):
#    start_tokens = tf.tile(tf.constant([target_letter_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens')
#    # Helper for the inference process.
#    inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,
#                                                                start_tokens,
#                                                                target_letter_to_int['<EOS>'])
#    # Basic decoder
#    inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
#                                                        inference_helper,
#                                                        enc_state,
#                                                        output_layer)
#    # Perform dynamic decoding using the decoder
#    inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
#                                                                    impute_finished=True,
#                                                                    maximum_iterations=max_target_sequence_length)
#
#    return training_decoder_output, inference_decoder_output
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
# done: Implement Function
#### BASED STRONGLY ON CLASS COURSEWORK, THE SEQ2SEQ material
#https://www.tensorflow.org/api_docs/python/tf/tile
#NameError: name 'target_letter_to_int' is not defined
#start_tokens = tf.tile(tf.constant([target_letter_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens')
#start_tokens = tf.tile(tf.constant(['<GO>'], dtype=tf.int32), [batch_size], name='start_tokens')
start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens')
# Helper for the inference process.
#https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper
#inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,start_tokens,target_letter_to_int['<EOS>'])
#NameError: name 'target_letter_to_int' is not defined
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,start_tokens,end_of_sequence_id)
# Basic decoder
#enc_state # encoder_state changed names
#https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder
inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,inference_helper,encoder_state,output_layer)
# Perform dynamic decoding using the decoder
#https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode
inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,impute_finished=True,maximum_iterations=max_target_sequence_length)
return inference_decoder_output#None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
#
##
# Again, as suggested by a Udacity TA (live support), SEQ 2 SEQ
# Largely based on the decoding_layer in the Udacity seq2seq tutorial/example material.
# See here:
def decoding_layer(target_letter_to_int, decoding_embedding_size, num_layers, rnn_size,
target_sequence_length, max_target_sequence_length, enc_state, dec_input):
# 1. Decoder Embedding
target_vocab_size = len(target_letter_to_int)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
# 2. Construct the decoder cell
def make_cell(rnn_size):
dec_cell = tf.contrib.rnn.LSTMCell(rnn_size,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
return dec_cell
dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
# 3. Dense layer to translate the decoder's output at each time
# step into a choice from the target vocabulary
output_layer = Dense(target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
# 4. Set up a training decoder and an inference decoder
# Training Decoder
with tf.variable_scope("decode"):
# Helper for the training process. Used by BasicDecoder to read inputs.
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
sequence_length=target_sequence_length,
time_major=False)
# Basic decoder
training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
training_helper,
enc_state,
output_layer)
# Perform dynamic decoding using the decoder
training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder,
impute_finished=True,
maximum_iterations=max_target_sequence_length)
# 5. Inference Decoder
# Reuses the same parameters trained by the training process
with tf.variable_scope("decode", reuse=True):
start_tokens = tf.tile(tf.constant([target_letter_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens')
# Helper for the inference process.
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,
start_tokens,
target_letter_to_int['<EOS>'])
# Basic decoder
inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
inference_helper,
enc_state,
output_layer)
# Perform dynamic decoding using the decoder
inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
impute_finished=True,
maximum_iterations=max_target_sequence_length)
return training_decoder_output, inference_decoder_output
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
# 1. Decoder Embedding
#NameError: name 'target_letter_to_int' is not defined
#target_vocab_size = len(target_letter_to_int) # already param
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
# 2. Construct the decoder cell
def make_cell(rnn_size):
dec_cell = tf.contrib.rnn.LSTMCell(rnn_size,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
return dec_cell
dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
# 3. Dense layer to translate the decoder's output at each time
# step into a choice from the target vocabulary
output_layer = Dense(target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
# 4. Set up a training decoder and an inference decoder
# Training Decoder
with tf.variable_scope("decode"):
# Helper for the training process. Used by BasicDecoder to read inputs.
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
sequence_length=target_sequence_length,
time_major=False)
# Basic decoder
#NameError: name 'enc_state' is not defined
training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
training_helper,
encoder_state,
output_layer)
# Perform dynamic decoding using the decoder
training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder,
impute_finished=True,
maximum_iterations=max_target_sequence_length)
# 5. Inference Decoder
# Reuses the same parameters trained by the training process
with tf.variable_scope("decode", reuse=True):
#NameError: name 'target_letter_to_int' is not defined
#target_vocab_to_int is the closest equivalent
start_tokens = tf.tile(tf.constant([target_vocab_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens')
# Helper for the inference process.
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,
start_tokens,
target_vocab_to_int['<EOS>'])
# Basic decoder
#NameError: name 'enc_state' is not defined
inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
inference_helper,
encoder_state,
output_layer)
# Perform dynamic decoding using the decoder
inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
impute_finished=True,
maximum_iterations=max_target_sequence_length)
return training_decoder_output, inference_decoder_output
#return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# done: Implement Function
#ENcode
RNN_output, RNN_state= encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size)
#process
Preprocessedtargetdata=process_decoder_input(target_data, target_vocab_to_int, batch_size)
#decode
reta,retb= decoding_layer(Preprocessedtargetdata, RNN_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)
return reta,retb#None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Hyperparameters are expected to be in a similar range to those of the seq2seq lesson
# Number of Epochs
epochs = 16 #60 #None
# Batch Size
batch_size = 256 #None
# RNN Size
rnn_size = 50#None
# Number of Layers
num_layers = 2#None
# Embedding Size
encoding_embedding_size = 256 #15None
decoding_embedding_size = 256 #None
# Learning Rate
learning_rate = 0.01# None
# Dropout Keep Probability
keep_probability = 0.75 # reasoning: should be more than 50/50.. but it should still be able to drop values so it can search #None
display_step = 32#None
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
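# Quick illustration of the padding behaviour (toy arrays, not project data):
# get_accuracy(np.array([[1, 2, 3]]), np.array([[1, 2]])) pads the logits with a zero
# column, so the comparison is [[1, 2, 3]] vs [[1, 2, 0]] -> accuracy 2/3.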
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
    # Lowercase to match preprocessing, then map each word to its id,
    # falling back to <UNK> for words not seen during training
    return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
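# A minimal sanity check (the actual word ids differ per preprocessing run):
# sentence_to_seq('he saw a truck', source_vocab_to_int) -> e.g. [105, 31, 7, 412]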
#translate_sentence = 'he saw a old yellow truck .'
# Note: the stock example sentence above has a grammar slip ("a old" should be "an old").
# Trying something off-vocabulary instead -- most of these words should map to <UNK>.
translate_sentence = "There once was a man from Nantucket."
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoder Input
Step22: Encoding
Step26: Decoding - Training
Step30: Decoding - Inference
Step34: Build the Decoding Layer
Step37: Build the Neural Network
Step38: Neural Network Training
Step40: Build the Graph
Step44: Batch and pad the source and target sequences
Step47: Train
Step49: Save Parameters
Step51: Checkpoint
Step54: Sentence to Sequence
Step56: Translate
|
11,643
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import graphviz
import lingam
from lingam.utils import print_causal_directions, print_dagc, make_dot
print([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__])
np.set_printoptions(precision=3, suppress=True)
np.random.seed(0)
x3 = np.random.uniform(size=1000)
x0 = 3.0*x3 + np.random.uniform(size=1000)
x2 = 6.0*x3 + np.random.uniform(size=1000)
x1 = 3.0*x0 + 2.0*x2 + np.random.uniform(size=1000)
x5 = 4.0*x0 + np.random.uniform(size=1000)
x4 = 8.0*x0 - 1.0*x2 + np.random.uniform(size=1000)
X = pd.DataFrame(np.array([x0, x1, x2, x3, x4, x5]).T ,columns=['x0', 'x1', 'x2', 'x3', 'x4', 'x5'])
X.head()
m = np.array([[0.0, 0.0, 0.0, 3.0, 0.0, 0.0],
[3.0, 0.0, 2.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 6.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[8.0, 0.0,-1.0, 0.0, 0.0, 0.0],
[4.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
make_dot(m)
model = lingam.DirectLiNGAM()
result = model.bootstrap(X, n_sampling=100)
cdc = result.get_causal_direction_counts(n_directions=8, min_causal_effect=0.01, split_by_causal_effect_sign=True)
print_causal_directions(cdc, 100)
dagc = result.get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.01, split_by_causal_effect_sign=True)
print_dagc(dagc, 100)
prob = result.get_probabilities(min_causal_effect=0.01)
print(prob)
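# A minimal sketch (assumption: prob[i, j] holds the bootstrap probability of an
# edge from x_j to x_i); keep only edges seen in at least 70% of the resamples.
for to_idx, from_idx in np.argwhere(prob >= 0.7):
    print(f'x{from_idx} -> x{to_idx}: {prob[to_idx, from_idx]:.2f}')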
causal_effects = result.get_total_causal_effects(min_causal_effect=0.01)
# Assign to pandas.DataFrame for pretty display
df = pd.DataFrame(causal_effects)
labels = [f'x{i}' for i in range(X.shape[1])]
df['from'] = df['from'].apply(lambda x : labels[x])
df['to'] = df['to'].apply(lambda x : labels[x])
df
df.sort_values('effect', ascending=False).head()
df.sort_values('probability', ascending=True).head()
df[df['to']=='x1'].head()
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
%matplotlib inline
from_index = 3 # index of x3
to_index = 0 # index of x0
plt.hist(result.total_effects_[:, to_index, from_index])
from_index = 3 # index of x3
to_index = 1 # index of x0
pd.DataFrame(result.get_paths(from_index, to_index))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Test data
Step2: Bootstrapping
Step3: Causal Directions
Step4: We can check the result with the utility function.
Step5: Directed Acyclic Graphs
Step6: We can check the result with the utility function.
Step7: Probability
Step8: Total Causal Effects
Step9: We can easily perform sorting operations with pandas.DataFrame.
Step10: And with pandas.DataFrame, we can easily filter by keywords. The following code extracts the causal direction towards x1.
Step11: Because it holds the raw data of the total causal effect (the original data for calculating the median), it is possible to draw a histogram of the values of the causal effect, as shown below.
Step12: Bootstrap Probability of Path
|
11,644
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
h = np.array([[-1,0,1],
[-2,0,2],
[-1,0,1]])
r,c = np.nonzero(h)
print(r,c)
xx = np.transpose(np.nonzero(h))
print(xx)
import numpy as np
def ptrans(f,t):
H,W = f.shape
rr,cc = t
row,col = np.indices(f.shape)
g = f[(row-rr)%H, (col-cc)%W]
return g
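# Quick check on a toy array (a translation by (1, 2) wraps around periodically):
f_toy = np.arange(12).reshape(3, 4)
print(ptrans(f_toy, (1, 2)))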
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import matplotlib.colors as mpc
f = mpimg.imread('../data/boat.tif')
f_hsv = mpc.rgb_to_hsv(f)
import sys,os
homepath = os.path.abspath('/home/lotufo/Aula4_entregue/')
if homepath not in sys.path:
sys.path.append(homepath)
! jupyter nbconvert --to 'python' /home/lotufo/Aula4_entregue/a207744_hsv_to_rgb.ipynb
import a207744_hsv_to_rgb
g = a207744_hsv_to_rgb.hsv_to_rgb(f_hsv[:,:,0],f_hsv[:,:,1],f_hsv[:,:,2])
import numpy as np
print((np.abs(g-f)).max())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercícios para a próxima aula, dia 27 de abril
Step2: Converter para ipynb e melhorar (com bons exemplos e equações) as demonstrações feitas no adessowiki
|
11,645
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%reload_ext autoreload
%autoreload 2
from fastai.structured import *
from fastai.column_data import *
np.set_printoptions(threshold=50, edgeitems=20)
PATH='data/rossmann/'
def concat_csvs(dirname):
path = f'{PATH}{dirname}'
filenames=glob.glob(f"{path}/*.csv")
wrote_header = False
with open(f"{path}.csv","w") as outputfile:
for filename in filenames:
name = filename.split(".")[0]
with open(filename) as f:
line = f.readline()
if not wrote_header:
wrote_header = True
outputfile.write("file,"+line)
for line in f:
outputfile.write(name + "," + line)
outputfile.write("\n")
# concat_csvs('googletrend')
# concat_csvs('weather')
table_names = ['train', 'store', 'store_states', 'state_names',
'googletrend', 'weather', 'test']
tables = [pd.read_csv(f'{PATH}{fname}.csv', low_memory=False) for fname in table_names]
from IPython.display import HTML
for t in tables: display(t.head())
for t in tables: display(DataFrameSummary(t).summary())
train, store, store_states, state_names, googletrend, weather, test = tables
len(train),len(test)
train.StateHoliday = train.StateHoliday!='0'
test.StateHoliday = test.StateHoliday!='0'
def join_df(left, right, left_on, right_on=None, suffix='_y'):
if right_on is None: right_on = left_on
return left.merge(right, how='left', left_on=left_on, right_on=right_on,
suffixes=("", suffix))
weather = join_df(weather, state_names, "file", "StateName")
googletrend['Date'] = googletrend.week.str.split(' - ', expand=True)[0]
googletrend['State'] = googletrend.file.str.split('_', expand=True)[2]
googletrend.loc[googletrend.State=='NI', "State"] = 'HB,NI'
add_datepart(weather, "Date", drop=False)
add_datepart(googletrend, "Date", drop=False)
add_datepart(train, "Date", drop=False)
add_datepart(test, "Date", drop=False)
trend_de = googletrend[googletrend.file == 'Rossmann_DE']
store = join_df(store, store_states, "Store")
len(store[store.State.isnull()])
joined = join_df(train, store, "Store")
joined_test = join_df(test, store, "Store")
len(joined[joined.StoreType.isnull()]),len(joined_test[joined_test.StoreType.isnull()])
joined = join_df(joined, googletrend, ["State","Year", "Week"])
joined_test = join_df(joined_test, googletrend, ["State","Year", "Week"])
len(joined[joined.trend.isnull()]),len(joined_test[joined_test.trend.isnull()])
joined = joined.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
joined_test = joined_test.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
len(joined[joined.trend_DE.isnull()]),len(joined_test[joined_test.trend_DE.isnull()])
joined = join_df(joined, weather, ["State","Date"])
joined_test = join_df(joined_test, weather, ["State","Date"])
len(joined[joined.Mean_TemperatureC.isnull()]),len(joined_test[joined_test.Mean_TemperatureC.isnull()])
for df in (joined, joined_test):
for c in df.columns:
if c.endswith('_y'):
if c in df.columns: df.drop(c, inplace=True, axis=1)
for df in (joined,joined_test):
df['CompetitionOpenSinceYear'] = df.CompetitionOpenSinceYear.fillna(1900).astype(np.int32)
df['CompetitionOpenSinceMonth'] = df.CompetitionOpenSinceMonth.fillna(1).astype(np.int32)
df['Promo2SinceYear'] = df.Promo2SinceYear.fillna(1900).astype(np.int32)
df['Promo2SinceWeek'] = df.Promo2SinceWeek.fillna(1).astype(np.int32)
for df in (joined,joined_test):
df["CompetitionOpenSince"] = pd.to_datetime(dict(year=df.CompetitionOpenSinceYear,
month=df.CompetitionOpenSinceMonth, day=15))
df["CompetitionDaysOpen"] = df.Date.subtract(df.CompetitionOpenSince).dt.days
for df in (joined,joined_test):
df.loc[df.CompetitionDaysOpen<0, "CompetitionDaysOpen"] = 0
df.loc[df.CompetitionOpenSinceYear<1990, "CompetitionDaysOpen"] = 0
for df in (joined,joined_test):
df["CompetitionMonthsOpen"] = df["CompetitionDaysOpen"]//30
df.loc[df.CompetitionMonthsOpen>24, "CompetitionMonthsOpen"] = 24
joined.CompetitionMonthsOpen.unique()
for df in (joined,joined_test):
df["Promo2Since"] = pd.to_datetime(df.apply(lambda x: Week(
x.Promo2SinceYear, x.Promo2SinceWeek).monday(), axis=1).astype(pd.datetime))
df["Promo2Days"] = df.Date.subtract(df["Promo2Since"]).dt.days
for df in (joined,joined_test):
df.loc[df.Promo2Days<0, "Promo2Days"] = 0
df.loc[df.Promo2SinceYear<1990, "Promo2Days"] = 0
df["Promo2Weeks"] = df["Promo2Days"]//7
df.loc[df.Promo2Weeks<0, "Promo2Weeks"] = 0
df.loc[df.Promo2Weeks>25, "Promo2Weeks"] = 25
df.Promo2Weeks.unique()
joined.to_feather(f'{PATH}joined')
joined_test.to_feather(f'{PATH}joined_test')
def get_elapsed(fld, pre):
day1 = np.timedelta64(1, 'D')
last_date = np.datetime64()
last_store = 0
res = []
for s,v,d in zip(df.Store.values,df[fld].values, df.Date.values):
if s != last_store:
last_date = np.datetime64()
last_store = s
if v: last_date = d
res.append(((d-last_date).astype('timedelta64[D]') / day1).astype(int))
df[pre+fld] = res
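# Walk-through: for each store (rows sorted by Date), `last_date` remembers the most
# recent row where the flag was set, so `res` holds days elapsed since that event;
# rows before a store's first event subtract from NaT and come out as an extreme
# int sentinel rather than a real duration.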
columns = ["Date", "Store", "Promo", "StateHoliday", "SchoolHoliday"]
df = train[columns]
df = test[columns]
fld = 'SchoolHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
fld = 'StateHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
fld = 'Promo'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
df = df.set_index("Date")
columns = ['SchoolHoliday', 'StateHoliday', 'Promo']
for o in ['Before', 'After']:
for p in columns:
a = o+p
df[a] = df[a].fillna(0)
bwd = df[['Store']+columns].sort_index().groupby("Store").rolling(7, min_periods=1).sum()
fwd = df[['Store']+columns].sort_index(ascending=False
).groupby("Store").rolling(7, min_periods=1).sum()
bwd.drop('Store',1,inplace=True)
bwd.reset_index(inplace=True)
fwd.drop('Store',1,inplace=True)
fwd.reset_index(inplace=True)
df.reset_index(inplace=True)
df = df.merge(bwd, 'left', ['Date', 'Store'], suffixes=['', '_bw'])
df = df.merge(fwd, 'left', ['Date', 'Store'], suffixes=['', '_fw'])
df.drop(columns,1,inplace=True)
df.head()
df.to_feather(f'{PATH}df')
df = pd.read_feather(f'{PATH}df')  # pd.read_feather takes no index_col; Date was reset to a column before saving
df["Date"] = pd.to_datetime(df.Date)
df.columns
joined = join_df(joined, df, ['Store', 'Date'])
joined_test = join_df(joined_test, df, ['Store', 'Date'])
joined = joined[joined.Sales!=0]
joined.reset_index(inplace=True)
joined_test.reset_index(inplace=True)
joined.to_feather(f'{PATH}joined')
joined_test.to_feather(f'{PATH}joined_test')
joined = pd.read_feather(f'{PATH}joined')
joined_test = pd.read_feather(f'{PATH}joined_test')
joined.head().T.head(40)
cat_vars = ['Store', 'DayOfWeek', 'Year', 'Month', 'Day', 'StateHoliday', 'CompetitionMonthsOpen',
'Promo2Weeks', 'StoreType', 'Assortment', 'PromoInterval', 'CompetitionOpenSinceYear', 'Promo2SinceYear',
'State', 'Week', 'Events', 'Promo_fw', 'Promo_bw', 'StateHoliday_fw', 'StateHoliday_bw',
'SchoolHoliday_fw', 'SchoolHoliday_bw']
contin_vars = ['CompetitionDistance', 'Max_TemperatureC', 'Mean_TemperatureC', 'Min_TemperatureC',
'Max_Humidity', 'Mean_Humidity', 'Min_Humidity', 'Max_Wind_SpeedKm_h',
'Mean_Wind_SpeedKm_h', 'CloudCover', 'trend', 'trend_DE',
'AfterStateHoliday', 'BeforeStateHoliday', 'Promo', 'SchoolHoliday']
n = len(joined); n
dep = 'Sales'
joined_test[dep] = 0
joined = joined[cat_vars+contin_vars+[dep, 'Date']].copy()
joined_test = joined_test[cat_vars+contin_vars+[dep, 'Date', 'Id']].copy()
for v in cat_vars: joined[v] = joined[v].astype('category').cat.as_ordered()
apply_cats(joined_test, joined)
for v in contin_vars:
joined[v] = joined[v].astype('float32')
joined_test[v] = joined_test[v].astype('float32')
idxs = get_cv_idxs(n, val_pct=150000/n)
joined_samp = joined.iloc[idxs].set_index("Date")
samp_size = len(joined_samp); samp_size
samp_size = n
joined_samp = joined.set_index("Date")
joined_samp.head(2)
df, y, nas, mapper = proc_df(joined_samp, 'Sales', do_scale=True)
yl = np.log(y)
joined_test = joined_test.set_index("Date")
df_test, _, nas, mapper = proc_df(joined_test, 'Sales', do_scale=True, skip_flds=['Id'],
mapper=mapper, na_dict=nas)
df.head(2)
train_ratio = 0.75
# train_ratio = 0.9
train_size = int(samp_size * train_ratio); train_size
val_idx = list(range(train_size, len(df)))
val_idx = np.flatnonzero(
(df.index<=datetime.datetime(2014,9,17)) & (df.index>=datetime.datetime(2014,8,1)))
val_idx=[0]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create datasets
Step2: Feature Space
Step3: We'll be using the popular data manipulation framework pandas. Among other things, pandas allows you to manipulate tables/data frames in python as one would in a database.
Step4: We can use head() to get a quick look at the contents of each table
Step5: This is very representative of a typical industry dataset.
Step6: Data Cleaning / Feature Engineering
Step7: We turn state Holidays to booleans, to make them more convenient for modeling. We can do calculations on pandas fields using notation very similar (often identical) to numpy.
Step8: join_df is a function for joining tables on specific fields. By default, we'll be doing a left outer join of right on the left argument using the given fields for each table.
Step9: Join weather/state names.
Step10: In pandas you can add new columns to a dataframe by simply defining it. We'll do this for googletrends by extracting dates and state names from the given data and adding those columns.
Step11: The following extracts particular date fields from a complete datetime for the purpose of constructing categoricals.
Step12: The Google trends data has a special category for the whole of the US - we'll pull that out so we can use it explicitly.
Step13: Now we can outer join all of our data into a single dataframe. Recall that in outer joins, every time a value in the joining field on the left table does not have a corresponding value on the right table, the corresponding row in the new table has Null values for all right table fields. One way to check that all records are consistent and complete is to check for Null values post-join, as we do here.
Step14: Next we'll fill in missing values to avoid complications with NA's. NA (not available) is how Pandas indicates missing values; many models have problems when missing values are present, so it's always important to think about how to deal with them. In these cases, we are picking an arbitrary signal value that doesn't otherwise appear in the data.
Step15: Next we'll extract features "CompetitionOpenSince" and "CompetitionDaysOpen". Note the use of apply() in mapping a function across dataframe values.
Step16: We'll replace some erroneous / outlying data.
Step17: We add "CompetitionMonthsOpen" field, limiting the maximum to 2 years to limit number of unique categories.
Step18: Same process for Promo dates.
Step19: Durations
Step20: We'll be applying this to a subset of columns
Step21: Let's walk through an example.
Step22: We'll do this for two more fields.
Step23: We're going to set the active index to Date.
Step24: Then set null values from elapsed field calculations to 0.
Step25: Next we'll demonstrate window functions in pandas to calculate rolling quantities.
Step26: Next we want to drop the Store indices grouped together in the window function.
Step27: Now we'll merge these values onto the df.
Step28: It's usually a good idea to back up large tables of extracted / wrangled features before you join them onto another one, that way you can go back to it easily if you need to make changes to it.
Step29: The authors also removed all instances where the store had zero sales / was closed. We speculate that this may have cost them a higher standing in the competition. One reason this may be the case is that a little exploratory data analysis reveals that there are often periods when stores are closed, typically for refurbishment. Before and after these periods there are naturally spikes in sales that one might expect. By omitting this data from their training, the authors gave up the ability to leverage information about these periods to predict this otherwise volatile behavior.
Step30: We'll back this up as well.
Step31: We now have our final set of engineered features.
Step32: Now that we've engineered all our features, we need to convert to input compatible with a neural network.
Step33: We're going to run on a sample.
Step34: To run on the full dataset, use this instead
Step35: We can now process our data...
Step36: In time series data, cross-validation is not random. Instead, our holdout data is generally the most recent data, as it would be in real application. This issue is discussed in detail in this post on our web site.
Step37: An even better option for picking a validation set is using the exact same length of time period as the test set uses - this is implemented here
|
11,646
|
<ASSISTANT_TASK:>
Python Code:
header1 = r\documentclass[a4paper,11pt]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[croatian]{babel}
\usepackage{minted}
\usepackage{amsmath,amsfonts}
\usepackage{graphicx}
\usepackage{booktabs}
\usepackage[hmargin=1.5cm,vmargin=1cm]{geometry}
\pagestyle{empty}
\begin{document}
header2 = r\begin{center}
{\LARGE \textbf{1.\ kolokvij iz Matematičkog sofvera}}\\
{\Large\textbf{12.\ svibnja 2017.}}\\
\end{center}
header3=r\begin{enumerate}
footer1 = r\end{enumerate}
\vspace{5mm}
\textbf{Uputa}: Kolokvij se piše u Jupyter bilježnici (unutar direktorija \textit{1.\ kolokvij})
koju sam kreirao u tu svrhu.
Drugi zadatak se rješava korištenjem biblioteke \texttt{Numpy},
treći korištenjem biblioteke \texttt{Scipy}, četvrti korištenjem
biblioteke \texttt{Matplotlib} a peti korištenjem biblioteke \texttt{Sympy}.
\vspace{5mm}
\begin{flushright}
Potpis studenta:
\end{flushright}
\newpage
footer2=r
\end{document}
from numpy import random
with open('studenti.txt','r') as f:
studenti = list(f)
broj_studenata = len(studenti)
broj_zadataka = 30
datoteka = "ms_kol1.tex"
with open(datoteka,'w') as f:
f.write(header1+'\n')
for i in range(broj_studenata):
random.seed()
r = random.randint(1, broj_zadataka + 1, 5)  # numpy's randint excludes the upper bound, so +1 keeps task 30 selectable
f.write(header2)
f.write("\\begin{center}{\large \\textbf{Student: "+studenti[i][:-1]+"}}\end{center}\n\n")
f.write(header3)
for j in range(5):
z = str(j+1)+str(r[j]).zfill(2)
f.write('\\input zadaci-1/z'+z+'\n')
f.write(footer1)
f.write(footer2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: Skripta za generiranje kolokvija
Step6: Učitavanje potrebnih paketa & podataka
Step7: Kreiranje datoteke
|
11,647
|
<ASSISTANT_TASK:>
Python Code:
import geopyspark as gps
from pyspark import SparkContext
conf=gps.geopyspark_conf(appName="BristleConePine")
conf.set('spark.ui.enabled', True)
sc = SparkContext(conf = conf)
elev_rdd = gps.geotiff.get(
layer_type='spatial',
uri='s3://geopyspark-demo/elevation/ca-elevation.tif')
elev_tiled_rdd = elev_rdd.tile_to_layout(
layout=gps.GlobalLayout(),
target_crs=3857)
elev_pyramided_rdd = elev_tiled_rdd.pyramid().cache()
from geopyspark.geotrellis.color import get_colors_from_matplotlib
elev_histo = elev_pyramided_rdd.get_histogram()
elev_colors = get_colors_from_matplotlib('viridis', 100)
elev_color_map = gps.ColorMap.from_histogram(elev_histo, elev_colors)
elev_tms = gps.TMS.build(elev_pyramided_rdd, elev_color_map)
elev_tms.bind('0.0.0.0')
import folium
map_center = [37.75, -118.85]
zoom = 7
m = folium.Map(location=map_center, zoom_start=zoom)
folium.TileLayer(tiles="Stamen Terrain", overlay=False).add_to(m)
folium.TileLayer(tiles=elev_tms.url_pattern, attr="GeoPySpark", overlay=True).add_to(m)
m
# reclassify the tiled elevation layer: first bin the elevations by upper bound, then keep only the 3000-4000 m band (value 1)
elev_reclass_pre = elev_tiled_rdd.reclassify({1000:2, 2000:2, 3000:2, 4000:1, 5000:2}, int)
elev_reclass_rdd = elev_reclass_pre.reclassify({1:1}, int)
elev_reclass_pyramid_rdd = elev_reclass_rdd.pyramid()
elev_reclass_histo = elev_reclass_pyramid_rdd.get_histogram()
#elev_reclass_color_map = ColorMap.from_histogram(sc, elev_reclass_histo, get_breaks(sc, 'Viridis', num_colors=100))
elev_reclass_color_map = gps.ColorMap.from_colors(
breaks =[1],
color_list = [0xff000080])
elev_reclass_tms = gps.TMS.build(elev_reclass_pyramid_rdd, elev_reclass_color_map)
elev_reclass_tms.bind('0.0.0.0')
m2 = folium.Map(location=map_center, zoom_start=zoom)
folium.TileLayer(tiles="Stamen Terrain", overlay=False).add_to(m2)
folium.TileLayer(tiles=elev_tms.url_pattern, attr='GeoPySpark', name="Elevation", overlay=True).add_to(m2)
folium.TileLayer(tiles=elev_reclass_tms.url_pattern, attr='GeoPySpark', name="High Elevation Areas", overlay=True).add_to(m2)
folium.LayerControl().add_to(m2)
m2
# compute aspect with a focal operation over a 3x3 (square, extent 1) neighborhood
aspect_rdd = elev_tiled_rdd.focal(
gps.Operation.ASPECT,
gps.Neighborhood.SQUARE, 1)
aspect_pyramid_rdd = aspect_rdd.pyramid()
aspect_histo = aspect_pyramid_rdd.get_histogram()
aspect_color_map = gps.ColorMap.from_histogram(aspect_histo, get_colors_from_matplotlib('viridis', num_colors=256))
aspect_tms = gps.TMS.build(aspect_pyramid_rdd, aspect_color_map)
aspect_tms.bind('0.0.0.0')
m3 = folium.Map(tiles='Stamen Terrain', location=map_center, zoom_start=zoom)
folium.TileLayer(tiles=aspect_tms.url_pattern, attr='GeoPySpark', name="High Elevation Areas", overlay=True).add_to(m3)
m3
aspect_tms.unbind()
aspect_reclass_pre = aspect_rdd.reclassify({120:2, 240:1, 360: 2}, int)
aspect_reclass = aspect_reclass_pre.reclassify({1:1}, int)
aspect_reclass_pyramid_rdd = aspect_reclass.pyramid()
aspect_reclass_histo = aspect_reclass_pyramid_rdd.get_histogram()
aspect_reclass_color_map = gps.ColorMap.from_histogram(aspect_reclass_histo, get_colors_from_matplotlib('viridis', num_colors=256))
aspect_reclass_tms = gps.TMS.build(aspect_reclass_pyramid_rdd, aspect_reclass_color_map)
aspect_reclass_tms.bind('0.0.0.0')
m4 = folium.Map(tiles='Stamen Terrain', location=map_center, zoom_start=zoom)
folium.TileLayer(tiles=aspect_reclass_tms.url_pattern, attr='GeoPySpark', name="High Elevation Areas", overlay=True).add_to(m4)
m4
aspect_reclass_tms.unbind()
added = elev_reclass_pyramid_rdd + aspect_reclass_pyramid_rdd
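# Map-algebra addition: cells that passed both the elevation (1) and aspect (1)
# reclassifications sum to 2, so a value of 2 marks the candidate habitat.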
added_histo = added.get_histogram()
added_color_map = gps.ColorMap.from_histogram(added_histo, get_colors_from_matplotlib('viridis', num_colors=256))
added_tms = gps.TMS.build(added, added_color_map)
added_tms.bind('0.0.0.0')
m5 = folium.Map(tiles='Stamen Terrain', location=map_center, zoom_start=zoom)
folium.TileLayer(tiles=added_tms.url_pattern, attr='GeoPySpark', name="High Elevation Areas", overlay=True).add_to(m5)
m5
import matplotlib.pyplot as plt
%matplotlib inline
v = elev_tiled_rdd.lookup(342,787)
plt.imshow(v[0].cells[0])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You will need to set up a spark context. To learn more about what that means take a look here
Step2: Retrieving an elevation .tif from AWS S3
Step3: Tile, reproject, pyramid
Step4: Imports for creating a TMS server capable of serving layers with custom colormaps
Step5: Display the tiles in an embedded Folium map
Step6: Classify the elevation such that values of interest (between 3,000 and 4,000 meters) return a value of 1.
Step7: Focal operation
Step8: Reclassify values such that values between 120 and 240 degrees (south) have a value of 1
Step9: Now add the values togehter to find the suitable range
|
11,648
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-cm2-vhr4', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
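# For illustration only (placeholder text, not the actual CMCC submission):
# DOC.set_value("Land surface component of CMCC-CM2-VHR4.")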
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
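# Illustrative only: what a completed property looks like once filled in.
# The value below is a placeholder, not real model documentation.
# DOC.set_id('cmip6.land.lakes.wetlands.description')
# DOC.set_value("Wetlands are not explicitly represented in this model.")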
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil Layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
11,649
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.pipeline import Pipeline
from skutil.preprocessing import BoxCoxTransformer, SelectiveScaler
from skutil.feature_selection import MulticollinearityFilterer  # filterer used in the pipeline below
from skutil.decomposition import SelectivePCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
# build a pipeline
pipe = Pipeline([
('collinearity', MulticollinearityFilterer(threshold=0.85)),
('scaler' , SelectiveScaler()),
('boxcox' , BoxCoxTransformer()),
('pca' , SelectivePCA(n_components=0.9)),
('model' , RandomForestClassifier())
])
# fit the pipe, report scores
pipe.fit(X_train, y_train)
# report scores
print 'Train RF accuracy: %.5f' % accuracy_score(y_train, pipe.predict(X_train))
print 'Test RF accuracy: %.5f' % accuracy_score(y_test, pipe.predict(X_test))
from skutil.grid_search import RandomizedSearchCV
from sklearn.cross_validation import KFold
from sklearn.preprocessing import StandardScaler, RobustScaler
from skutil.feature_selection import NearZeroVarianceFilterer
from scipy.stats import randint, uniform
# default CV does not shuffle, so we define our own
custom_cv = KFold(n=y_train.shape[0], n_folds=5, shuffle=True, random_state=42)
# build a pipeline -- let's also add a NearZeroVarianceFilterer prior to PCA
pipe = Pipeline([
('collinearity', MulticollinearityFilterer(threshold=0.85)),
('scaler' , SelectiveScaler()),
('boxcox' , BoxCoxTransformer()),
('filterer' , NearZeroVarianceFilterer()),
('pca' , SelectivePCA(n_components=0.9)),
('model' , RandomForestClassifier(n_jobs=-1))
])
# let's define a set of hyper-parameters over which to search
hp = {
'collinearity__threshold' : uniform(loc=.8, scale=.15),
'collinearity__method' : ['pearson','kendall','spearman'],
'scaler__scaler' : [StandardScaler(), RobustScaler()],
'filterer__threshold' : uniform(loc=1e-6, scale=0.005),
'pca__n_components' : uniform(loc=.75, scale=.2),
'pca__whiten' : [True, False],
'model__n_estimators' : randint(5,100),
'model__max_depth' : randint(2,25),
'model__min_samples_leaf' : randint(1,15),
'model__max_features' : uniform(loc=.5, scale=.5),
'model__max_leaf_nodes' : randint(10,75)
}
# define the gridsearch
search = RandomizedSearchCV(pipe, hp,
n_iter=50,
scoring='accuracy',
cv=custom_cv,
random_state=42)
# fit the search
search.fit(X_train, y_train)
# report scores
print 'Train RF accuracy: %.5f' % accuracy_score(y_train, search.predict(X_train))
print 'Test RF accuracy: %.5f' % accuracy_score(y_test, search.predict(X_test))
search.best_params_
from sklearn.externals import joblib
# write the model
joblib.dump(search, 'final_model.pkl', compress=3)
from __future__ import print_function
# load the model
final_model = joblib.load('final_model.pkl')
# load your data
# new_data = pd.read_csv('...')
# ... any other pre-processing you may have done outside of the pipeline
# here's our example data
new_data = X
# make predictions
predictions = final_model.predict(new_data)
# view the top few
print(predictions[:5])
# view the performance (we can do this because we have the ground truth)
print(accuracy_score(iris.target, predictions))
# disk cleanup for git
!rm final_model.pkl
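# A hedged sketch of the "ensemble of ensembles" idea mentioned in the notes:
# combine the pipeline with a second estimator via soft voting. The choice of
# GradientBoostingClassifier and the default weights are illustrative only,
# not part of the original analysis.
from sklearn.ensemble import VotingClassifier, GradientBoostingClassifier
voter = VotingClassifier(estimators=[('rf_pipe', pipe),
                                     ('gbm', GradientBoostingClassifier())],
                         voting='soft')
voter.fit(X_train, y_train)
print('Test voting accuracy: %.5f' % accuracy_score(y_test, voter.predict(X_test)))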
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The performance isn't bad. The training accuracy is phenomenal, but the validation accuracy is sub-par. Plus, there's quite a bit of variance in the model, isn't there? Let's try to improve our performance as well as reduce the variability (while sacrificing some bias, unfortunately).
Step2: This is much better! We've dramatically reduced the variance in our model, but we've taken a slight hit in terms of bias. With different models, or even creating an ensemble of different models (ensemble of ensembles?), we could probably create an even better score.
Step3: Model persistence
Step4: Making predictions from a persistent model
|
11,650
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet
from mne.stats import permutation_cluster_1samp_test
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
tmin, tmax, event_id = -0.3, 0.6, 1
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
include = []
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, include=include, exclude='bads')
# Load condition 1
event_id = 1
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, eog=150e-6))
# Take only one channel
ch_name = 'MEG 1332'
epochs.pick_channels([ch_name])
evoked = epochs.average()
# Factor to down-sample the temporal dimension of the TFR computed by
# tfr_morlet. Decimation occurs after frequency decomposition and can
# be used to reduce memory usage (and possibly computational time of downstream
# operations such as nonparametric statistics) if you don't need high
# spectrotemporal resolution.
decim = 5
freqs = np.arange(8, 40, 2) # define frequencies of interest
sfreq = raw.info['sfreq'] # sampling in Hz
tfr_epochs = tfr_morlet(epochs, freqs, n_cycles=4., decim=decim,
average=False, return_itc=False, n_jobs=1)
# Baseline power
tfr_epochs.apply_baseline(mode='logratio', baseline=(-.100, 0))
# Crop in time to keep only what is between 0 and 400 ms
evoked.crop(0., 0.4)
tfr_epochs.crop(0., 0.4)
epochs_power = tfr_epochs.data[:, 0, :, :] # take the 1 channel
threshold = 2.5
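# A hedged alternative to the hard-coded cluster-forming threshold above:
# derive it from the t distribution (assumes a two-tailed test at p < 0.05
# across the epochs dimension). Kept in a separate variable so the analysis
# below is unchanged.
from scipy import stats
threshold_parametric = stats.distributions.t.ppf(1 - 0.05 / 2.,
                                                 epochs_power.shape[0] - 1)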
T_obs, clusters, cluster_p_values, H0 = \
permutation_cluster_1samp_test(epochs_power, n_permutations=100,
threshold=threshold, tail=0)
evoked_data = evoked.data
times = 1e3 * evoked.times
plt.figure()
plt.subplots_adjust(0.12, 0.08, 0.96, 0.94, 0.2, 0.43)
# Create new stats image with only significant clusters
T_obs_plot = np.nan * np.ones_like(T_obs)
for c, p_val in zip(clusters, cluster_p_values):
if p_val <= 0.05:
T_obs_plot[c] = T_obs[c]
vmax = np.max(np.abs(T_obs))
vmin = -vmax
plt.subplot(2, 1, 1)
plt.imshow(T_obs, cmap=plt.cm.gray,
extent=[times[0], times[-1], freqs[0], freqs[-1]],
aspect='auto', origin='lower', vmin=vmin, vmax=vmax)
plt.imshow(T_obs_plot, cmap=plt.cm.RdBu_r,
extent=[times[0], times[-1], freqs[0], freqs[-1]],
aspect='auto', origin='lower', vmin=vmin, vmax=vmax)
plt.colorbar()
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title('Induced power (%s)' % ch_name)
ax2 = plt.subplot(2, 1, 2)
evoked.plot(axes=[ax2])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Compute statistic
Step3: View time-frequency plots
|
11,651
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
ioos_tools = os.path.join(os.path.pardir)
sys.path.append(ioos_tools)
from datetime import datetime, timedelta
import dateutil.parser
service_type = 'WMS'
min_lon, min_lat = -90.0, 30.0
max_lon, max_lat = -80.0, 40.0
bbox = [min_lon, min_lat, max_lon, max_lat]
crs = 'urn:ogc:def:crs:OGC:1.3:CRS84'
# Temporal range: Last week.
now = datetime.utcnow()
start, stop = now - timedelta(days=(7)), now
start = dateutil.parser.parse('2017-03-01T00:00:00Z')
stop = dateutil.parser.parse('2017-04-01T00:00:00Z')
# Ocean Model Names
model_names = ['NAM', 'GFS']
from owslib import fes
from ioos_tools.ioos import fes_date_filter
kw = dict(wildCard='*', escapeChar='\\',
singleChar='?', propertyname='apiso:AnyText')
or_filt = fes.Or([fes.PropertyIsLike(literal=('*%s*' % val), **kw)
for val in model_names])
kw = dict(wildCard='*', escapeChar='\\',
singleChar='?', propertyname='apiso:ServiceType')
serviceType = fes.PropertyIsLike(literal=('*%s*' % service_type), **kw)
begin, end = fes_date_filter(start, stop)
bbox_crs = fes.BBox(bbox, crs=crs)
filter_list = [
fes.And(
[
bbox_crs, # bounding box
begin, end, # start and end date
or_filt, # or conditions (CF variable names)
serviceType # search only for datasets that have WMS services
]
)
]
from owslib.csw import CatalogueServiceWeb
endpoint = 'https://data.ioos.us/csw'
csw = CatalogueServiceWeb(endpoint, timeout=60)
def get_csw_records(csw, filter_list, pagesize=10, maxrecords=1000):
    """Iterate `maxrecords`/`pagesize` times until the requested value in
    `maxrecords` is reached."""
from owslib.fes import SortBy, SortProperty
# Iterate over sorted results.
sortby = SortBy([SortProperty('dc:title', 'ASC')])
csw_records = {}
startposition = 0
nextrecord = getattr(csw, 'results', 1)
while nextrecord != 0:
csw.getrecords2(constraints=filter_list, startposition=startposition,
maxrecords=pagesize, sortby=sortby)
csw_records.update(csw.records)
if csw.results['nextrecord'] == 0:
break
startposition += pagesize + 1 # Last one is included.
if startposition >= maxrecords:
break
csw.records.update(csw_records)
get_csw_records(csw, filter_list, pagesize=10, maxrecords=1000)
records = '\n'.join(csw.records.keys())
print('Found {} records.\n'.format(len(csw.records.keys())))
for key, value in list(csw.records.items()):
print('[{}]\n{}\n'.format(value.title, key))
csw.request
#write to JSON for use in TerriaJS
csw_request = '"{}": "{}"'.format('getRecordsTemplate', str(csw.request, 'utf-8'))
import io
import json
with io.open('query.json', 'a', encoding='utf-8') as f:
f.write(json.dumps(csw_request, ensure_ascii=False))
f.write('\n')
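# A hedged sketch: pull the service endpoints out of the matched records.
# Assumes each owslib CswRecord exposes a `references` list of dicts with
# 'scheme' and 'url' keys, which is the usual layout.
for key, rec in list(csw.records.items()):
    for ref in rec.references:
        if 'wms' in str(ref.get('scheme', '')).lower():
            print('{} -> {}'.format(rec.title, ref['url']))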
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's start by creating the search filters.
Step2: With these 3 elements it is possible to assemble a OGC Filter Encoding (FE) using the owslib.fes* module.
Step4: The csw object created from CatalogueServiceWeb did not fetch anything yet.
|
11,652
|
<ASSISTANT_TASK:>
Python Code:
import functools
def myfunc(a, b=2):
"Docstring for myfunc()."
print(' called myfunc with:', (a, b))
def show_details(name, f, is_partial=False):
"Show details of a callable object."
print('{}:'.format(name))
print(' object:', f)
if not is_partial:
print(' __name__:', f.__name__)
if is_partial:
print(' func:', f.func)
print(' args:', f.args)
print(' keywords:', f.keywords)
return
show_details('myfunc', myfunc)
myfunc('a', 3)
print()
# Set a different default value for 'b', but require
# the caller to provide 'a'.
p1 = functools.partial(myfunc, b=4)
show_details('partial with named default', p1, True)
p1('passing a')
p1('override b', b=5)
print()
# Set default values for both 'a' and 'b'.
p2 = functools.partial(myfunc, 'default a', b=99)
show_details('partial with defaults', p2, True)
p2()
p2(b='override b')
print()
print('Insufficient arguments:')
p1()
import functools
def myfunc(a, b=2):
"Docstring for myfunc()."
print(' called myfunc with:', (a, b))
def show_details(name, f):
"Show details of a callable object."
print('{}:'.format(name))
print(' object:', f)
print(' __name__:', end=' ')
try:
print(f.__name__)
except AttributeError:
print('(no __name__)')
print(' __doc__', repr(f.__doc__))
print()
show_details('myfunc', myfunc)
p1 = functools.partial(myfunc, b=4)
show_details('raw wrapper', p1)
print('Updating wrapper:')
print(' assign:', functools.WRAPPER_ASSIGNMENTS)
print(' update:', functools.WRAPPER_UPDATES)
print()
functools.update_wrapper(p1, myfunc)
show_details('updated wrapper', p1)
import functools
class MyClass:
"Demonstration class for functools"
def __call__(self, e, f=6):
"Docstring for MyClass.__call__"
print(' called object with:', (self, e, f))
def show_details(name, f):
"Show details of a callable object."
print('{}:'.format(name))
print(' object:', f)
print(' __name__:', end=' ')
try:
print(f.__name__)
except AttributeError:
print('(no __name__)')
print(' __doc__', repr(f.__doc__))
return
o = MyClass()
show_details('instance', o)
o('e goes here')
print()
p = functools.partial(o, e='default for e', f=8)
functools.update_wrapper(p, o)
show_details('instance wrapper', p)
p()
import functools
def standalone(self, a=1, b=2):
"Standalone function"
print(' called standalone with:', (self, a, b))
if self is not None:
print(' self.attr =', self.attr)
class MyClass:
"Demonstration class for functools"
def __init__(self):
self.attr = 'instance attribute'
method1 = functools.partialmethod(standalone)
method2 = functools.partial(standalone)
o = MyClass()
print('standalone')
standalone(None)
print()
print('method1 as partialmethod')
o.method1()
print()
print('method2 as partial')
try:
o.method2()
except TypeError as err:
print('ERROR: {}'.format(err))
import functools
def show_details(name, f):
"Show details of a callable object."
print('{}:'.format(name))
print(' object:', f)
print(' __name__:', end=' ')
try:
print(f.__name__)
except AttributeError:
print('(no __name__)')
print(' __doc__', repr(f.__doc__))
print()
def simple_decorator(f):
@functools.wraps(f)
def decorated(a='decorated defaults', b=1):
print(' decorated:', (a, b))
print(' ', end=' ')
return f(a, b=b)
return decorated
def myfunc(a, b=2):
"myfunc() is not complicated"
print(' myfunc:', (a, b))
return
# The raw function
show_details('myfunc', myfunc)
myfunc('unwrapped, default b')
myfunc('unwrapped, passing b', 3)
print()
# Wrap explicitly
wrapped_myfunc = simple_decorator(myfunc)
show_details('wrapped_myfunc', wrapped_myfunc)
wrapped_myfunc()
wrapped_myfunc('args to wrapped', 4)
print()
# Wrap with decorator syntax
@simple_decorator
def decorated_myfunc(a, b):
myfunc(a, b)
return
show_details('decorated_myfunc', decorated_myfunc)
decorated_myfunc()
decorated_myfunc('args to decorated', 4)
import functools
import inspect
from pprint import pprint
@functools.total_ordering
class MyObject:
def __init__(self, val):
self.val = val
def __eq__(self, other):
print(' testing __eq__({}, {})'.format(
self.val, other.val))
return self.val == other.val
def __gt__(self, other):
print(' testing __gt__({}, {})'.format(
self.val, other.val))
return self.val > other.val
print('Methods:\n')
pprint(inspect.getmembers(MyObject, inspect.isfunction))
a = MyObject(1)
b = MyObject(2)
print('\nComparisons:')
for expr in ['a < b', 'a <= b', 'a == b', 'a >= b', 'a > b']:
print('\n{:<6}:'.format(expr))
result = eval(expr)
print(' result of {}: {}'.format(expr, result))
import functools
class MyObject:
def __init__(self, val):
self.val = val
def __str__(self):
return 'MyObject({})'.format(self.val)
def compare_obj(a, b):
    """Old-style comparison function."""
print('comparing {} and {}'.format(a, b))
if a.val < b.val:
return -1
elif a.val > b.val:
return 1
return 0
# Make a key function using cmp_to_key()
get_key = functools.cmp_to_key(compare_obj)
def get_key_wrapper(o):
"Wrapper function for get_key to allow for print statements."
new_key = get_key(o)
print('key_wrapper({}) -> {!r}'.format(o, new_key))
return new_key
objs = [MyObject(x) for x in range(5, 0, -1)]
for o in sorted(objs, key=get_key_wrapper):
print(o)
import functools
@functools.lru_cache()
def expensive(a, b):
print('expensive({}, {})'.format(a, b))
return a * b
MAX = 2
print('First set of calls:')
for i in range(MAX):
for j in range(MAX):
expensive(i, j)
print(expensive.cache_info())
print('\nSecond set of calls:')
for i in range(MAX + 1):
for j in range(MAX + 1):
expensive(i, j)
print(expensive.cache_info())
print('\nClearing cache:')
expensive.cache_clear()
print(expensive.cache_info())
print('\nThird set of calls:')
for i in range(MAX):
for j in range(MAX):
expensive(i, j)
print(expensive.cache_info())
import functools
def do_reduce(a, b):
print('do_reduce({}, {})'.format(a, b))
return a + b
data = range(1, 5)
print(data)
result = functools.reduce(do_reduce, data)
print('result: {}'.format(result))
import functools
@functools.singledispatch
def myfunc(arg):
print('default myfunc({!r})'.format(arg))
@myfunc.register(int)
def myfunc_int(arg):
print('myfunc_int({})'.format(arg))
@myfunc.register(list)
def myfunc_list(arg):
print('myfunc_list()')
for item in arg:
print(' {}'.format(item))
myfunc('string argument')
myfunc(1)
myfunc(2.3)
myfunc(['a', 'b', 'c'])
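# Dispatch follows the method resolution order, so registering a handler for
# a base class also covers its subclasses unless a more specific handler is
# registered. A minimal sketch:
class MyList(list):
    "A subclass of list with no handler of its own."

myfunc(MyList(['x', 'y']))  # resolves to myfunc_list via inheritance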
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Acquiring Function Properties
Step2: Other Callables
Step3: Methods and Functions
Step4: method1() can be called from an instance of MyClass, and the instance is passed as the first argument just as with methods defined normally. method2() is not set up as a bound method, and so the self argument must be passed explicitly, or the call will result in a TypeError.
Step5: Comparison
Step7: Collation Order
Step8: Caching
Step9: Reducing a Data Set
Step10: Generic Functions
|
11,653
|
<ASSISTANT_TASK:>
Python Code:
# imports assumed by this notebook; the map `m` and its `center`/`zoom`
# come from an earlier (elided) cell
from ipyleaflet import Map, DrawControl, GeoJSON
from traitlets import link

dc = DrawControl(marker={'shapeOptions': {'color': '#0000FF'}},
rectangle={'shapeOptions': {'color': '#0000FF'}},
circle={'shapeOptions': {'color': '#0000FF'}},
circlemarker={},
)
def handle_draw(self, action, geo_json):
print(action)
print(geo_json)
dc.on_draw(handle_draw)
m.add_control(dc)
dc.last_action
dc.last_draw
dc.clear_circles()
dc.clear_polylines()
dc.clear_rectangles()
dc.clear_markers()
dc.clear_polygons()
dc.clear()
m2 = Map(center=center, zoom=zoom, layout=dict(width='600px', height='400px'))
m2
map_center_link = link((m, 'center'), (m2, 'center'))
map_zoom_link = link((m, 'zoom'), (m2, 'zoom'))
new_poly = GeoJSON(data=dc.last_draw)
m2.add_layer(new_poly)
dc2 = DrawControl(polygon={'shapeOptions': {'color': '#0000FF'}}, polyline={},
circle={'shapeOptions': {'color': '#0000FF'}})
m2.add_control(dc2)
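# A hedged sketch of restyling a drawn shape: assumes dc.last_draw is a
# GeoJSON Feature dict whose properties carry a 'style' mapping, as printed
# by handle_draw above.
styled = dict(dc.last_draw)
styled.setdefault('properties', {})['style'] = {'color': '#FF0000',
                                                'fillOpacity': 0.3}
m2.add_layer(GeoJSON(data=styled))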
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In addition, the DrawControl also has last_action and last_draw attributes that are created dynamically anytime a new drawn path arrives.
Step2: It's possible to remove all drawings from the map
Step3: Let's draw a second map and try to import this GeoJSON data into it.
Step4: We can use link to synchronize traitlets of the two maps
Step5: Note that the style is preserved! If you wanted to change the style, you could edit the properties.style dictionary of the GeoJSON data. Or, you could even style the original path in the DrawControl by setting the polygon dictionary of that object. See the code for details.
|
11,654
|
<ASSISTANT_TASK:>
Python Code:
def htop(h,units='milibar'):
    '''h: height in m above sea level;
    returns pressure in the requested units (default millibar)'''
k=1
if units=='Pa':
k=1
if units=='mmhg':
k=7.50061683/1000.
if units=='milibar':
k=1./100.
return 101325*k* (1. - 2.25577E-5* h)**5.25588
def ptoh(p,units='milibar'):
    '''p: pressure in millibar (the units argument is currently unused);
    returns height in m above sea level'''
return 44330.76*( 1-(100*p/101325)**(0.1902631) )
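# quick sanity check: the two conversions are (near-)inverses of each other
# for the standard-atmosphere constants used above
p0 = 1000.0  # millibar, an illustrative value
print(htop(ptoh(p0)))  # expected to print ~1000.0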
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
ap=pd.DataFrame()
ap['p']=np.loadtxt('p.txt')
ap['h']=ptoh(ap.p)
plt.plot(ap.index,ap.h,'r-',lw=2)
plt.xlabel('Elapsed time (s)',size=20)
plt.ylabel('Building height (m)',size=20)
plt.plot(ap.index,ap.h-ap.h.min(),'r-',lw=2)
plt.title('The height of Entrebosques from S3 to 26 is %g m' %round( (ap.h.max()-ap.h.min()),1 ))
plt.xlabel('Elapsed time (s)',size=20)
plt.ylabel('Building height (m)',size=20)
ap.p.min()
ptoh(ap.p.min())
print('floor')
raw_input()
print('head')
raw_input()
print('height')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Height above sea level
Step2: Height of the building
|
11,655
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
from sequana import GenomeCov, sequana_data
rcParams['figure.figsize'] = (10,6)
gc = GenomeCov(sequana_data("virus.bed", "data"), low_threshold=-2.5, high_threshold=2.5)
chrom = gc[0]
N = 4001
chrom.running_median(N, circular=True)
chrom.compute_zscore()
chrom.plot_coverage()
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
def f(N):
chrom.running_median(N, circular=True)
chrom.compute_zscore()
chrom.plot_coverage()
ylim([1000,5500])
plt.show()
# plt.show is to fix issue reported in :
# https://stackoverflow.com/questions/44329068/jupyter-notebook-interactive-plot-with-widgets
interact(f, N=widgets.IntSlider(min=501,max=8001, step=200))
chrom.running_median(4101)
chrom.compute_zscore()
chrom.get_rois().get_low_rois()
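# the symmetric call for over-covered regions; assumes the ROIs object
# exposes get_high_rois() alongside get_low_rois()
chrom.get_rois().get_high_rois()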
print(chrom)
chrom.get_centralness()
print(chrom.get_stats())
filename = sequana_data("JB409847.bed")
reference = sequana_data("JB409847.fasta")
gc = GenomeCov(filename)
gc.compute_gc_content(reference)
chrom = gc[0]
chrom.get_gc_correlation()
chrom.plot_gc_vs_coverage(cmap="BrBG", Nlevels=0, bins=[80,50])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read a Coverage file in BED format
Step2: Select one chromosome (there is only one in this case)
Step3: Compute the running median and plot the results
Step4: Interactive view of the effects of the running window
Step5: Regions of interest
Step6: Some statistics
Step7: GC correlation
|
11,656
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd

s = pd.Series([4, 7, -5, 3])
s
s.values
type(s.values)
s.index
type(s.index)
s * 2
np.exp(s)
s2 = pd.Series([4, 7, -5, 3], index=["d", "b", "a", "c"])
s2
s2.index
s2['a']
s2['b':'c']
s2[["a", "b"]]
s2[2]
s2[1:4]
s2[[2, 1]]
s2[s2 > 0]
"a" in s2, "e" in s2
for i, j in s2.iteritems():
print(i, j)
s2["d":"a"]
sdata = {'Ohio': 35000, 'Texas': 71000, 'Oregon': 16000, 'Utah': 5000}
s3 = pd.Series(sdata)
s3
states = ['Califonia', 'Ohio', 'Oregon', 'Texas']
s4 = pd.Series(sdata, index=states)
s4
pd.isnull(s)
pd.notnull(s4)
s4.isnull()
s4.notnull()
print(s3.values, s4.values)
s3.values + s4.values
s3 + s4  # Utah is NaN, so the operation apparently only applies where both sides have a value; if either is missing, the result is NaN
s4
s4.name = "population"
s4
s4.index.name = "state"
s4
s
s.index
s.index = ['Bob', 'Steve', 'Jeff', 'Ryan']
s
s.index
data = {
'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],
'year': [2001, 2001, 2002, 2001, 2002],
'pop': [1.5, 1.7, 3.6, 2.4, 2.9]
}
df = pd.DataFrame(data)
df
pd.DataFrame(data, columns=['year', 'state', 'pop'])
df.dtypes
df2 = pd.DataFrame(data,
columns=['year', 'state', 'pop', 'debt'],
index=['one', 'two', 'three', 'four', 'five'])
df2
df["state"]
type(df["state"]), type([df["state"]])
[df["state"]]
df.state
df2['debt'] = 16.5, 16.2, 16.3, 16.7, 16.2
df2
df2['debt'] = 16.5
df2
df2['debt'] = np.arange(5)
df2
df2['debt'] = pd.Series([-1.2, -1.5, -1.7], index=['two', 'four', 'five'])
df2
df2['eastern'] = df2.state == 'Ohio'
df2
del df2["eastern"]
df2
x = [3, 6, 1, 4]
sorted(x)
x
x.sort()
x
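# the same copy-vs-in-place distinction applies to pandas objects; a small
# sketch (sort_values is the standard Series method for this)
sp = pd.Series([3, 6, 1, 4])
sp.sort_values()              # returns a new, sorted Series
sp.sort_values(inplace=True)  # sorts sp itself and returns None
sp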
s = pd.Series(np.arange(5.), index=['a', 'b', 'c', 'd', 'e'])
s
s2 = s.drop('c')
s2
s
s.drop(["b", "c"])
df = pd.DataFrame(np.arange(16).reshape((4, 4)),
index=['Ohio', 'Colorado', 'Utah', 'New York'],
columns=['one', 'two', 'three', 'four'])
df
df.drop(['Colorado', 'Ohio'])
df.drop('two', axis=1)
df.drop(['two', 'four'], axis=1)
pop = {
'Nevada': {
2001: 2.4,
2002: 2.9
},
'Ohio': {
2000: 1.5,
2001: 1.7,
2002: 3.6
}
}
df3 = pd.DataFrame(pop)
df3
pdata = {
'Ohio': df3['Ohio'][:-1],
'Nevada': df3['Nevada'][:3]
}
pd.DataFrame(pdata)
df3.values
df2.values
df3.values
df2.values
df2
df2["year"]
df2.year
df2[["state", "debt", "year"]]
df2[["year"]]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Vectorized Operation
Step2: Series with an explicit Index
Step3: Series Indexing 1
Step4: Series Indexing 2
Step5: dict-style operations
Step6: Creating a Series from dict data
Step7: Index-aligned operations
Step8: Index names
Step9: Changing the Index
Step10: DataFrame
Step11: DataFrame with an explicit Column/Row Index
Step12: Single Column Access
Step13: Cloumn Data Update
Step14: Add Column
Step15: Delete Column
Step16: The inplace option
Step17: Deleting Rows/Columns with the drop method
Step18: Creating a DataFrame from a nested dict
Step19: Creating a DataFrame from a dict of Series
Step20: Converting to a NumPy array
Step21: Column Indexing on a DataFrame
|
11,657
|
<ASSISTANT_TASK:>
Python Code:
# this is a comment and will not run in the code
'''this is just a mulit line comment'''
pwd
#addition
2+1
# substraction
2-1
1-2
2*2
3/2
3.0/2
float(3)/2
3/float(2)
from __future__ import division
3/2
1/2
2/3
root(2)
sqrt(2)
4^2
4^.5
4**.5
a=5
a=6
a+a
a
0.1+0.2-0.3
'hello'
'this entire thing can be a string'
"this is using double quotes"
print 'hello'
print("hello")
s='hello'
s
len(s)
print(s)
s[3]
s[10]
s[5]
s[2:4]
z*10
letter='z'
letter*10
letter.upper()
letter.center(10)  # center() takes an integer width, not a string
print 'this is a string'
s = 'STRING'
print 'place another string with a mod and s: %s' %(s)
from __future__ import print_function
print('hello')
print('one: {x}'.format(x='INSERT'))
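# positional fields work too; a small extra example
print('first: {} second: {}'.format('a', 'b'))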
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: strings - you can use the %s to format strings into your print statements
|
11,658
|
<ASSISTANT_TASK:>
Python Code:
import sys
import scipy.io as sio
import glob
import numpy as np
import matplotlib.pyplot as plt
from skimage.filters import threshold_otsu
sys.path.append('../code/functions')
import qaLib as qLib
sys.path.append('../../pipeline_1/code/functions')
import connectLib as cLib
from IPython.display import Image
import random
from connectLib import otsuVox
Image(filename = "images/nonMaxima.png")
def nonMaximaSupression(clusterList, image, z):
randClusterDist = []
for i in range(100000):
point = [int(random.random()*image.shape[0]), int(random.random()*image.shape[1]), int(random.random()*image.shape[2])]
randClusterDist.append(image[point[0]][point[1]][point[2]])
mu = np.average(randClusterDist)
sigma = np.std(randClusterDist)
aveList = []
for cluster in clusterList:
curClusterDist = []
for member in cluster.members:
curClusterDist.append(image[member[0]][member[1]][member[2]])
aveList.append(np.mean(curClusterDist))
finalClusters = []
    # iterate clusters and their mean intensities in lockstep
    for cluster, ave in zip(clusterList, aveList):
        if (ave - mu) / float(sigma) > z:
            finalClusters.append(cluster)
return finalClusters
simEasyGrid = np.zeros((100, 100, 100))
for i in range(4):
for j in range(4):
for k in range(4):
simEasyGrid[20*(2*j): 20*(2*j + 1), 20*(2*i): 20*(2*i + 1), 20*(2*k): 20*(2*k + 1)] = i + j + k + 1
plt.imshow(simEasyGrid[5])
plt.axis('off')
plt.title('Easy Data Raw Plot at z=5')
plt.show()
plt.hist(simEasyGrid[0])
plt.title("Histogram of Easy Data")
plt.show()
simDiff = np.zeros((100, 100, 100))
for i in range(100):
for j in range(100):
for k in range(100):
simDiff[i][j][k] = 100
plt.imshow(simDiff[5])
plt.axis('off')
plt.title('Challenging Data Raw Plot at z=5')
plt.show()
plt.hist(simDiff[0], bins=20)
plt.title("Histogram of Challenging Data")
plt.show()
simEasyGrid = np.zeros((100, 100, 100))
for i in range(4):
for j in range(4):
for k in range(4):
simEasyGrid[20*(2*j): 20*(2*j + 1), 20*(2*i): 20*(2*i + 1), 20*(2*k): 20*(2*k + 1)] = i + j + k + 1
plt.imshow(simEasyGrid[5])
plt.axis('off')
plt.title('Easy Data Raw Plot at z=5')
plt.show()
plt.hist(simEasyGrid[0])
plt.title("Histogram of Easy Data")
plt.show()
simDiff = np.zeros((100, 100, 100))
for i in range(100):
for j in range(100):
for k in range(100):
simDiff[i][j][k] = 100
plt.imshow(simDiff[5])
plt.axis('off')
plt.title('Challenging Data Raw Plot at z=5')
plt.show()
plt.hist(simDiff[0], bins=20)
plt.title("Histogram of Challenging Data")
plt.show()
otsuOutEasy = otsuVox(simEasyGrid)
otsuClustersEasy = cLib.clusterThresh(otsuOutEasy, 0, 1000000)
nonMaxClusters = nonMaximaSupression(otsuClustersEasy, simEasyGrid, 1)
nonMaxEasy = np.zeros_like(simEasyGrid)
for cluster in nonMaxClusters:
for member in cluster.members:
nonMaxEasy[member[0]][member[1]][member[2]] = 1
plt.imshow(nonMaxEasy[5])
plt.axis('off')
plt.title('Non Max Supression Output for Easy Data Slice at z=5')
plt.show()
otsuOutDiff = otsuVox(simDiff)
otsuClustersDiff = cLib.clusterThresh(otsuOutDiff, 0, 1000000)
nonMaxClusters = nonMaximaSupression(otsuClustersDiff, simDiff, 0)
nonMaxDiff = np.zeros_like(simDiff)
for cluster in nonMaxClusters:
for member in cluster.members:
nonMaxDiff[member[0]][member[1]][member[2]] = 1
plt.imshow(nonMaxDiff[5])
plt.axis('off')
plt.title('Non Max Supression Output for Difficult Data Slice at z=5')
plt.show()
procData = []
for mat in glob.glob('../../data/matlabData/collman15v2/*_p1.mat'):
name = mat[34:-7]
rawData = sio.loadmat(mat)
npData = np.rollaxis(rawData[name], 2, 0)
procData.append([name, npData])
realData = procData[12][1]
otsuOutReal = otsuVox(realData)
plt.imshow(otsuOutReal[0], cmap='gray')
plt.title('Real Data otsuVox Output At Slice 0')
plt.axis('off')
plt.show()
plt.hist(otsuOutReal[0])
plt.title("Histogram of Post-Otsu Data")
plt.show()
otsuClusters = cLib.clusterThresh(otsuOutReal, 0, 10000000)
nonMaxClusters = nonMaximaSupression(otsuClusters, realData, 6)
nonMaxImg = np.zeros_like(realData)
for cluster in nonMaxClusters:
for member in cluster.members:
nonMaxImg[member[0]][member[1]][member[2]] = 1
plt.imshow(nonMaxImg[0], cmap='gray')
plt.title('NonMaximaSupression Output At Slice 0')
plt.axis('off')
plt.show()
labelClusters = cLib.clusterThresh(procData[0][1], 0, 10000000)
otsuClusters = cLib.clusterThresh(otsuOutReal, 0, 10000000)
precision, recall, F1 = qLib.precision_recall_f1(labelClusters, otsuClusters)
print 'Precision: ' + str(precision)
print 'Recall: ' + str(recall)
print 'F1: ' + str(F1)
precision, recall, F1 = qLib.precision_recall_f1(labelClusters, nonMaxClusters)
print 'Precision: ' + str(precision)
print 'Recall: ' + str(recall)
print 'F1: ' + str(F1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Algorithm
Step2: Actual Code
Step3: Algorithm Conditions
Step4: Prediction on Good Data
Step5: Prediction on Challenging Data
Step6: The easy data looks exactly as I expected. The histogram has deviation, meaning nonMaxSupression will be able to extract maxima.
Step7: The difficult data looks exactly as I expected. The histogram is a single value, which is the kind of data nonMaxSupression fails on.
Step8: As expected, otsuVox picked up just the brightest clusters.
Step9: As expected, otsuVox failed to pick out bright things because there was no deviation in the image.
Step10: As we can see, the real data has a mean and a standard deviation. This means that nonMaximaSupression should be able to extract the bright spots.
Step11: Precision/Recall/F1 before nonMaximaSupression
Step12: Precision/Recall/F1 after nonMaximaSupression
|
11,659
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pysalvador as sal
demerec=sal.demerec_data
np.transpose(demerec)
sal.newtonLD(demerec)
sal.newtonLD(demerec, show_iter=True)
sal.confintLD(demerec,show_iter=True)
luria16=sal.luria_16_data
luria16
sal.newtonLD_plating(luria16,e=0.4,show_iter=True)
sal.confintLD_plating(luria16,e=0.4,show_iter=True)
sal.newtonMK(demerec,w=0.9,show_iter=True)
sal.confintMK(demerec,w=0.9,show_iter=True)
mydata=[0,16,20,2,2,56,3,161,9]
sal.newtonLD(mydata)
sal.confintLD(mydata)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The basic model
Step2: To obtain a maximum likelihood estimate of the expected number of mutations per culture, m, you execute the following.
Step3: You may watch the iteration process as follows.
Step4: A 95% confident interval for m can be obtained as follows.
Step5: Partial plating
Step6: Accounting for fitness
Step7: Using your own data
|
11,660
|
<ASSISTANT_TASK:>
Python Code:
#obj = ["3C 454.3", 343.49062, 16.14821, 1.0]
obj = ["PKS J0006-0623", 1.55789, -6.39315, 1.0]
#obj = ["M87", 187.705930, 12.391123, 1.0]
#### name, ra, dec, radius of cone
obj_name = obj[0]
obj_ra = obj[1]
obj_dec = obj[2]
cone_radius = obj[3]
obj_coord = coordinates.SkyCoord(ra=obj_ra, dec=obj_dec, unit=(u.deg, u.deg), frame="icrs")
# Query data
data_2mass = Irsa.query_region(obj_coord, catalog="fp_psc", radius=cone_radius * u.deg)
data_wise = Irsa.query_region(obj_coord, catalog="allwise_p3as_psd", radius=cone_radius * u.deg)
__data_galex = Vizier.query_region(obj_coord, catalog='II/335', radius=cone_radius * u.deg)
data_galex = __data_galex[0]
num_2mass = len(data_2mass)
num_wise = len(data_wise)
num_galex = len(data_galex)
print("Number of object in (2MASS, WISE, GALEX): ", num_2mass, num_wise, num_galex)
# use only coordinate columns
ra_2mass = data_2mass['ra']
dec_2mass = data_2mass['dec']
c_2mass = coordinates.SkyCoord(ra=ra_2mass, dec=dec_2mass, unit=(u.deg, u.deg), frame="icrs")
ra_wise = data_wise['ra']
dec_wise = data_wise['dec']
c_wise = coordinates.SkyCoord(ra=ra_wise, dec=dec_wise, unit=(u.deg, u.deg), frame="icrs")
ra_galex = data_galex['RAJ2000']
dec_galex = data_galex['DEJ2000']
c_galex = coordinates.SkyCoord(ra=ra_galex, dec=dec_galex, unit=(u.deg, u.deg), frame="icrs")
####
sep_min = 1.0 * u.arcsec # minimum separation in arcsec
# Only 2MASS and WISE matching
#
idx_2mass, idx_wise, d2d, d3d = c_wise.search_around_sky(c_2mass, sep_min)
# select only one nearest if there are more in the search reagion (minimum seperation parameter)!
print("Only 2MASS and WISE: ", len(idx_2mass))
# from matching of 2 cats (2MASS and WISE) coordinate
data_2mass_matchwith_wise = data_2mass[idx_2mass]
data_wise_matchwith_2mass = data_wise[idx_wise] # WISE dataset
w1 = data_wise_matchwith_2mass['w1mpro']
j = data_2mass_matchwith_wise['j_m']
w1j = w1-j
cutw1j = -1.7 # https://academic.oup.com/mnras/article/448/2/1305/1055284
# WISE galaxy data -> from cut
galaxy = data_wise_matchwith_2mass[w1j < cutw1j]
print("Number of galaxy from cut W1-J:", len(galaxy))
w1j_galaxy = w1j[w1j<cutw1j]
w1_galaxy = w1[w1j<cutw1j]
plt.scatter(w1j, w1, marker='o', color='blue')
plt.scatter(w1j_galaxy, w1_galaxy, marker='.', color="red")
plt.axvline(x=cutw1j) # https://academic.oup.com/mnras/article/448/2/1305/1055284
# GALEX
###
# coord of object in 2mass which match wise (first objet/nearest in sep_min region)
c_2mass_matchwith_wise = c_2mass[idx_2mass]
c_wise_matchwith_2mass = c_wise[idx_wise]
#Check with 2mass cut
idx_2mass_wise_galex, idx_galex1, d2d, d3d = c_galex.search_around_sky(c_2mass_matchwith_wise, sep_min)
num_galex1 = len(idx_galex1)
#Check with wise cut
idx_wise_2mass_galex, idx_galex2, d2d, d3d = c_galex.search_around_sky(c_wise_matchwith_2mass, sep_min)
num_galex2 = len(idx_galex2)
print("Number of GALEX match in 2MASS cut (with WISE): ", num_galex1)
print("Number of GALEX match in WISE cut (with 2MASS): ", num_galex2)
# diff/average
print("Confusion level: ", abs(num_galex1 - num_galex2)/np.mean([num_galex1, num_galex2])*100, "%")
# Choose which one is smaller!
if num_galex1 > num_galex2:
select_from_galex = idx_galex1
match_galex = data_galex[select_from_galex]
c_selected_galex = c_galex[select_from_galex]
# 2MASS from GALEX_selected
_idx_galex1, _idx_2mass, d2d, d3d = c_2mass.search_around_sky(c_selected_galex, sep_min)
match_2mass = data_2mass[_idx_2mass]
# WISE from 2MASS_selected
_ra_match_2mass = match_2mass['ra']
_dec_match_2mass = match_2mass['dec']
_c_match_2mass = coordinates.SkyCoord(ra=_ra_match_2mass, dec=_dec_match_2mass, unit=(u.deg, u.deg), frame="icrs")
_idx, _idx_wise, d2d, d3d = c_wise.search_around_sky(_c_match_2mass, sep_min)
match_wise = data_wise[_idx_wise]
else:
select_from_galex = idx_galex2
match_galex = data_galex[select_from_galex]
c_selected_galex = c_galex[select_from_galex]
# WISE from GALEX_selected
_idx_galex1, _idx_wise, d2d, d3d = c_wise.search_around_sky(c_selected_galex, sep_min)
match_wise = data_wise[_idx_wise]
# 2MASS from WISE_selected
_ra_match_wise = match_wise['ra']
_dec_match_wise = match_wise['dec']
_c_match_wise = coordinates.SkyCoord(ra=_ra_match_wise, dec=_dec_match_wise, unit=(u.deg, u.deg), frame="icrs")
_idx, _idx_2mass, d2d, d3d = c_2mass.search_around_sky(_c_match_wise, sep_min)
match_2mass = data_2mass[_idx_2mass]
print("Number of match in GALEX: ", len(match_galex))
print("Number of match in 2MASS: ", len(match_2mass))
print("Number of match in WISE : ", len(match_wise))
joindata = np.array([match_2mass['j_m'],
match_2mass['j_m']-match_2mass['h_m'],
match_2mass['j_m']-match_2mass['k_m'],
match_2mass['j_m']-match_wise['w1mpro'],
match_2mass['j_m']-match_wise['w2mpro'],
match_2mass['j_m']-match_wise['w3mpro'],
match_2mass['j_m']-match_wise['w4mpro'],
match_2mass['j_m']-match_galex['NUVmag']])
joindata = joindata.T
from sklearn import datasets
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
X = scale(joindata)
pca = PCA(n_components=4)
X_r = pca.fit(X).transform(X)
print(pca.components_)
print(pca.explained_variance_)
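# hedged extra: cumulative fraction of the variance captured by the PCs
# (explained_variance_ratio_ is a standard attribute of a fitted PCA)
print(pca.explained_variance_ratio_.cumsum())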
# plot PCA result
# Plot data using PC1 vs PC2
plt.scatter(X_r[:,0], X_r[:,1], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,0], X_r[i,1], marker=".", color="red")
# plot PCA result
# Plot data using PC1 vs PC2
plt.scatter(X_r[:,0], X_r[:,2], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,0], X_r[i,2], marker=".", color="red")
# plot PCA result
# Plot data using PC1 vs PC2
plt.scatter(X_r[:,0], X_r[:,3], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,0], X_r[i,3], marker=".", color="red")
# plot PCA result
# Plot data using PC1 vs PC2
plt.scatter(X_r[:,1], X_r[:,2], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,1], X_r[i,2], marker=".", color="red")
# plot PCA result
# Plot data using PC1 vs PC2
plt.scatter(X_r[:,1], X_r[:,3], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,1], X_r[i,3], marker=".", color="red")
# plot PCA result
# Plot data using PC1 vs PC2
plt.scatter(X_r[:,2], X_r[:,3], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,2], X_r[i,3], marker=".", color="red")
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler
X = scale(joindata)
db = DBSCAN(eps=1, min_samples=3).fit(X)
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
core_samples_mask[db.core_sample_indices_] = True
labels = db.labels_
# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
print('Estimated number of clusters: %d' % n_clusters_)
#print(labels)
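# hedged extra: DBSCAN marks outliers with the label -1, so counting those
# gives the number of noise points
n_noise_ = list(labels).count(-1)
print('Estimated number of noise points: %d' % n_noise_)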
# Black removed and is used for noise instead.
unique_labels = set(labels)
colors = [plt.cm.Spectral(each) for each in np.linspace(0, 1, len(unique_labels))]
for k, col in zip(unique_labels, colors):
if k == -1:
# Black used for noise.
col = [0, 0, 0, 1]
class_member_mask = (labels == k)
## J vs J-W1
xy = X[class_member_mask & core_samples_mask]
plt.plot(xy[:, 3], xy[:, 0], 'o', markerfacecolor=tuple(col), markeredgecolor='k', markersize=14)
xy = X[class_member_mask & ~core_samples_mask]
plt.plot(xy[:, 3], xy[:, 0], 'o', markerfacecolor=tuple(col), markeredgecolor='k', markersize=8)
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.plot(X[i,3], X[i,0], marker="X", markerfacecolor='red', markeredgecolor='none', markersize=8)
plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()
from sklearn.manifold import TSNE
X = scale(joindata)
X_r = TSNE(n_components=2).fit_transform(X)
plt.scatter(X_r[:,0], X_r[:,1], marker='o', color="blue")
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,0], X_r[i,1], marker='.', color="red")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Matching coordinates
Step2: Plot $W_1-J$ vs $W_1$
Step3: W1-J < -1.7 => galaxy
Step4: Collect relevant data
Step5: Analysis
Step6: DBSCAN
Step7: Plot $W_1 - J$ vs $J$
Step8: t-SNE
|
11,661
|
<ASSISTANT_TASK:>
Python Code:
from pymatgen import MPRester, Composition
from pymatgen.analysis.phase_diagram import PhaseDiagram
from pymatgen.entries.computed_entries import ComputedEntry
from pymatgen.apps.borg.hive import VaspToComputedEntryDrone
from pymatgen.entries.compatibility import MaterialsProjectCompatibility
from pymatgen.analysis.phase_diagram import ReactionDiagram, PDPlotter, PDEntry
%matplotlib inline
mp = MPRester()
compat = MaterialsProjectCompatibility()
chemsys = ["H", "P", "V","O", "C"]
all_entries = mp.get_entries_in_chemsys(chemsys)
CO_entries = [e for e in all_entries if e.composition.reduced_formula == "CO"]
CO2_entries = [e for e in all_entries if e.composition.reduced_formula == "CO2"]
H2O_entries = [e for e in all_entries if e.composition.reduced_formula == "H2O"]
VPO5_entries = [e for e in all_entries if e.composition.reduced_formula == "VPO5"]
non_solid = ["CO", "CO2", "H2O", "VPO5"]
entries = list(filter(lambda e: e.composition.reduced_formula not in non_solid, all_entries))
potcars = set()
for e in all_entries:
if len(e.composition) == 1 and e.composition.reduced_formula in ["C", "H2", "O2"]:
potcars.update(e.parameters["potcar_symbols"])
factor = 1000.0 / 6.0221409e23 / 1.60217662e-19  # kJ/mol -> eV per formula unit (Avogadro's number, elementary charge)
ec_form_energy = -682.8 * factor
ec = ComputedEntry(composition="C3H4O3", energy=0, parameters={"potcar_symbols": list(potcars)})
ec.data["oxide_type"] = "oxide"
# MaterialsProjectCompatibility
ec = compat.process_entry(ec)
pd = PhaseDiagram(all_entries)
ec.uncorrected_energy = ec_form_energy + sum([pd.el_refs[el].energy_per_atom * amt \
for el, amt in ec.composition.items()]) - ec.correction
vopo4 = []
vc = VaspToComputedEntryDrone()
for d in ["VOPO4/"]:
e = vc.assimilate(d)
e.data["oxide_type"] = "oxide"
e = compat.process_entry(e)
vopo4.append(e)
hxvopo4 = []
for d in ["HVOPO4/", "H2VOPO4/"]:
e = vc.assimilate(d)
e.data["oxide_type"] = "oxide"
e = compat.process_entry(e)
hxvopo4.append(e)
potcars = set()
for e in all_entries:
if len(e.composition) == 1 and e.composition.reduced_formula in ["C", "O2"]:
potcars.update(e.parameters["potcar_symbols"])
co_form_energy = -110.53 * factor
co = ComputedEntry(composition="CO", energy=0, parameters={"potcar_symbols": list(potcars)})
co.data["oxide_type"] = "oxide"
co = compat.process_entry(co)
pd = PhaseDiagram(all_entries)
co.uncorrected_energy = co_form_energy + sum([pd.el_refs[el].energy_per_atom * amt \
for el, amt in co.composition.items()]) - co.correction
potcars = set()
for e in all_entries:
if len(e.composition) == 1 and e.composition.reduced_formula in ["C", "O2"]:
potcars.update(e.parameters["potcar_symbols"])
co2_form_energy = -393.52 * factor
co2 = ComputedEntry(composition="CO2", energy=0, parameters={"potcar_symbols": list(potcars)})
co2.data["oxide_type"] = "oxide"
co2 = compat.process_entry(co2)
pd = PhaseDiagram(all_entries)
co2.uncorrected_energy = co2_form_energy + sum([pd.el_refs[el].energy_per_atom * amt
for el, amt in co2.composition.items()]) - co2.correction
potcars = set()
for e in all_entries:
if len(e.composition) == 1 and e.composition.reduced_formula in ["H2", "O2"]:
potcars.update(e.parameters["potcar_symbols"])
h2o_form_energy = -286.629 * factor
h2o = ComputedEntry(composition="H2O", energy=0, parameters={"potcar_symbols": list(potcars)})
h2o.data["oxide_type"] = "oxide"
h2o = compat.process_entry(h2o)
pd = PhaseDiagram(all_entries)
h2o.uncorrected_energy = h2o_form_energy + sum([pd.el_refs[el].energy_per_atom * amt for el, amt in h2o.composition.items()]) - h2o.correction
entry1 = vopo4[0]
entry2 = ec
useful_entries = entries + hxvopo4 + [h2o, co2, co]
from scipy import stats
import numpy as np
%matplotlib inline
import matplotlib as mpl
mpl.rcParams['axes.linewidth']=3
mpl.rcParams['lines.markeredgewidth']=2
mpl.rcParams['lines.linewidth']=3
mpl.rcParams['lines.markersize']=13
mpl.rcParams['xtick.major.width']=3
mpl.rcParams['xtick.major.size']=8
mpl.rcParams['xtick.minor.width']=3
mpl.rcParams['xtick.minor.size']=4
mpl.rcParams['ytick.major.width']=3
mpl.rcParams['ytick.major.size']=8
mpl.rcParams['ytick.minor.width']=3
mpl.rcParams['ytick.minor.size']=4
ra = ReactionDiagram(entry1=entry1, entry2=entry2, all_entries=useful_entries)
cpd = ra.get_compound_pd()
plotter = PDPlotter(cpd, show_unstable=False)
plotter.get_plot(label_stable=False, label_unstable=False)
for i, l in ra.labels.items():
print ("%s - %s" % (i, l))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get all H, P, V, O, C entries by MPRester
Step2: Remove CO, CO2, H2O, VPO5 entries from all_entries, use experimental data and our own calculations
Step3: Get POTCAR of C, H, O for EC to construct its ComputedEntry
Step4: EC solid phase
Step5: Use my own calculation entries
Step6: CO solid phase
Step7: CO2 gas phase
Step8: H2O liquid phase
|
11,662
|
<ASSISTANT_TASK:>
Python Code:
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
sns.set_style("white")
import util
df = util.load_burritos()
N = df.shape[0]
df.head()
print('Number of burritos:', df.shape[0])
print('Number of restaurants:', len(df.Location.unique()))
print('Number of reviewers:', len(df.Reviewer.unique()))
print('Number of reviews by Scott:', df.Reviewer.value_counts()['Scott'])
uniqlocidx = df.Location.drop_duplicates().index
print('Percentage of taco shops with free chips:', np.round(100 - 100*df.Chips[uniqlocidx].isnull().sum()/np.float(len(df.Location.unique())),1))
# Count of how many burritos each person has eaten
df['Reviewer'].value_counts()
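# a quick numeric summary of the measured columns plotted below
# (Hunger, Cost and Volume all appear later in this notebook)
df[['Hunger', 'Cost', 'Volume']].describe()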
# Number of each type of burrito
def burritotypes(x, types = {'California':'cali', 'Carnitas':'carnita', 'Carne asada':'carne asada',
'Chicken':'chicken', 'Surf & Turf':'surf.*turf', 'Adobada':'adobad', 'Al Pastor':'pastor'}):
import re
T = len(types)
Nmatches = {}
for b in x:
matched = False
for t in types.keys():
re4str = re.compile('.*'+types[t]+'.*', re.IGNORECASE)
if np.logical_and(re4str.match(b) is not None, matched is False):
try:
Nmatches[t] +=1
except KeyError:
Nmatches[t] = 1
matched = True
if matched is False:
try:
Nmatches['other'] +=1
except KeyError:
Nmatches['other'] = 1
return Nmatches
typecounts = burritotypes(df.Burrito)
plt.figure(figsize=(10,10))
ax = plt.axes([0.1, 0.1, 0.65, 0.65])
# The slices will be ordered and plotted counter-clockwise.
labels = typecounts.keys()
fracs = np.array([i for i in typecounts.values()])
explode=[.1]*len(typecounts)
patches, texts, autotexts = plt.pie(fracs, explode=explode, labels=labels,
autopct=lambda p: '{:.0f}'.format(p * np.sum(fracs) / 100), shadow=False, startangle=0)
# startangle=0 (the default) starts the first slice on the positive
# x-axis; slices are then laid out counter-clockwise.
plt.title('Types of burritos',size=30)
for t in texts:
t.set_size(30)
for t in autotexts:
t.set_size(30)
autotexts[0].set_color('w')
autotexts[6].set_color('w')
figname = 'burritotypes'
plt.savefig('/gh/fig/burrito/'+figname + '.png')
# Time series of ratings
import math
def dates2ts(dates):
from datetime import datetime
D = len(dates)
start = datetime.strptime('1/1/2016','%m/%d/%Y')
ts = np.zeros(D,dtype=int)
for d in range(D):
        burrdate = datetime.strptime(dates[d], '%m/%d/%Y')
diff = burrdate - start
ts[d] = diff.days
return ts
def cumburritos(days):
from statsmodels.distributions.empirical_distribution import ECDF
ecdf = ECDF(days)
t = np.arange(days[-1]+1)
return t, ecdf(t)*len(days)
def datelabels(startdate = '1/1/2016', M = 12):
from datetime import datetime
start = datetime.strptime(startdate,'%m/%d/%Y')
datestrs = []
ts = np.zeros(M)
for m in range(M):
datestrs.append(str(m+1) + '/1')
burrdate = datetime.strptime(datestrs[m]+'/2016','%m/%d/%Y')
diff = burrdate - start
ts[m] = diff.days
return datestrs, ts
burrdays = dates2ts(df.Date)
t, burrcdf = cumburritos(burrdays)
datestrs, datets = datelabels()
plt.figure(figsize=(5,5))
plt.plot(t,burrcdf,'k-')
plt.xlabel('Date (2016)',size=20)
plt.ylabel('# burritos rated',size=15)
plt.xticks(datets,datestrs,size=10, rotation='vertical')
plt.yticks(size=10)
plt.tight_layout()
figname = 'burritoprogress'
plt.savefig('/gh/fig/burrito/'+figname + '.png')
# Distribution of hunger level
plt.figure(figsize=(4,4))
n, _, _ = plt.hist(df.Hunger.dropna(),np.arange(-.25,5.5,.5),color='k')
plt.xlabel('Hunger level',size=20)
plt.xticks(np.arange(0,5.5,.5),size=10)
plt.xlim((-.25,5.25))
plt.ylabel('Count',size=20)
plt.yticks((0,int(math.ceil(np.max(n) / 5.)) * 5),size=10)
plt.tight_layout()
figname = 'hungerleveldist'
plt.savefig('/gh/fig/burrito/'+figname + '.png')
# Average burrito cost
plt.figure(figsize=(4,4))
n, _, _ = plt.hist(df.Cost.dropna(),np.arange(4,10.25,.5),color='k')
plt.xlabel('Cost ($)',size=20)
plt.xticks(np.arange(4,11,1),size=15)
plt.xlim((4,10))
plt.ylabel('Count',size=20)
plt.yticks((0,int(math.ceil(np.max(n) / 5.)) * 5),size=15)
plt.tight_layout()
figname = 'costdist'
plt.savefig('/gh/fig/burrito/'+figname + '.png')
print(np.nanmean(df.Cost))
# Volume dist
plt.figure(figsize=(5,5))
n, _, _ = plt.hist(df.Volume.dropna(),np.arange(0.5,1.3,.05),color='k')
plt.xlabel('Volume (L)',size=20)
plt.xticks(np.arange(0.5,1.3,.1),size=15)
plt.xlim((0.5,1.2))
plt.ylabel('Count',size=20)
plt.yticks((0,int(math.ceil(np.max(n) / 5.)) * 5),size=15)
plt.tight_layout()
figname = 'volumedist'
plt.savefig('/gh/fig/burrito/'+figname + '.png')
print(np.mean(df.Volume))
def metrichist(metricname):
plt.figure(figsize=(5,5))
n, _, _ = plt.hist(df[metricname].dropna(),np.arange(-.25,5.5,.5),color='k')
plt.xlabel(metricname + ' rating',size=20)
plt.xticks(np.arange(0,5.5,.5),size=15)
plt.xlim((-.25,5.25))
plt.ylabel('Count',size=20)
plt.yticks((0,int(math.ceil(np.max(n) / 5.)) * 5),size=15)
plt.tight_layout()
if metricname == 'Meat:filling':
metricname = 'meattofilling'
figname = metricname + 'dist'
plt.savefig('/gh/fig/burrito/'+figname + '.png')
m_Hist = ['Tortilla','Temp','Meat','Fillings','Meat:filling','Uniformity','Salsa','Synergy','Wrap','overall']
for m in m_Hist:
metrichist(m)
# Overall recommendations
plt.figure(figsize=(6,6))
ax = plt.axes([0.1, 0.1, 0.8, 0.8])
# The slices will be ordered and plotted counter-clockwise.
labels = ['Yes','No']
fracs = np.array([np.sum(df.Rec==labels[0]),np.sum(df.Rec==labels[1])])
explode=[.01]*len(labels)
patches, texts, autotexts = plt.pie(fracs, explode=explode, labels=labels,
autopct=lambda p: '{:.0f}'.format(p * np.sum(fracs) / 100), shadow=False, startangle=90)
# The default startangle is 0, which would start
# the first slice on the positive x-axis. With startangle=90,
# everything is rotated counter-clockwise by 90 degrees,
# so the plotting starts on the positive y-axis.
plt.title('Would you recommend this burrito?',size=30)
for t in texts:
t.set_size(20)
for t in autotexts:
t.set_size(30)
autotexts[0].set_color('w')
autotexts[1].set_color('w')
figname = 'recspie'
plt.savefig('/gh/fig/burrito/'+figname + '.png')
dfpca = df[['Volume','Tortilla','Temp','Meat','Fillings','Meat:filling','Uniformity','Salsa','Synergy','Wrap']]
dfpca = dfpca.fillna(dfpca.mean())
# Normalize
dfpca = (dfpca - dfpca.mean()) / dfpca.std()
dfpca
# Color: Taco Stand, Lucha, Los Primos
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(dfpca)
print(pca.components_)
print(pca.explained_variance_ratio_)
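# Project every burrito onto the two principal components: rows of
# pca.components_ are the PC directions, so this gives a (2, N) score matrix.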
dfpca_proj = np.dot(pca.components_,dfpca.T)
dfpca_proj[0][np.where(df.Location=='taco stand')]
plt.plot(dfpca_proj[0],dfpca_proj[1],'k.')
plt.figure(figsize=(8,8))
shops = ['taco stand','lucha libre north park','los primos mexican food']
shops_marker = ['*','^','s']
shops_ms = [20,12,12]
overallcutoffs = [-.1, 3, 4, 5.1]
overallcolors = ['r','k','g']
for o in range(len(overallcolors)):
notshops = np.logical_and(df.Location != shops[0],np.logical_and(df.Location != shops[1],df.Location != shops[2]))
orange = np.logical_and(df.overall>=overallcutoffs[o],df.overall<overallcutoffs[o+1])
notshops = np.where(np.logical_and(notshops,orange))
plt.plot(dfpca_proj[0][notshops],dfpca_proj[1][notshops],'.',color=overallcolors[o],alpha=.5,ms=20)
for s in range(len(shops)):
burridx = np.where(np.logical_and(df.Location==shops[s],np.logical_and(df.overall>=overallcutoffs[o],df.overall<overallcutoffs[o+1])))
plt.plot(dfpca_proj[0][burridx],dfpca_proj[1][burridx],
shops_marker[s],color=overallcolors[o],ms=shops_ms[s],label = shops[s])
plt.xlim((-8,4.5))
plt.ylim((-3,4))
plt.xlabel('PC 1',size=20)
plt.ylabel('PC 2',size=20)
plt.xticks([])
plt.yticks([])
plt.legend(loc='best')
shopsalpha = [.2,.2,.2]
shops = ['taco stand','lucha libre north park','los primos mexican food']
overall_marker = ['v','.','*']
overall_ms = [12,25,20]
overallcutoffs = [-.1, 3, 4, 5.1]
shopscolors = ['g','b','r']
plt.figure(figsize=(8,8))
for o in range(len(overallcolors)):
notshops = np.logical_and(df.Location != shops[0],np.logical_and(df.Location != shops[1],df.Location != shops[2]))
orange = np.logical_and(df.overall>=overallcutoffs[o],df.overall<overallcutoffs[o+1])
notshops = np.where(np.logical_and(notshops,orange))[0]
#plt.plot(df.Meat[notshops],df.Fillings[notshops],'.',color=overallcolors[o],alpha=.2,ms=20)
for s in range(len(shops)):
burridx = np.where(np.logical_and(df.Location==shops[s],np.logical_and(df.overall>=overallcutoffs[o],df.overall<overallcutoffs[o+1])))[0]
plt.plot(df.Meat[burridx],df.Salsa[burridx],
overall_marker[o],color=shopscolors[s],ms=overall_ms[o],alpha=shopsalpha[s],label=shops[s])
plt.xlim((0,5.5))
plt.ylim((0,5.5))
plt.xlabel('Meat flavor',size=20)
plt.ylabel('Salsa flavor',size=20)
plt.xticks(np.arange(1,6),size=20)
plt.yticks(np.arange(1,6),size=20)
plt.legend(loc='best',fontsize=12)
plt.savefig('/gh/fig/burrito/superscatter.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data
Step2: Brief metadata
Step3: What types of burritos have been rated?
Step4: Progress in number of burritos rated
Step5: Burrito dimension distributions
Step6: Fraction of burritos recommended
Step7: PCA
|
11,663
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# dask and distributed are extra installs
from dask.distributed import Client, LocalCluster
import matplotlib.pyplot as plt
import mdtraj as md
traj = md.load("5550217/kras.xtc", top="5550217/kras.pdb")
topology = traj.topology
from contact_map import DaskContactFrequency, DaskContactTrajectory
client = Client()
client
%%time
freq = DaskContactFrequency(
client=client,
filename="5550217/kras.xtc",
top="5550217/kras.pdb"
)
# top must be given as keyword (passed along to mdtraj.load)
# did it add up to give us the right number of frames?
freq.n_frames
# do we get a familiar-looking residue map?
fig, ax = freq.residue_contacts.plot()
traj_2 = md.load("data/gsk3b_example.h5")
topology_2 = traj_2.topology
yyg = topology_2.select('resname YYG and element != "H"')
protein = topology_2.select('protein and element != "H"')
%%time
dctraj = DaskContactTrajectory(
client=client,
query=yyg,
haystack=protein,
filename="data/gsk3b_example.h5",
)
# did it add up to give us the right number of frames?
len(dctraj)
# do we get a familiar-looking residue map for rolling averages?
rolling_frequencies = dctraj.rolling_frequency(window_size=30, step=14)
rolling_frequencies
fig, axs = plt.subplots(3, 2, figsize=(12, 10))
for ax, freq in zip(axs.flatten(), rolling_frequencies):
freq.residue_contacts.plot_axes(ax=ax)
ax.set_xlim(*freq.query_residue_range);
from contact_map import ContactFrequency
# use all the atoms except atom 0
used_atoms = list(range(1, topology.n_atoms))
%%time
# with atom slicing
frame_contacts = ContactFrequency(traj[0], query=used_atoms,
haystack=used_atoms)
# disable atom slicing
ContactFrequency._class_use_atom_slice = False
%%time
# without atom slicing
frame_contacts = ContactFrequency(traj[0], query=used_atoms,
haystack=used_atoms)
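# Restore the class-level default so that later ContactFrequency instances
# use atom slicing again (assumption: slicing is enabled by default).
ContactFrequency._class_use_atom_slice = True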
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Much of the core computational effort in Contact Map Explorer is performed by MDTraj, which uses OpenMP during the nearest-neighbors calculation. This already provides excellent performance for a bottleneck in the contact map creation process. However, Contact Map Explorer also has a few other tricks to further enhance performance.
Step2: The same can be done for a DaskContactTrajectory. Here, we use the data from and compare to contact_trajectory.ipynb
Step3: "Atom slicing"
|
11,664
|
<ASSISTANT_TASK:>
Python Code:
#Step 1 - Check spark version
#Type:
#sc.version
#Step 2 - Create RDD of Numbers 1-10
#Type:
#x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
#x_nbr_rdd = sc.parallelize(x)
#Step 2 - Extract first line
#Type:
#x_nbr_rdd.first()
#Step 2 - Extract first 5 lines
#Type:
#x_nbr_rdd.take(5)
#Step 2 - Create RDD String, Extract first line
#Type:
#y = ["Hello Spark!"]
#y_str_rdd = sc.parallelize(y)
#y_str_rdd.first()
#Step 3 - Create RDD String, Extract first line
#type:
#z = ["Hello World!, Hello Universe!, I love Spark"]
#z_str_rdd = sc.parallelize(z)
#z_str_rdd.first()
#Step 3 - Create RDD with object for each word, Extract first 7 words
#type:
#z_str2_rdd = z_str_rdd.flatMap(lambda line: line.split(" "))
#z_str2_rdd.take(7)
#Step 3 - Count of "Hello" words
#type:
#z_str3_rdd = z_str2_rdd.filter(lambda line: "Hello" in line)
#print "The count of words 'Hello' in: " + repr(z_str_rdd.first())
#print "Is: " + repr(z_str3_rdd.count())
#Step 3 - Count of "Spark" words
#type
#z_str4_rdd = z_str2_rdd.filter(lambda line: "Spark" in line)
#print "The count of words 'Spark' in: " + repr(z_str_rdd.first())
#print "Is: " + repr(z_str4_rdd.count())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step 1 - Working with Spark Context
Step1: Step 2 - Working with Resilient Distributed Datasets
Step2: Step 3 - Working with Strings
|
11,665
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
import fitsio
from astropy.table import Table
from corner import corner
plt.style.use('seaborn-talk')
%matplotlib inline
basicdir = os.path.join(os.getenv('IM_DATA_DIR'), 'upenn-photdec', 'basic-catalog', 'v2')
adddir = os.path.join(os.getenv('IM_DATA_DIR'), 'upenn-photdec', 'additional-catalogs')
castfile = os.path.join(basicdir, 'UPenn_PhotDec_CAST.fits')
castinfo = fitsio.FITS(castfile)
castinfo[1]
allcast = castinfo[1].read()
thisband = 'gband'
def photdec_select(finalflag, bit):
    """Select subsets of the catalog using the finalflag bitmask.

    1 - good bulge-only galaxy
    4 - good disk-only galaxy
    10 - good two-component fit (logical_or of flags 11, 12, and 13)
    20 - bad total magnitude and size
    """
return finalflag & np.power(2, bit) != 0
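# Example: photdec_select(finalflag, 10) is True for rows whose
# two-component-fit bit (2**10) is set in finalflag.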
def select_meert(modelcat, onecomp=False, twocomp=False):
    """Select various (good) subsets of galaxies.

    Args:
        modelcat: 'UPenn_PhotDec_Models_[g,r,i]band.fits' catalog.
        onecomp (bool): galaxies fitted with single-Sersic model.
        twocomp (bool): galaxies fitted with Sersic-exponential model.

    Notes:
        * Flag 10 is a logical_or of 11, 12, 13.
        * Flag 1, 4, and 10 are mutually exclusive.
        * If Flag 1 or 4 are set then n_disk,r_disk are -999.
    """
finalflag = modelcat['finalflag']
smalln = modelcat['n_bulge'] < 8
goodr = modelcat['r_bulge'] > 0 # Moustakas hack
two = photdec_select(finalflag, 10)
two = np.logical_and( two, smalln )
two = np.logical_and( two, goodr )
if twocomp:
return two
one = np.logical_or( photdec_select(finalflag, 1), photdec_select(finalflag, 4) )
one = np.logical_and( one, smalln )
if onecomp:
return one
both = np.logical_or( one, two )
return both
measfile = os.path.join(basicdir, 'UPenn_PhotDec_nonParam_{}.fits'.format(thisband))
measinfo = fitsio.FITS(measfile)
fitfile = os.path.join(basicdir, 'UPenn_PhotDec_Models_{}.fits'.format(thisband))
fitinfo = fitsio.FITS(fitfile)
print(measinfo[1], fitinfo[1])
_fit = fitinfo[1].read(columns=['finalflag', 'n_bulge', 'r_bulge'])
good = select_meert(_fit)
goodindx = np.where(good)[0]
nobj = len(goodindx)
print('Selected {}/{} good targets.'.format(nobj, len(_fit)))
fit, meas = [], []
fitfile = os.path.join(basicdir, 'UPenn_PhotDec_Models_{}.fits'.format(thisband))
measfile = os.path.join(basicdir, 'UPenn_PhotDec_NonParam_{}.fits'.format(thisband))
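# rows=goodindx reads only the selected galaxies from disk instead of
# loading the full catalogs.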
gfit = fitsio.read(fitfile, ext=1, rows=goodindx)
gmeas = fitsio.read(measfile, ext=1, rows=goodindx)
cast = allcast[goodindx]
one = select_meert(gfit, onecomp=True)
two = select_meert(gfit, twocomp=True)
print('g-band range = {:.3f} - {:.3f}'.format(gfit['m_tot'].min(), gfit['m_tot'].max()))
print('Redshift range = {:.4f} - {:.4f}'.format(cast['z'].min(), cast['z'].max()))
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 5))
_ = ax1.hist(cast['z'], bins=100, range=(0, 0.4), alpha=0.5, label='All Galaxies')
_ = ax1.hist(cast['z'][two], bins=100, range=(0, 0.4), alpha=0.5, label='Two-Component Fits')
ax1.legend(loc='upper right')
ax1.set_xlabel('Redshift')
ax1.set_ylabel('Number of Galaxies')
hb = ax2.hexbin(cast['ra'], cast['dec'], C=cast['z'], vmin=0, vmax=0.3,
cmap=plt.cm.get_cmap('RdYlBu'))
cb = plt.colorbar(hb)
cb.set_label('Redshift')
ax2.set_xlabel('RA')
ax2.set_ylabel('Dec')
labels = [r'$g_{tot}$', r'B/T ($g$-band)', r'Bulge $n$ ($g$-band)',
r'Bulge $r_{50, g}$', r'Disk $r_{50, g}$']
data = np.array([
gfit['m_tot'][two],
gfit['BT'][two],
gfit['n_bulge'][two],
np.log10(gfit['r_bulge'][two]),
np.log10(gfit['r_disk'][two])
]).T
data.shape
_ = corner(data, quantiles=[0.25, 0.50, 0.75], labels=labels,
range=np.repeat(0.9999, len(labels)), verbose=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read the parent CAST catalog.
Step4: Read the g-band model fitting results and select a "good" sample.
Step5: Identify the subset of galaxies with good 1- and 2-component fits.
Step6: Generate some plots.
|
11,666
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
import pyqg
from pyqg import diagnostic_tools as tools
year = 24*60*60*360.
m = pyqg.QGModel(tmax=10*year, twrite=10000, tavestart=5*year)
m.run()
m_ds = m.to_dataset().isel(time=-1)
m_ds
m_ds['q_upper'] = m_ds.q.isel(lev=0) + m_ds.Qy.isel(lev=0)*m_ds.y
m_ds['q_upper'].attrs = {'long_name': 'upper layer PV anomaly'}
m_ds.q_upper.plot.contourf(levels=18, cmap='RdBu_r');
m.describe_diagnostics()
kr, kespec_upper = tools.calc_ispec(m, m_ds.KEspec.isel(lev=0).data)
_, kespec_lower = tools.calc_ispec(m, m_ds.KEspec.isel(lev=1).data)
plt.loglog(kr, kespec_upper, 'b.-', label='upper layer')
plt.loglog(kr, kespec_lower, 'g.-', label='lower layer')
plt.legend(loc='lower left')
plt.ylim([1e-14,1e-8])
plt.xlabel(r'k (m$^{-1}$)'); plt.grid()
plt.title('Kinetic Energy Spectrum');
kr, APEgenspec = tools.calc_ispec(m, m_ds.APEgenspec.data)
_, APEflux = tools.calc_ispec(m, m_ds.APEflux.data)
_, KEflux = tools.calc_ispec(m, m_ds.KEflux.data)
_, KEfrictionspec = tools.calc_ispec(m, m_ds.KEfrictionspec.data)
_, Dissspec = tools.calc_ispec(m, m_ds.Dissspec.data)
ebud = [ APEgenspec,
APEflux,
KEflux,
KEfrictionspec,
Dissspec]
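# Close the budget with a residual term; if the diagnostics are consistent,
# the residual should be small relative to the leading terms.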
ebud.append(-np.vstack(ebud).sum(axis=0))
ebud_labels = ['APE gen','APE flux','KE flux','Bottom drag','Diss.','Resid.']
[plt.semilogx(kr, term) for term in ebud]
plt.legend(ebud_labels, loc='upper right')
plt.xlabel(r'k (m$^{-1}$)'); plt.grid()
plt.title('Spectral Energy Transfer');
_, ENSflux = tools.calc_ispec(m, m_ds.ENSflux.data.squeeze())
_, ENSgenspec = tools.calc_ispec(m, m_ds.ENSgenspec.data.squeeze())
_, ENSfrictionspec = tools.calc_ispec(m, m_ds.ENSfrictionspec.data.squeeze())
_, ENSDissspec = tools.calc_ispec(m, m_ds.ENSDissspec.data.squeeze())
ebud = [ ENSgenspec,
ENSflux,
ENSDissspec,
ENSfrictionspec]
ebud.append(-np.vstack(ebud).sum(axis=0))
ebud_labels = ['ENS gen','ENS flux div.','Dissipation','Friction','Resid.']
[plt.semilogx(kr, term) for term in ebud]
plt.legend(ebud_labels, loc='best')
plt.xlabel(r'k (m$^{-1}$)'); plt.grid()
plt.title('Spectral Enstrophy Transfer');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initialize and Run the Model
Step2: Convert Model Outpt to an xarray Dataset
Step3: Visualize Output
Step4: Plot Diagnostics
Step5: To look at the wavenumber energy spectrum, we plot the KEspec diagnostic.
Step6: We can also plot the spectral fluxes of energy and enstrophy.
|
11,667
|
<ASSISTANT_TASK:>
Python Code:
# Importing tensorflow lib
import tensorflow as tf
tf.__version__  # check which TensorFlow version the notebook is running
# Reading the dataset from Yann LeCun's Website: http://yann.lecun.com/exdb/mnist/
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
# Implement Softmax Regression
y = tf.nn.softmax(tf.matmul(x, W) + b)
# Implementing Cross entropy to calculate the loss/error
y_ = tf.placeholder(tf.float32, [None, 10]) # a placeholder to input the correct answers
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) # Learning rate = 0.5
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
for _ in range(1000):
batch_xs, batch_ys = mnist.train.next_batch(100)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
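# Evaluate: argmax over the 10 class scores gives the predicted digit;
# casting the boolean matches to float and averaging yields the accuracy.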
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
mnist.test.images
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: x is a placeholder, a value that we'll input when we ask TensorFlow to run a computation. We want to be able to input any number of MNIST images, each flattened into a 784-dimensional vector. We represent this as a 2-D tensor of floating-point numbers, with a shape [None, 784]. (Here None means that a dimension can be of any length)
Step2: Execution of the Model in Session
Step3: Evaluating the model
|
11,668
|
<ASSISTANT_TASK:>
Python Code:
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard
from tensorflow.keras.layers import Dense, Flatten, Softmax
print(tf.__version__)
!python3 -m pip freeze | grep 'tensorflow==2\|tensorflow-gpu==2' || \
python3 -m pip install tensorflow==2
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
HEIGHT, WIDTH = x_train[0].shape
NCLASSES = tf.size(tf.unique(y_train).y)
print("Image height x width is", HEIGHT, "x", WIDTH)
tf.print("There are", NCLASSES, "classes")
IMGNO = 12
# Uncomment to see raw numerical values.
# print(x_test[IMGNO])
plt.imshow(x_test[IMGNO].reshape(HEIGHT, WIDTH));
print("The label for image number", IMGNO, "is", y_test[IMGNO])
def linear_model():
# TODO: Build a sequential model and compile it.
return model
BUFFER_SIZE = 5000
BATCH_SIZE = 100
def scale(image, label):
    # TODO
    pass
def load_dataset(training=True):
    """Loads MNIST dataset into a tf.data.Dataset."""
(x_train, y_train), (x_test, y_test) = mnist
x = x_train if training else x_test
y = y_train if training else y_test
# TODO: a) one-hot encode labels, apply `scale` function, and create dataset.
# One-hot encode the classes
if training:
        # TODO
        pass
return dataset
def create_shape_test(training):
dataset = load_dataset(training=training)
data_iter = dataset.__iter__()
(images, labels) = data_iter.get_next()
expected_image_shape = (BATCH_SIZE, HEIGHT, WIDTH)
expected_label_ndim = 2
assert(images.shape == expected_image_shape)
assert(labels.numpy().ndim == expected_label_ndim)
test_name = 'training' if training else 'eval'
print("Test for", test_name, "passed!")
create_shape_test(True)
create_shape_test(False)
NUM_EPOCHS = 10
STEPS_PER_EPOCH = 100
model = linear_model()
train_data = load_dataset()
validation_data = load_dataset(training=False)
OUTDIR = "mnist_linear/"
checkpoint_callback = ModelCheckpoint(
OUTDIR, save_weights_only=True, verbose=1)
tensorboard_callback = TensorBoard(log_dir=OUTDIR)
history = model.fit(
# TODO: specify training/eval data, # epochs, steps per epoch.
verbose=2,
callbacks=[checkpoint_callback, tensorboard_callback]
)
BENCHMARK_ERROR = .12
BENCHMARK_ACCURACY = 1 - BENCHMARK_ERROR
accuracy = history.history['accuracy']
val_accuracy = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
assert(accuracy[-1] > BENCHMARK_ACCURACY)
assert(val_accuracy[-1] > BENCHMARK_ACCURACY)
print("Test to beat benchmark accuracy passed!")
assert(accuracy[0] < accuracy[1])
assert(accuracy[1] < accuracy[-1])
assert(val_accuracy[0] < val_accuracy[1])
assert(val_accuracy[1] < val_accuracy[-1])
print("Test model accuracy is improving passed!")
assert(loss[0] > loss[1])
assert(loss[1] > loss[-1])
assert(val_loss[0] > val_loss[1])
assert(val_loss[1] > val_loss[-1])
print("Test loss is decreasing passed!")
image_numbers = range(0, 10, 1) # Change me, please.
def load_prediction_dataset():
dataset = (x_test[image_numbers], y_test[image_numbers])
dataset = tf.data.Dataset.from_tensor_slices(dataset)
dataset = dataset.map(scale).batch(len(image_numbers))
return dataset
predicted_results = model.predict(load_prediction_dataset())
for index, prediction in enumerate(predicted_results):
predicted_value = np.argmax(prediction)
actual_value = y_test[image_numbers[index]]
if actual_value != predicted_value:
print("image number: " + str(image_numbers[index]))
print("the prediction was " + str(predicted_value))
print("the actual label is " + str(actual_value))
print("")
bad_image_number = 8
plt.imshow(x_test[bad_image_number].reshape(HEIGHT, WIDTH));
DIGIT = 0 # Change me to be an integer from 0 to 9.
LAYER = 1 # Layer 0 flattens image, so no weights
WEIGHT_TYPE = 0 # 0 for variable weights, 1 for biases
dense_layer_weights = model.layers[LAYER].get_weights()
digit_weights = dense_layer_weights[WEIGHT_TYPE][:, DIGIT]
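# Reshaping the 784 weights back to 28x28 shows which pixels push the linear
# model toward this digit (bright pixels contribute positive evidence).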
plt.imshow(digit_weights.reshape((HEIGHT, WIDTH)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exploring the data
Step2: Each image is 28 x 28 pixels and represents a digit from 0 to 9. These images are black and white, so each pixel is a value from 0 (white) to 255 (black). Raw numbers can be hard to interpret sometimes, so we can plot the values to see the handwritten digit as an image.
Step3: Define the model
Step5: Write Input Functions
Step6: Time to train the model! The original MNIST linear classifier had an error rate of 12%. Let's use that to sanity check that our model is learning.
Step7: Evaluating Predictions
Step8: It's understandable why the poor computer would have some trouble. Some of these images are difficult for even humans to read. In fact, we can see what the computer thinks each digit looks like.
|
11,669
|
<ASSISTANT_TASK:>
Python Code:
def display_board(board):
for row in board:
print(row)
# Running the tests...
test()
# Note if you receive an error message saying test_board not found
# try hitting the run button on the test_board cell and try again.
def display_board(board):
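    # *board unpacks the rows as separate arguments to print(), and
    # sep="\n" places each row on its own line.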
print(*board, sep="\n")
# Running the tests...
test()
# Note if you receive an error message saying test_board not found
# try hitting the run button on the test_board cell and try again.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Alternate Solution
|
11,670
|
<ASSISTANT_TASK:>
Python Code:
target = pd.read_csv('../data/train_target.csv')
target.describe()
target = target / 1000
sns.distplot(target);
plt.title('SalePrice')
import scipy as sp
sp.stats.skew(target)
sp.stats.skewtest(target)
logtarget = np.log1p(target)
print('skewness of logtarget = ', sp.stats.skew(logtarget)[0])
print('skewness test of logtarget = ', sp.stats.skewtest(logtarget))
sns.distplot(logtarget)
plt.title(r'log(1 + SalePrice)')
def read():
    """Read training and test data and return a dataframe with ['Dataset','Id'] multi-index."""
raw_train = pd.read_csv('../data/train_prepared_light.csv')
raw_test = pd.read_csv('../data/test_prepared_light.csv')
df = pd.concat([raw_train, raw_test], keys=['train', 'test'])
df.index.names = 'Dataset', 'Id'
return df
df = read()
df.shape
df.head()
df.tail()
pp = samlib.Pipeline(df.copy())
assert pp == df # the pipeline output equals df
df.columns, len(df.columns)
df.dtypes.value_counts()
is_categorical = (df.dtypes == object)
is_numerical = ~is_categorical
dfnum = df.loc[:, is_numerical].copy()
dfnum.columns, len(dfnum.columns)
dfnum.describe()
def select_numerical_features(df):
return df.loc[:, df.dtypes != object]
pp.append(select_numerical_features)
# Check the pipeline
pp == dfnum
cols_with_nulls = dfnum.columns[dfnum.isnull().sum() > 0]
cols_with_nulls
dfnum[cols_with_nulls].isnull().sum().sort_values(ascending=False)
# We may want to refine this in the future. Perhaps build a model to predict the missing GarageCars from the other features?
median_list = 'LotFrontage', 'BsmtFullBath','BsmtHalfBath', 'GarageCars', 'GarageArea', 'MasVnrArea', 'BsmtFinSF1', 'BsmtFinSF2', 'TotalBsmtSF', 'BsmtUnfSF'
zero_list = []
def fillnans(dfnum):
return dfnum.pipe(samlib.fillna, 'median', median_list)\
.pipe(samlib.fillna, lambda df: 0, zero_list)\
.assign(GarageYrBlt=dfnum.GarageYrBlt.fillna(
dfnum.YearBuilt[dfnum.GarageYrBlt.isnull()])) # fill with year garage was built
dfnum = fillnans(dfnum)
# Check that we got rid of the nulls
assert not samlib.has_nulls(dfnum)
pp.append(fillnans)
# Check the pipeline
pp == dfnum
def order_columns(df):
return df.reindex_axis(df.columns.sort_values(), 1)
pp.append(order_columns)
pp().head()
dfnum = pp()
dfnum.head()
dfnum.shape
samlib.featureplot(dfnum, ncols=6, nrows=6, figsize=(12, 4))
fig, ax = plt.subplots(1,1, figsize=(4, 4))
sns.distplot(dfnum.ScreenPorch, ax=ax)
ax.set_title('Distribution of ScreenPorch')
def test_nearly_constant(series):
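    """Flag a feature as nearly constant when all non-modal values together
    amount to less than 25% of the modal count."""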
counts = series.value_counts()
max_val_count = max(counts)
other_val_count = counts.drop(counts.argmax()).sum()
return other_val_count / max_val_count < 0.25
is_nearly_constant = dfnum.apply(test_nearly_constant)
is_nearly_constant.value_counts()
dropme = dfnum.columns[is_nearly_constant]
dropme
def drop_constant_features(df):
return df.drop(df.columns[df.apply(test_nearly_constant)], axis=1)
pp.append(drop_constant_features)
pp == dfnum.drop(dropme, axis=1)
dfnum = dfnum.drop(dropme, axis=1)
fig, axes = plt.subplots(1,2, figsize=(8, 4))
sns.distplot(dfnum['LotArea'], ax=axes[0])
sns.distplot(np.log1p(dfnum['LotArea']), ax=axes[1])
def skewtest(train, sort=True, ascending=True):
    """Return dataframe of zfactor and pvalue for skew test."""
test = sp.stats.skewtest(train)
zfactor = test[0]
pvalue = test[1]
df = pd.DataFrame(dict(zfactor=zfactor, pvalue=pvalue), index=train.columns)
if sort:
return df.sort_values(by='zfactor', ascending=ascending)
else:
return df
skewtest(dfnum).head()
def is_skewed(train, min_zfactor=10, plot=False):
    """Return series of booleans indicating whether a column is skewed or not."""
sk = skewtest(train)
if plot:
plt.figure(1)
plt.title('Z-factor distribution from skewtest')
plt.xlabel('Z-factor')
sns.distplot(sk.zfactor)
plt.figure(2)
sk.zfactor.plot(kind='barh')
plt.title('Z-factor for skewtest')
return sk.zfactor > min_zfactor
is_skewed(dfnum, min_zfactor=10, plot=True)
def transform_skewed_colums(dfnum):
    """Apply log1p to the columns of dfnum flagged by is_skewed(),
    returning a transformed copy."""
dfnum2 = dfnum.copy()
skewed_colz = is_skewed(dfnum)
dfnum2.loc[:, skewed_colz] = dfnum2.loc[:, skewed_colz].apply(np.log1p)
return dfnum2
pp.append(transform_skewed_colums)
# the transformed dataset has fewer columns and we only want those
dfnum2 = pp()
dfnum2.columns
is_skewed(dfnum2)
sorted(sp.stats.skewtest(dfnum2)[0])
zfactors2 = sp.stats.skewtest(dfnum2)[0]
pd.Series(data=zfactors2, index=dfnum2.columns)[is_skewed(dfnum)].sort_values().plot(kind='barh')
skewed = is_skewed(dfnum)
skewed.value_counts()
dfnum.shape
samlib.featureplot(dfnum2.loc[:, skewed], nrows=3, ncols=6, figsize=(10,3))
samlib.featureplot(dfnum2.loc[:, ~skewed], nrows=2, ncols=5, figsize=(10, 3))
dfnum2.to_csv('transformed_dataset_dfnum2.csv', index=True)
def correlation(train, target_t):
corr = pd.DataFrame(data=train.apply(lambda feature: sp.stats.pearsonr(feature, target_t['SalePrice'])),
columns=['pearsonr'])
corr = corr.assign(correlation=corr.applymap(lambda x: x[0]),
pvalue=corr.applymap(lambda x: x[1]))
corr = corr.drop('pearsonr', axis=1)
return corr.sort_values('pvalue', ascending=False)['correlation']
correlation(dfnum2.loc['train', :], logtarget).plot(kind='barh')
def sort_columns_by_correlation(dfnum2, target_t=logtarget):
corr = correlation(dfnum2.loc['train',:], target_t)
return dfnum2.reindex_axis(corr[::-1].index, axis=1)
#sort_columns_by_correlation(dfnum2)
pp.append(sort_columns_by_correlation)
pp().head()
pp().to_csv('transformed_numerical_dataset.csv', index=True)
train = pp().loc['train'].assign(target=logtarget)
samlib.featureplot2(train, ncols=4, size=3, aspect=1.0, plotfunc=sns.regplot, y="target", data=train)
#(train.iloc[:, :-1], ncols=4, nrows=7, plotfunc=scatter, figsize=(12,3))
cols_with_zeros = ['OpenPorchSF', 'MasVnrArea', 'TotalBsmtSF', 'WoodDeckSF', 'BsmtUnfSF', 'BsmtFinSF1', '2ndFlrSF']
not_oktrain = train.loc[:, cols_with_zeros + ["target"]]
samlib.featureplot2(not_oktrain, ncols=4, size=3, aspect=1.0,
plotfunc=sns.regplot, y="target", data=not_oktrain)
notok = not_oktrain[not_oktrain != 0].drop('target', axis=1)
#correlation(notok, logtarget)
notok.head()
def correlation2(train, target=logtarget['SalePrice'], ignorena=True):
def corr(series):
        if ignorena:
            mask = ~series.isnull()
        else:
            mask = pd.Series(True, index=series.index)  # keep all rows
        return sp.stats.pearsonr(series[mask], target[mask])
df = pd.DataFrame(data=train.apply(corr), columns=['pearsonr'])
return df.assign(pearson=df.applymap(lambda x: x[0]), pvalue=df.applymap(lambda x: x[1])).drop('pearsonr', axis=1)
notok_corrs = correlation2(notok).sort_values('pearson')
corrs_with_zeros = correlation(not_oktrain, logtarget).reindex(notok_corrs.index)
corrs_without_zeros = notok_corrs['pearson']
pd.DataFrame(dict(with_zeros=corrs_with_zeros, no_zeros=corrs_without_zeros)).plot(kind='barh')
plt.title('Effect of removing zeros on correlation')
not_oktrain2 = not_oktrain.reindex_axis(notok_corrs.index[::-1], axis=1);
def regplot_dropzeros(data=not_oktrain, drop_zeros=False, **kwargs):
col = data.columns[0]
if drop_zeros:
mask = data[col] != 0
xt = data[mask].squeeze()
yt = logtarget[mask].squeeze()
else:
xt = data.squeeze()
yt = logtarget.squeeze()
sns.regplot(xt, yt, **kwargs)
samlib.featureplot(not_oktrain2, ncols=4, nrows=2, figsize=(12, 3), plotfunc=regplot_dropzeros, drop_zeros=False)
samlib.featureplot(not_oktrain2, ncols=4, nrows=2, figsize=(12, 3), plotfunc=regplot_dropzeros, drop_zeros=True)
import statsmodels.api as sm
from statsmodels.imputation import mice
df = pp()
# Replace zeros by NaNs
df[df.loc[:, cols_with_zeros] == 0] = np.nan
df = df.rename_axis({'1stFlrSF':'FrstFlrSF', '2ndFlrSF':'SndFlrSF'}, axis=1)
df.loc['train','SalePrice'] = logtarget.values
df.head()
samlib.has_nulls(df)
imp = mice.MICEData(df)
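# MICEData imputes each incomplete column from the others via chained
# equations; update_all() below runs one full cycle over all such columns.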
imp.update_all()
imp.data.head()
cols = ['OpenPorchSF', 'MasVnrArea', 'TotalBsmtSF', 'WoodDeckSF', 'BsmtUnfSF', 'BsmtFinSF1', 'SndFlrSF','SalePrice']
imputed_notok = imp.data.loc[:, cols]
imputed_notok.columns
samlib.featureplot2(imputed_notok, ncols=4, size=3, aspect=1.0,
plotfunc=sns.regplot, y="SalePrice", data=imputed_notok)
imp.data.shape
df.shape
imp.data.index=df.index
imp.data.head()
imp.data.reindex_like(df).to_csv('transformed_numerical_dataset_imputed.csv', index=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The sale price is in the hundreds of thousands, so let's divide the price by 1000 to get more manageable numbers.
Step2: The distribution is skewed (as demonstrated by the large z-score (and small pvalue) of the skewtest). It is right skewed (the skew is positive). Skewed distributions are not ideal for linear models, which often assume a normal distribution. One way to correct for right-skewness is to take the log [1,2,3]
Step4: Merge the training and test datasets for data preparation
Step5: Initialize pipeline with raw data. We can always get the data and apply all the transformations in the pipeline by calling pp().
Step6: Select Numerical features
Step7: We've got 3 data types
Step8: Split the data between categorical and numerical features
Step9: We've got 36 numerical features. We can use the describe method to get some statistics
Step10: But that's a lot of numbers to digest. Better get started plotting!
Step11: Deal with NaN values
Step12: Based on the description, the null values for the MasVnrArea should be 0 (no massonry veneer type)
Step13: Order columns in alphabetical order
Step14: Plot violinplots for each feature
Step15: Many of the features are higly skewed and some have very long tails. Some have discrete values (YrSold, Fireplaces).
Step16: Drop nearly constant features
Step17: Log transform the other features if they have a high skewness
Step20: Use dataframe & series whenever possible for maximum flexibility (see below)
Step22: Let's apply a log1p transform to all these and plot the distributions again
Step23: Now our originally skewed features look more symmetric.
Step24: Save transformed numerical data
Step25: Correlations
Step26: Sort columns in dfnum2 by correlation.
Step27: Scatter plots
Step28: Some features have some sort of bi-modal distribution with a lots of 0 values.
Step29: Dealing with the zeros
Step30: With the zeros
Step31: Without the zeros
Step32: Data imputation
|
11,671
|
<ASSISTANT_TASK:>
Python Code:
! pip uninstall -y tensorflow
! pip install -U tf-nightly
import tensorflow as tf
tf.enable_eager_execution()
! git clone --depth 1 https://github.com/tensorflow/models
import sys
import os
if sys.version_info.major >= 3:
import pathlib
else:
import pathlib2 as pathlib
# Add `models` to the python path.
models_path = os.path.join(os.getcwd(), "models")
sys.path.append(models_path)
saved_models_root = "/tmp/mnist_saved_model"
# The above path addition is not visible to subprocesses, add the path for the subprocess as well.
# Note: channels_last is required here or the conversion may fail.
!PYTHONPATH={models_path} python models/official/mnist/mnist.py --train_epochs=1 --export_dir {saved_models_root} --data_format=channels_last
saved_model_dir = str(sorted(pathlib.Path(saved_models_root).glob("*"))[-1])
saved_model_dir
import tensorflow as tf
tf.enable_eager_execution()
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
# Note: If you don't have a recent tf-nightly installed, the
# "post_training_quantize" line will have no effect.
tf.logging.set_verbosity(tf.logging.INFO)
converter.post_training_quantize = True
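# With this flag set, convert() emits a model whose weights are stored in
# 8 bits (weight-only post-training quantization).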
tflite_quant_model = converter.convert()
tflite_model_quant_file = tflite_models_dir/"mnist_model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_quant_model)
!ls -lh {tflite_models_dir}
import numpy as np
mnist_train, mnist_test = tf.keras.datasets.mnist.load_data()
images, labels = tf.to_float(mnist_test[0])/255.0, mnist_test[1]
# Note: If you change the batch size, then use
# `tf.lite.Interpreter.resize_tensor_input` to also change it for
# the interpreter.
mnist_ds = tf.data.Dataset.from_tensor_slices((images, labels)).batch(1)
interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
tf.logging.set_verbosity(tf.logging.DEBUG)
interpreter_quant = tf.lite.Interpreter(model_path=str(tflite_model_quant_file))
interpreter_quant.allocate_tensors()
input_index = interpreter_quant.get_input_details()[0]["index"]
output_index = interpreter_quant.get_output_details()[0]["index"]
for img, label in mnist_ds.take(1):
break
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
import matplotlib.pylab as plt
plt.imshow(img[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(label[0].numpy()),
predict=str(predictions[0,0])))
plt.grid(False)
def eval_model(interpreter, mnist_ds):
total_seen = 0
num_correct = 0
for img, label in mnist_ds:
total_seen += 1
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
if predictions == label.numpy():
num_correct += 1
if total_seen % 500 == 0:
print("Accuracy after %i images: %f" %
(total_seen, float(num_correct) / float(total_seen)))
return float(num_correct) / float(total_seen)
print(eval_model(interpreter, mnist_ds))
print(eval_model(interpreter_quant, mnist_ds))
archive_path = tf.keras.utils.get_file("resnet_v2_101.tgz", "https://storage.googleapis.com/download.tensorflow.org/models/tflite_11_05_08/resnet_v2_101.tgz", extract=True)
archive_path = pathlib.Path(archive_path)
archive_dir = str(archive_path.parent)
! cat {archive_dir}/resnet_v2_101_299_info.txt
graph_def_file = pathlib.Path(archive_path).parent/"resnet_v2_101_299_frozen.pb"
input_arrays = ["input"]
output_arrays = ["output"]
converter = tf.lite.TFLiteConverter.from_frozen_graph(
str(graph_def_file), input_arrays, output_arrays, input_shapes={"input":[1,299,299,3]})
converter.post_training_quantize = True
resnet_tflite_file = graph_def_file.parent/"resnet_v2_101_quantized.tflite"
resnet_tflite_file.write_bytes(converter.convert())
!ls -lh {archive_dir}/*.tflite
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Train and export the model
Step2: For the example, we only trained the model for a single epoch, so it only trains to ~96% accuracy.
Step3: Using the python TFLiteConverter, the saved model can be converted into a TFLite model.
Step4: Write it out to a tflite file
Step5: To quantize the model on export, set the post_training_quantize flag
Step6: Note how the resulting file, with post_training_quantize set, is approximately 1/4 the size.
Step7: Run the TFLite models
Step8: Load the model into an interpreter
Step9: Test the model on one image
Step10: Evaluate the models
Step11: We can repeat the evaluation on the weight quantized model to obtain
Step12: In this example, we have compressed the model with no difference in accuracy.
Step13: The info.txt file lists the input and output names. You can also find them using TensorBoard to visually inspect the graph.
|
11,672
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
import numpy
from matplotlib import pyplot
pyplot.style.use('ggplot')
def matmult1(A, x):
    """Entries of y are dot products of rows of A with x"""
y = numpy.zeros_like(A[:,0])
for i in range(len(A)):
row = A[i,:]
for j in range(len(row)):
y[i] += row[j] * x[j]
return y
A = numpy.array([[1,2],[3,5],[7,11]])
x = numpy.array([10,20])
matmult1(A, x)
def matmult2(A, x):
    """Same idea, but more compactly"""
y = numpy.zeros_like(A[:,0])
for i,row in enumerate(A):
y[i] = row.dot(x)
return y
matmult2(A, x)
def matmult3(A, x):
    """y is a linear expansion of the columns of A"""
y = numpy.zeros_like(A[:,0])
for j,col in enumerate(A.T):
y += col * x[j]
return y
matmult3(A, x)
# We will use this version
A.dot(x)
B = numpy.array([[2, 3],[0, 4]])
print(B)
print(B.dot(B.T), B.T.dot(B))
Binv = numpy.linalg.inv(B)
Binv.dot(B), B.dot(Binv)
# Make some polynomials
x = numpy.linspace(-1,1)
A = numpy.vander(x, 4)
q0 = A.dot(numpy.array([0,0,0,.5])) # .5
q1 = A.dot(numpy.array([0,0,1,0])) # x
q2 = A.dot(numpy.array([0,1,0,0])) # x^2
pyplot.figure()
pyplot.plot(x, numpy.array([q0, q1, q2]).T)
x
# Inner products of even and odd functions
q0 = q0 / numpy.linalg.norm(q0)
q1.dot(q0), q2.dot(q0), q2.dot(q1)
q0
# What is the constant component of q2?
pyplot.figure()
pyplot.plot(x, q2.dot(q0)*q0)
# Let's project that away so that q2 is orthogonal to q0
q2 = q2 - q2.dot(q0)*q0
Q = numpy.array([q0, q1, q2]).T
print(Q.T.dot(Q))
pyplot.figure()
pyplot.plot(x, Q)
def gram_schmidt_naive(X):
Q = numpy.zeros_like(X)
R = numpy.zeros((len(X.T),len(X.T)))
for i in range(len(Q.T)):
v = X[:,i].copy()
for j in range(i):
r = v.dot(Q[:,j])
R[j,i] = r
v -= r * Q[:,j] # "modified Gram-Schmidt" - remove each component before next dot product
R[i,i] = numpy.linalg.norm(v)
Q[:,i] = v / R[i,i]
return Q, R
x = numpy.linspace(-1,1,50)
k = 6
A = numpy.vander(x, k, increasing=True)
Q, R = gram_schmidt_naive(A)
print(numpy.linalg.norm(Q.T.dot(Q) - numpy.eye(k)))
print(numpy.linalg.norm(Q.dot(R)-A))
pyplot.figure()
pyplot.plot(x, Q)
A.shape
Q, R = gram_schmidt_naive(numpy.vander(x, 4, increasing=True))
pyplot.figure()
pyplot.plot(x, Q)
eps = numpy.float32(1)
while numpy.float32(1) + eps > 1:
eps /= numpy.float64(2)
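# Note: dividing by numpy.float64(2) promotes eps to double precision, so the
# comparison above runs in float64 and the loop measures float64 epsilon.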
eps_machine = 2*eps # We call this "machine epsilon"
print(eps_machine)
format((.2 - 1/3) + 2/15, '.20f')
format(.1, '.20f')
numpy.log(1 + 1e-10) - numpy.log1p(1e-10)
x1 = numpy.array([-0.9, 0.1, 0.5, 0.8]) # points where we know values
y = numpy.array([1, 2.4, -0.2, 1.3]) # values at those points
pyplot.figure()
pyplot.plot(x1, y, '*')
B = numpy.vander(x1, 4) # Vandermonde matrix at the known points
Q, R = gram_schmidt_naive(B)
p = numpy.linalg.solve(R, Q.T.dot(y)) # Compute the polynomial coefficients
print(p)
pyplot.plot(x, numpy.vander(x,4).dot(p)) # Plot the polynomial evaluated at all points
print('B =', B, '\np =', p)
m = 20
V = numpy.vander(numpy.linspace(-1,1,m), increasing=False)
Q, R = gram_schmidt_naive(V)
def qr_test(qr, V):
Q, R = qr(V)
m = len(Q.T)
print(qr.__name__,
numpy.linalg.norm(Q.dot(R) - V),
numpy.linalg.norm(Q.T.dot(Q) - numpy.eye(m)))
qr_test(gram_schmidt_naive, V)
qr_test(numpy.linalg.qr, V)
def gram_schmidt_classical(X):
Q = numpy.zeros_like(X)
R = numpy.zeros((len(X.T),len(X.T)))
for i in range(len(Q.T)):
v = X[:,i].copy()
R[:i,i] = Q[:,:i].T.dot(v)
v -= Q[:,:i].dot(R[:i,i])
R[i,i] = numpy.linalg.norm(v)
Q[:,i] = v / R[i,i]
return Q, R
qr_test(gram_schmidt_classical, V[:,:15])
# Q, R = numpy.linalg.qr(V)
#print(Q[:,0])
def gram_schmidt_modified(X):
Q = X.copy()
R = numpy.zeros((len(X.T), len(X.T)))
for i in range(len(Q.T)):
R[i,i] = numpy.linalg.norm(Q[:,i])
Q[:,i] /= R[i,i]
R[i,i+1:] = Q[:,i+1:].T.dot(Q[:,i])
Q[:,i+1:] -= numpy.outer(Q[:,i], R[i,i+1:])
return Q, R
qr_test(gram_schmidt_modified, V)
def householder_Q_times(V, x):
    """Apply orthogonal matrix represented as list of Householder reflectors"""
y = x.copy()
for i in reversed(range(len(V))):
y[i:] -= 2 * V[i] * V[i].dot(y[i:])
return y
def qr_householder1(A):
"Compute QR factorization using naive Householder reflection"
m, n = A.shape
R = A.copy()
V = []
for i in range(n):
x = R[i:,i]
v = -x
v[0] += numpy.linalg.norm(x)
v = v/numpy.linalg.norm(v) # Normalized reflector plane
R[i:,i:] -= 2 * numpy.outer(v, v.dot(R[i:,i:]))
V.append(v) # Storing reflectors is equivalent to storing orthogonal matrix
Q = numpy.eye(m, n)
for i in range(n):
Q[:,i] = householder_Q_times(V, Q[:,i])
return Q, numpy.triu(R[:n,:])
qr_test(qr_householder1, numpy.array([[1.,2],[3,4],[5,6]]))
qr_test(qr_householder1, V)
qr_test(numpy.linalg.qr, V)
qr_test(qr_householder1, numpy.eye(1))
qr_test(qr_householder1, numpy.eye(3,2))
qr_test(qr_householder1, numpy.array([[1.,1], [2e-8,1]]))
print(qr_householder1(numpy.array([[1.,1], [2e-8,1]])))
def qr_householder2(A):
"Compute QR factorization using Householder reflection"
m, n = A.shape
R = A.copy()
V = []
for i in range(n):
v = R[i:,i].copy()
v[0] += numpy.copysign(numpy.linalg.norm(v), v[0]) # Choose the further of the two reflections
v = v/numpy.linalg.norm(v) # Normalized reflector plane
R[i:,i:] -= 2 * numpy.outer(v, v.dot(R[i:,i:]))
V.append(v) # Storing reflectors is equivalent to storing orthogonal matrix
Q = numpy.eye(m, n)
for i in range(n):
Q[:,i] = householder_Q_times(V, Q[:,i])
return Q, numpy.triu(R[:n,:])
qr_test(qr_householder2, numpy.eye(3,2))
qr_test(qr_householder2, numpy.array([[1.,1], [1e-8,1]]))
print(qr_householder2(numpy.array([[1.,1], [1e-8,1]])))
qr_test(qr_householder2, V)
def R_solve(R, b):
    """Solve Rx = b using back substitution."""
x = b.copy()
m = len(b)
for i in reversed(range(m)):
x[i] -= R[i,i+1:].dot(x[i+1:])
x[i] /= R[i,i]
return x
x = numpy.linspace(-1,1,15)
A = numpy.vander(x, 4)
print(A.shape)
Q, R = numpy.linalg.qr(A)
b = Q.T.dot(A.dot(numpy.array([1,2,3,4])))
numpy.linalg.norm(R_solve(R, b) - numpy.linalg.solve(R, b))
R_solve(R, b)
# Test accuracy of solver for an ill-conditioned square matrix
x = numpy.linspace(-1,1,19)
A = numpy.vander(x)
print('cond(A) = ',numpy.linalg.cond(A))
Q, R = numpy.linalg.qr(A)
print('cond(R^{-1} Q^T A) =', numpy.linalg.cond(numpy.linalg.solve(R, Q.T.dot(A))))
L = numpy.linalg.cholesky(A.T.dot(A))
print('cond(L^{-T} L^{-1} A^T A) =', numpy.linalg.cond(numpy.linalg.solve(L.T, numpy.linalg.solve(L, A.T.dot(A)))))
class QR:
def __init__(self, F, tau):
self.F = F
self.tau = tau
def R(self):
n = len(self.tau)
return numpy.triu(self.F[:n,:])
def Qdot(self, x):
n = len(x)
k = x.shape[1] if len(x.shape) == 2 else 1
if n != len(self.tau):
raise ValueError("operands could not be multiplied with shapes", self.F.shape, x.shape)
y = numpy.zeros((len(self.F), k))
y[:n,:] = x
for i in reversed(range(len(self.tau))):
# y -= tau * v * v' y
# where v = [1; F[i+1:,i]]
tmp = y[i] + self.F[i+1:,i].dot(y[i+1:])
y[i] -= self.tau[i] * tmp
y[i+1:] -= self.tau[i] * numpy.outer(self.F[i+1:,i], tmp)
return y
def qr_householder_inplace(A):
"Compute QR factorization using Householder reflection"
m, n = A.shape
F = A.copy()
tau = numpy.zeros(len(A.T))
for i in range(n):
v = F[i:,i].copy()
F[i,i] = -numpy.copysign(numpy.linalg.norm(v), v[0]) # Choose the further of the two reflections
v[0] -= F[i,i]
v /= v[0]
tau[i] = 2 / numpy.linalg.norm(v)**2
# Store the reflector in the lower triangular part of R
F[i+1:,i] = v[1:]
# Update the remaining panel
F[i:,i+1:] -= tau[i] * numpy.outer(v, v.dot(F[i:,i+1:]))
return QR(F, tau)
m, n = 5, 3
x = numpy.linspace(-1,1,m)
A = numpy.vander(x, n, increasing=True)
qr = qr_householder_inplace(A)
R = qr.R()
I = numpy.eye(n)
Q1 = qr.Qdot(I)
print(numpy.linalg.norm(Q1.T.dot(Q1) - I),
numpy.linalg.norm(qr.Qdot(R)-A))
Q, R = qr_householder2(A)
print(numpy.linalg.norm(R - qr.R()),
numpy.linalg.norm(Q1 - Q))
def cosspace(a, b, n):
return (a+b)/2 + (b-a)/2 * numpy.cos(numpy.linspace(0, numpy.pi, n))
def vander_cheb(x, k):
A = numpy.zeros((len(x), k))
if k > 0:
A[:,0] = 1
if k > 1:
A[:,1] = x
for j in range(2,k):
A[:,j] = 2*x*A[:,j-1] - A[:,j-2]
return A
x = cosspace(-1, 1, 1000)
A = vander_cheb(x, 500)
numpy.linalg.cond(A)
class QRT:
def __init__(self, F, T):
self.F = F
self.T = T
self.shape = self.F.shape
def R(self):
n = len(self.T)
return numpy.triu(self.F[:n,:])
def Qdot(self, x):
m, n = self.shape
k = x.shape[1] if len(x.shape) == 2 else 1
if x.shape[0] not in {n, m}:
raise ValueError("operands could not be multiplied with shapes", self.shape, x.shape)
y = numpy.zeros((m, k))
y[:x.shape[0]] = x
D = numpy.tril(self.F[:n])
numpy.fill_diagonal(D, 1)
tmp = D.T.dot(x[:n])
if x.shape[0] > n:
tmp += self.F[n:].T.dot(x[n:])
tmp = self.T.dot(tmp)
y[:n] -= D.dot(tmp)
y[n:] -= self.F[n:].dot(tmp)
return y
def qr_householder_wy(A):
"Compute QR factorization using blocked Householder reflection, result in compact WY"
m, n = A.shape
F = A.copy()
T = numpy.zeros((n, n))
for i in range(n):
v = F[i:,i] # in-place
d = v[1:].dot(v[1:])
norm = numpy.sqrt(v[0]**2 + d)
Rii = -numpy.copysign(norm, v[0]) # Choose the further of the two reflections
v[0] -= Rii
tau = 2 * v[0]**2 / (v[0]**2 + d)
v /= v[0]
# Update the remaining panel
F[i:,i+1:] -= tau * numpy.outer(v, v.dot(F[i:,i+1:]))
T[i,i] = tau
if i > 0: # Add this column to T
T[:i,i] = -tau * T[:i,:i].dot(F[i:,:i].T.dot(v))
F[i,i] = Rii
return QRT(F, T)
m, n = 5, 4
A = vander_cheb(cosspace(-1,1,m), n)
qrt = qr_householder_wy(A)
qr = qr_householder_inplace(A)
I = numpy.eye(n,n)
print(numpy.linalg.norm(qrt.R() - qr.R()), numpy.linalg.norm(qrt.Qdot(I) - qr.Qdot(I)))
class QRTS:
def __init__(self, qr0, qr1, qr):
self.qr0 = qr0
self.qr1 = qr1
self.qr = qr
self.shape = (self.qr0.shape[0] + self.qr1.shape[0], self.qr.shape[1])
def R(self):
return self.qr.R()
def Qdot(self, x):
m, n = self.shape
k = x.shape[1] if len(x.shape) == 2 else 1
y = numpy.zeros((m, k))
tmp = self.qr.Qdot(x)
y[:m//2] = self.qr0.Qdot(tmp[:n])
y[m//2:] = self.qr1.Qdot(tmp[n:])
return y
def tsqr(A):
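    """Tall-skinny QR: recursively factor the top and bottom halves, then
    factor the stacked R factors; Q is applied implicitly through QRTS."""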
m, n = A.shape
if m <= 2*n:
return qr_householder_wy(A)
else:
qr0 = tsqr(A[:m//2])
qr1 = tsqr(A[m//2:])
B = numpy.concatenate([qr0.R(), qr1.R()])
qr = tsqr(B)
return QRTS(qr0, qr1, qr)
m, n = 50, 4
A = vander_cheb(cosspace(-1,1,m), n)
qrt = qr_householder_wy(A)
qrts = tsqr(A)
I = numpy.eye(n, n)
Qts = qrts.Qdot(I)
print(numpy.linalg.norm(Qts.T.dot(Qts) - I), numpy.linalg.norm(Qts.dot(qrts.R()) - A))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Jupyter notebooks
Step4: Some common terminology
Step5: Inner products and orthogonality
Step6: Gram-Schmidt Orthogonalization
Step7: Theorem
Step8: Relative condition number
Step9: Stability
Step10: Classical Gram-Schmidt is highly parallel, but unstable, as evidenced by the lack of orthogonality in $Q$.
Step12: Householder triangularization
Step13: Choice of two projections
Step14: Inside qr_householder1, we have the lines
Step15: The error $QR - A$ is still $10^{-8}$ for this very well-conditioned matrix so something else must be at play here.
Step17: We now have a usable implementation of Householder QR. There are some further concerns for factoring rank-deficient matrices. We will visit the concept of pivoting later, in the context of LU and Cholesky factorization.
Step18: Cost of Householder factorization
Step19: The Singular Value Decomposition
Step20: QR blocking
Step21: Factoring a large square matrix in this way is generally not optimal; it is usually better to factor each block of columns using a WY representation, resulting in a sequence of block reflectors. Start by partitioning the original matrix to create a $k\times k$ block $A_{00}$.
|
11,673
|
<ASSISTANT_TASK:>
Python Code:
# this is a python comment
# this cell contains python code
# executing the cell yields the results of the python command
2+2
# live code some graphics here
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot([3,1,4,1,5])
plt.style.use("fivethirtyeight")
plt.plot([3,1,4,1,5])
# your turn: plot some additional digits of pi
import sympy
# to digits and then plot
pi_str = str(sympy.N(sympy.pi, n=100))
pi_digits = [int(x) for x in pi_str if x != '.']
plt.plot(pi_digits)
# live code an example of loading the va data csv with pandas here
import pandas as pd
df = pd.read_csv('../3-data/IHME_PHMRC_VA_DATA_ADULT_Y2013M09D11_0.csv', low_memory=False)
# DataFrame.iloc method selects row and columns by "integer location"
df.iloc[5:10, 5:10]
# If you are new to this sort of thing, what do you think this does?
df.iloc[5:10, :10]
# I don't have time to show you the details now, but I find that
# pandas DataFrames have really done things well. For example:
df.gs_text34
df.gs_text34.value_counts()
# you can guess what the next line does,
# even if you have never used python before:
import sklearn.neighbors
# here is how sklearn creates a "classifier":
clf = sklearn.neighbors.KNeighborsClassifier()
# I didn't mention `numpy` before, but this is "the fundamental
# package for scientific computing with Python"
import numpy as np
# sklearn gets mixed up with Pandas DataFrames and Series,
# so you need to turn things into np.arrays:
X = np.array(df.loc[:, ['va46']])
y = np.array(df.gs_text34)
# one nice thing about sklearn is that it has all different
# fancy machine learning methods, but they all follow a
# common pattern:
clf.fit(X, y)
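# Predict the label for a single feature vector (va46 == 19); KNN returns
# the majority class among the nearest training examples.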
clf.predict([[19]])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Why does it work so well for me?
Step2: We will use two "packages" for the hands-on portion of this tutorial
Step3: Scikit-Learn
|
11,674
|
<ASSISTANT_TASK:>
Python Code:
# Create an object called "foo" and assign it the value 9.
foo = 9
# Try running this block!
foo + 5
# Evaluate the variable itself
foo
# Here, foo has the value of 9. Let's add 9 to it.
foo = foo + 9
foo
# Correct variable names
building_height = 100
water = 4
result = building_height + water
# Addition
3 + 15
#Subtraction
9 - 3
#Multiplication
3 * 4
# Division
15 / 3
# Integer Division
15 // 3
# Division
9 / 4
# Integer Division
9 // 4
# Modulo
3 % 2
# Integer
a = 3
# floating-point
b = 3.0
# Adding two integers
2 + 3
# Adding two doubles
9.1 + 3.0
# Adding an integer and float (automatic type conversion)
4 + 11.3
2.1 + .9
# Converts the argument (3) which is an integer, and spits out a float (3.0) in the REPL.
float(3)
# Coverts the argument (5.9) which is a float, and spits out an integer (5) in the REPL.
# Note: truncation, not rounding.
int(5.9)
# Let's add 0.1 and 0.2, to get 0.3.
result = 0.1 + 0.2
result
# This operation should return 16.3
7.6 + 8.7
import decimal
# We use 'decimal' as the module name and call its getcontext() function,
# setting the .prec value (precision) to 3 significant digits.
decimal.getcontext().prec=3
# We use what's called a print() function to literally print our result to the screen (similar to the REPL behavior)
print(decimal.Decimal(0.1) + decimal.Decimal(0.2))
# printing a number calculation
print(4 + 3)
# printing a number
print(11)
# printing a word (more on "strings" later)
print("Hello, there!")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Jupyter Notebooks are known to be REPL environments. This simply means that the notebooks serve as "interactive" programming environments, where calculations or other processes can be done impromptu. REPL stands for Read-Evaluate-Print-Loop.
Step2: We can use declaration constructs that contain a variable name twice. This allows a programmer to update a variable in place. When the variable that appears twice is on the right of the "=" operator, it is using whatever value it held from the last assignment. If there are other operators, like the "+" addition operator, then the calculation is done first and the result is then stored back into the variable.
Step3: As you can see, the foo that is to the right of the "=" operator currently has the value of 9 (as we defined before). We added 9 to it, and then updated foo by assigning the result back to it. Note that the earlier "foo + 5" did NOT store anything back into foo, so the value of foo never changed there.
Step4: Arithmetic Operators
Step5: Division has two forms in Python 3 (and 2)
Step6: Types of Numbers (int and float)
Step7: Whenever you add two integers together, you will end up with an integer. The same applies to floats. However, if you add a float and an integer together, the result will always be a float. This process is called automatic type conversion.
Step8: Conversion Functions
Step9: Floating-point Inaccuracy
Step10: The result was not what we expected. Why can't the interpreter get this right? Simply put, it is because computers represent their numbers in base-2, while we typically use base-10. Some numbers that are easy to represent in base-10 are impossible to represent exactly in base-2, and vice versa (see the short sketch after this list).
Step11: Print Functions
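A short standalone sketch (not in the original) that makes the base-2 rounding visible; format and decimal.Decimal are standard-library calls:

import decimal
print(format(0.1, '.20f'))   # the double nearest to 0.1: 0.10000000000000000555...
print(decimal.Decimal(0.1))  # the exact value actually stored
print(0.1 + 0.2 == 0.3)      # False, because of that rounding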
|
11,675
|
<ASSISTANT_TASK:>
Python Code:
from parcels import FieldSet, ParticleSet, JITParticle
from parcels import AdvectionRK4
import numpy as np
from datetime import timedelta as delta
fieldset = FieldSet.from_parcels("Peninsula_data/peninsula", allow_time_extrapolation=True)
npart = 10 # number of particles to be released
lon = 3e3 * np.ones(npart)
lat = np.linspace(3e3 , 45e3, npart, dtype=np.float32)
time = np.arange(0, npart) * delta(hours=2).total_seconds() # release every particle two hours later
pset = ParticleSet(fieldset=fieldset, pclass=JITParticle, lon=lon, lat=lat, time=time)
output_file = pset.ParticleFile(name="Output.nc", outputdt=delta(hours=2))
pset.execute(AdvectionRK4, runtime=delta(hours=24), dt=delta(minutes=5),
output_file=output_file)
output_file.close() # export the trajectory data to a netcdf file
pset = ParticleSet(fieldset=fieldset, pclass=JITParticle, lon=lon, lat=lat, time=time)
output_file = pset.ParticleFile(name="Output.zarr", outputdt=delta(hours=2)) # note .zarr extension in name!
pset.execute(AdvectionRK4, runtime=delta(hours=24), dt=delta(minutes=5),
output_file=output_file)
output_file.close() # export the trajectory data to a zarr store
import netCDF4
data_netcdf4 = netCDF4.Dataset('Output.nc')
print(data_netcdf4)
trajectory_netcdf4 = data_netcdf4.variables['trajectory'][:]
time_netcdf4 = data_netcdf4.variables['time'][:]
lon_netcdf4 = data_netcdf4.variables['lon'][:]
lat_netcdf4 = data_netcdf4.variables['lat'][:]
print(trajectory_netcdf4)
import xarray as xr
data_xarray = xr.open_dataset('Output.nc')
print(data_xarray)
data_xarray_zarr = xr.open_zarr('Output.zarr')
print(data_xarray_zarr)
print(data_xarray['trajectory'])
np.set_printoptions(linewidth=160)
ns_per_hour = np.timedelta64(1, 'h') # nanoseconds in an hour
print(data_xarray['time'].data/ns_per_hour) # time is stored in nanoseconds
import matplotlib.pyplot as plt
x = data_xarray['lon'].values
y = data_xarray['lat'].values
distance = np.cumsum(np.sqrt(np.square(np.diff(x))+np.square(np.diff(y))),axis=1) # d = (dx^2 + dy^2)^(1/2)
real_time = data_xarray['time']/ns_per_hour # convert time to hours
time_since_release = (real_time.values.transpose() - real_time.values[:,0]) # subtract the initial time from each timeseries
fig,(ax1,ax2) = plt.subplots(1,2,figsize=(10,4),constrained_layout=True)
ax1.set_ylabel('Distance travelled [m]')
ax1.set_xlabel('observation',weight='bold')
d_plot = ax1.plot(distance.transpose())
ax2.set_ylabel('Distance travelled [m]')
ax2.set_xlabel('time since release [hours]',weight='bold')
d_plot_t = ax2.plot(time_since_release[1:],distance.transpose())
plt.show()
plt.figure()
ax= plt.axes()
ax.set_ylabel('Distance travelled [m]')
ax.set_xlabel('time [hours]',weight='bold')
d_plot_t = ax.plot(real_time.T[1:],distance.transpose())
# Using xarray
mean_lon_x = []
mean_lat_x = []
timerange = np.arange(np.nanmin(data_xarray['time'].values),
np.nanmax(data_xarray['time'].values)+np.timedelta64(delta(hours=2)),
delta(hours=2)) # timerange in nanoseconds
for time in timerange:
if np.all(np.any(data_xarray['time']==time,axis=1)): # if all trajectories share an observation at time
mean_lon_x += [np.nanmean(data_xarray['lon'].where(data_xarray['time']==time).values)] # find the data that share the time
mean_lat_x += [np.nanmean(data_xarray['lat'].where(data_xarray['time']==time).values)] # find the data that share the time
# Using netCDF4
mean_lon_n = []
mean_lat_n = []
timerange = np.arange(np.nanmin(time_netcdf4),
np.nanmax(time_netcdf4)+delta(hours=2).total_seconds(),
delta(hours=2).total_seconds())
for time in timerange:
if np.all(np.any(time_netcdf4 == time, axis=1)): # if all trajectories share an observation at time
mean_lon_n += [np.mean(lon_netcdf4[time_netcdf4 == time])] # find the data that share the time
mean_lat_n += [np.mean(lat_netcdf4[time_netcdf4 == time])] # find the data that share the time
plt.figure()
ax = plt.axes()
ax.set_ylabel('Meridional distance [m]')
ax.set_xlabel('Zonal distance [m]')
ax.grid()
ax.scatter(mean_lon_x,mean_lat_x,marker='^',label='xarray',s = 80)
ax.scatter(mean_lon_n,mean_lat_n,marker='o',label='netcdf')
plt.legend()
plt.show()
fig, (ax1,ax2,ax3,ax4) = plt.subplots(1,4,figsize=(16,3.5),constrained_layout=True)
###-Points-###
ax1.set_title('Points')
ax1.scatter(data_xarray['lon'].T,data_xarray['lat'].T)
###-Lines-###
ax2.set_title('Lines')
ax2.plot(data_xarray['lon'].T,data_xarray['lat'].T)
###-Points + Lines-###
ax3.set_title('Points + Lines')
ax3.plot(data_xarray['lon'].T,data_xarray['lat'].T,marker='o')
###-Not Transposed-###
ax4.set_title('Not transposed')
ax4.plot(data_xarray['lon'],data_xarray['lat'],marker='o')
plt.show()
from matplotlib.animation import FuncAnimation
from IPython.display import HTML
outputdt = delta(hours=2)
timerange = np.arange(np.nanmin(data_xarray['time'].values),
np.nanmax(data_xarray['time'].values)+np.timedelta64(outputdt),
outputdt) # timerange in nanoseconds
%%capture
fig = plt.figure(figsize=(5,5),constrained_layout=True)
ax = fig.add_subplot()
ax.set_ylabel('Meridional distance [m]')
ax.set_xlabel('Zonal distance [m]')
ax.set_xlim(0, 90000)
ax.set_ylim(0, 50000)
plt.xticks(rotation=45)
time_id = np.where(data_xarray['time'] == timerange[0]) # Indices of the data where time = 0
scatter = ax.scatter(data_xarray['lon'].values[time_id], data_xarray['lat'].values[time_id])
t = str(timerange[0].astype('timedelta64[h]'))
title = ax.set_title('Particles at t = '+t)
def animate(i):
t = str(timerange[i].astype('timedelta64[h]'))
title.set_text('Particles at t = '+t)
time_id = np.where(data_xarray['time'] == timerange[i])
scatter.set_offsets(np.c_[data_xarray['lon'].values[time_id], data_xarray['lat'].values[time_id]])
anim = FuncAnimation(fig, animate, frames = len(timerange), interval=500)
HTML(anim.to_jshtml())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exporting trajectory data in zarr format
Step2: Reading the output file
Step3: Using the xarray package
Step4: Note that opening the .zarr file (see Exporting trajectory data in zarr format) using xr.open_zarr() leads to a similar object
Step5: Trajectory data structure
Step6: Note how the first observation occurs at a different time for each trajectory. obs != time
Step7: The two figures above show the same graph. Time is not needed to create the first figure. The time variable minus the first value of each trajectory gives the x-axis the correct units in the second figure.
Step8: Conditional selection
Step9: Conditional selection is even easier on plain numpy arrays, without the xarray formatting: the 2D boolean array produced by time_netcdf4 == time can be used directly as a mask to select the data (a minimal standalone sketch follows this list).
Step10: Plotting
Step11: Animations
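A minimal standalone sketch (not from the original notebook) of the boolean-mask selection mentioned in the conditional-selection step:

import numpy as np
a = np.array([[1., 2., 3.], [4., 5., 6.]])
t = np.array([[0, 1, 0], [1, 0, 0]])
mask = (t == 1)    # 2D boolean array, same shape as a
print(a[mask])     # directly selects the matching elements: [2. 4.]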
|
11,676
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from jyquickhelper import add_notebook_menu
add_notebook_menu()
import matplotlib.pyplot as plt
poids = [ 0.2, 0.15, 0.15, 0.1, 0.4 ]
valeur = [ 0,1,2,3,4 ]
plt.figure(figsize=(8,4))
plt.bar(valeur,poids)
import numpy.random as rnd
draw = rnd.multinomial(1000, poids)
draw / sum(draw)
draw = rnd.multinomial(1, poids, 1000)
draw
import numpy
cum = numpy.cumsum( poids )  # see http://docs.scipy.org/doc/numpy/reference/generated/numpy.cumsum.html
print(cum)
plt.bar( valeur, cum)
import functools, random
def simulation_multinomiale(poids):
    # copy the weights into a list (the cumulative sum is accumulated on the fly in the loop below)
    # see https://docs.python.org/3.4/library/functools.html#functools.reduce
def agg(x,y):
x.append(y)
return x
cum = functools.reduce(agg, poids, [])
x = random.random()
s = 0
i = 0
while s <= x and i < len(cum):
s += cum[i]
i += 1
return i-1
alea = [ simulation_multinomiale(poids) for i in range(0,1000) ]
alea[:10]
import collections
c = collections.Counter(alea)
c
def simulation_multinomiale(pc):
x = random.random()
s = 0
i = 0
while s <= x and i < len(pc):
s += pc[i]
i += 1
return i-1
def agg(x,y):
x.append(y)
return x
poids_cumule = functools.reduce(agg, poids, [])
poids_cumule_trie = functools.reduce(agg, sorted(poids, reverse=True), [])
print(poids_cumule, poids_cumule_trie)
import time
for p in range(0,3):
    print("pass",p)
    a = time.perf_counter()
    alea = [ simulation_multinomiale(poids_cumule) for i in range(0,10000) ]
    b = time.perf_counter()
    print("  unsorted", b-a)
    a = time.perf_counter()
    alea = [ simulation_multinomiale(poids_cumule_trie) for i in range(0,10000) ]
    b = time.perf_counter()
    print("  sorted", b-a)
poids_trie = list(sorted(poids, reverse=True))
for p in range(0,3):
    print("pass",p)
    a = time.perf_counter()
    rnd.multinomial(10000, poids)
    b = time.perf_counter()
    print("  unsorted", b-a)
    a = time.perf_counter()
    rnd.multinomial(10000, poids_trie)
    b = time.perf_counter()
    print("  sorted", b-a)
K = 100
poids = [ 1/K for i in range(0,K) ]
poids_cumule = functools.reduce(agg, poids, [])
vecteur = [ simulation_multinomiale(poids_cumule) for i in range(0,100000) ]
N = len(vecteur)-1
for p in range(0,3):
print("passage",p)
a = time.perf_counter()
alea = [ simulation_multinomiale(poids_cumule) for i in range(0,10000) ]
b = time.perf_counter()
print(" simulation_multinomiale", b-a)
a = time.perf_counter()
alea = [ vecteur [ random.randint(0,N) ] for i in range(0,10000) ]
b = time.perf_counter()
print(" bootstrap", b-a)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A multinomially distributed variable is an integer-valued variable that takes its values in a finite set, and each of these values has a different probability.
Step2: When simulating such a distribution, each value comes out with a probability proportional to its weight. The function numpy.random.multinomial computes this.
Step3: To get 1000 individual draws rather than the aggregation of the 1000 draws
Step4: Simulation algorithm
Step5: This cumulative distribution function $(x,F(x))$ is increasing. We define the five intervals (see the inverse-CDF sketch after this list)
Step6: We check that each number appears with the expected probability.
Step7: A first optimization
Step8: The second version is faster. Its benefit depends on the number of random observations to draw. Indeed, if $K$ is the number of distinct values, the fixed costs of the two methods are as follows
Step9: This is also faster. We can see that this approach is much faster than the version implemented in pure Python. That is because the numpy module is optimized for numerical computation and, above all, is implemented in C++ and Fortran.
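A minimal inverse-CDF sampling sketch (not in the original; itertools.accumulate and bisect are standard-library calls):

import bisect, random
from itertools import accumulate
poids = [0.2, 0.15, 0.15, 0.1, 0.4]
cum = list(accumulate(poids))   # increasing CDF: [0.2, 0.35, 0.5, 0.6, 1.0]
x = random.random()
valeur = bisect.bisect(cum, x)  # index of the interval that contains x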
|
11,677
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy.ndimage import label, find_objects
from scipy.ndimage.morphology import generate_binary_structure
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
from nacoustik import Wave
from nacoustik.spectrum import psd
from nacoustik.noise import remove_background_noise
%matplotlib inline
filepath = ""
sound = Wave(filepath)
sound.read()
sound.normalize()
f, t, a = psd(sound)
ale = remove_background_noise(a, iterations=10)
s = generate_binary_structure(2, 2)
s
labels = np.empty_like(ale, dtype=np.int32)
n_features = np.empty(shape=(2), dtype=np.int32)
for channel in range(sound.n_channels):
labels[channel], n_features[channel] = label(ale[channel], structure=s)
# figure configuration
dpi = 192
channels = sound.n_channels
fig, ax = plt.subplots(channels, 1)
fig.set_dpi(dpi)
fig.set_figwidth((920 / dpi) * 3)
fig.set_figheight((460 / dpi) * 3)
plt.subplots_adjust(left=0, bottom=0, right=1, top=1, wspace=0, hspace=0)
fig.set_frameon(False)
# specify frequency bins (width of 1 kiloherz)
bins = np.arange(0, (sound.rate / 2), 1000)
# calculate the t_step and f_step
t_step = t[1] - t[0]
f_step = f[1] - f[0]
# psd spectrogram ale
for channel in range(channels):
spec = ax[channel].pcolormesh(t, f, ale[channel], cmap='viridis')
ax[channel].set(ylim=([0, sound.rate / 2]),
#xticks = np.arange(30, sound.duration, 30).astype(np.int),
yticks = bins.astype(np.int) + 1000)
ax[channel].tick_params(length=12, color='white',
bottom=False, labelbottom=False,
top=False, labeltop=False,
labelleft=False,
labelright=False)
ax[channel].set_frame_on(False)
# draw bounding boxes
    for i in range(1, labels[channel].max() + 1):  # label 0 is the background
        loc = find_objects(labels[channel] == i)[0]
x = loc[1].start * t_step
y = loc[0].start * f_step
width = (loc[1].stop - loc[1].start) * t_step
height = (loc[0].stop - loc[0].start) * f_step
rec = Rectangle((x, y), width = width, height = height, color='#00FF80', fill=False)
p = ax[channel].add_patch(rec)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Variable definitions
Step2: Compute spectrogram
Step3: Remove background noise
Step4: Label regions of interest
Step5: Plot regions of interest
|
11,678
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import os
import re
# Defining hyperparameters
VOCAB_SIZE = 8192
MAX_SAMPLES = 50000
BUFFER_SIZE = 20000
MAX_LENGTH = 40
EMBED_DIM = 256
LATENT_DIM = 512
NUM_HEADS = 8
BATCH_SIZE = 64
path_to_zip = keras.utils.get_file(
"cornell_movie_dialogs.zip",
origin="http://www.cs.cornell.edu/~cristian/data/cornell_movie_dialogs_corpus.zip",
extract=True,
)
path_to_dataset = os.path.join(
os.path.dirname(path_to_zip), "cornell movie-dialogs corpus"
)
path_to_movie_lines = os.path.join(path_to_dataset, "movie_lines.txt")
path_to_movie_conversations = os.path.join(path_to_dataset, "movie_conversations.txt")
def load_conversations():
# Helper function for loading the conversation splits
id2line = {}
with open(path_to_movie_lines, errors="ignore") as file:
lines = file.readlines()
for line in lines:
parts = line.replace("\n", "").split(" +++$+++ ")
id2line[parts[0]] = parts[4]
inputs, outputs = [], []
with open(path_to_movie_conversations, "r") as file:
lines = file.readlines()
for line in lines:
parts = line.replace("\n", "").split(" +++$+++ ")
# get conversation in a list of line ID
conversation = [line[1:-1] for line in parts[3][1:-1].split(", ")]
for i in range(len(conversation) - 1):
inputs.append(id2line[conversation[i]])
outputs.append(id2line[conversation[i + 1]])
if len(inputs) >= MAX_SAMPLES:
return inputs, outputs
return inputs, outputs
questions, answers = load_conversations()
# Splitting training and validation sets
train_dataset = tf.data.Dataset.from_tensor_slices((questions[:40000], answers[:40000]))
val_dataset = tf.data.Dataset.from_tensor_slices((questions[40000:], answers[40000:]))
def preprocess_text(sentence):
sentence = tf.strings.lower(sentence)
# Adding a space between the punctuation and the last word to allow better tokenization
sentence = tf.strings.regex_replace(sentence, r"([?.!,])", r" \1 ")
# Replacing multiple continuous spaces with a single space
sentence = tf.strings.regex_replace(sentence, r"\s\s+", " ")
# Replacing non english words with spaces
sentence = tf.strings.regex_replace(sentence, r"[^a-z?.!,]+", " ")
sentence = tf.strings.strip(sentence)
sentence = tf.strings.join(["[start]", sentence, "[end]"], separator=" ")
return sentence
vectorizer = layers.TextVectorization(
VOCAB_SIZE,
standardize=preprocess_text,
output_mode="int",
output_sequence_length=MAX_LENGTH,
)
# We will adapt the vectorizer to both the questions and answers
# This dataset is batched to parallelize and speed up the process
vectorizer.adapt(tf.data.Dataset.from_tensor_slices((questions + answers)).batch(128))
def vectorize_text(inputs, outputs):
inputs, outputs = vectorizer(inputs), vectorizer(outputs)
# One extra padding token to the right to match the output shape
outputs = tf.pad(outputs, [[0, 1]])
return (
{"encoder_inputs": inputs, "decoder_inputs": outputs[:-1]},
{"outputs": outputs[1:]},
)
train_dataset = train_dataset.map(vectorize_text, num_parallel_calls=tf.data.AUTOTUNE)
val_dataset = val_dataset.map(vectorize_text, num_parallel_calls=tf.data.AUTOTUNE)
train_dataset = (
train_dataset.cache()
.shuffle(BUFFER_SIZE)
.batch(BATCH_SIZE)
.prefetch(tf.data.AUTOTUNE)
)
val_dataset = val_dataset.cache().batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)
class FNetEncoder(layers.Layer):
def __init__(self, embed_dim, dense_dim, **kwargs):
super(FNetEncoder, self).__init__(**kwargs)
self.embed_dim = embed_dim
self.dense_dim = dense_dim
self.dense_proj = keras.Sequential(
[
layers.Dense(dense_dim, activation="relu"),
layers.Dense(embed_dim),
]
)
self.layernorm_1 = layers.LayerNormalization()
self.layernorm_2 = layers.LayerNormalization()
def call(self, inputs):
# Casting the inputs to complex64
inp_complex = tf.cast(inputs, tf.complex64)
# Projecting the inputs to the frequency domain using FFT2D and
# extracting the real part of the output
fft = tf.math.real(tf.signal.fft2d(inp_complex))
proj_input = self.layernorm_1(inputs + fft)
proj_output = self.dense_proj(proj_input)
return self.layernorm_2(proj_input + proj_output)
class PositionalEmbedding(layers.Layer):
def __init__(self, sequence_length, vocab_size, embed_dim, **kwargs):
super(PositionalEmbedding, self).__init__(**kwargs)
self.token_embeddings = layers.Embedding(
input_dim=vocab_size, output_dim=embed_dim
)
self.position_embeddings = layers.Embedding(
input_dim=sequence_length, output_dim=embed_dim
)
self.sequence_length = sequence_length
self.vocab_size = vocab_size
self.embed_dim = embed_dim
def call(self, inputs):
length = tf.shape(inputs)[-1]
positions = tf.range(start=0, limit=length, delta=1)
embedded_tokens = self.token_embeddings(inputs)
embedded_positions = self.position_embeddings(positions)
return embedded_tokens + embedded_positions
def compute_mask(self, inputs, mask=None):
return tf.math.not_equal(inputs, 0)
class FNetDecoder(layers.Layer):
def __init__(self, embed_dim, latent_dim, num_heads, **kwargs):
super(FNetDecoder, self).__init__(**kwargs)
self.embed_dim = embed_dim
self.latent_dim = latent_dim
self.num_heads = num_heads
self.attention_1 = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=embed_dim
)
self.attention_2 = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=embed_dim
)
self.dense_proj = keras.Sequential(
[
layers.Dense(latent_dim, activation="relu"),
layers.Dense(embed_dim),
]
)
self.layernorm_1 = layers.LayerNormalization()
self.layernorm_2 = layers.LayerNormalization()
self.layernorm_3 = layers.LayerNormalization()
self.supports_masking = True
def call(self, inputs, encoder_outputs, mask=None):
causal_mask = self.get_causal_attention_mask(inputs)
if mask is not None:
padding_mask = tf.cast(mask[:, tf.newaxis, :], dtype="int32")
padding_mask = tf.minimum(padding_mask, causal_mask)
attention_output_1 = self.attention_1(
query=inputs, value=inputs, key=inputs, attention_mask=causal_mask
)
out_1 = self.layernorm_1(inputs + attention_output_1)
attention_output_2 = self.attention_2(
query=out_1,
value=encoder_outputs,
key=encoder_outputs,
attention_mask=padding_mask,
)
out_2 = self.layernorm_2(out_1 + attention_output_2)
proj_output = self.dense_proj(out_2)
return self.layernorm_3(out_2 + proj_output)
def get_causal_attention_mask(self, inputs):
input_shape = tf.shape(inputs)
batch_size, sequence_length = input_shape[0], input_shape[1]
i = tf.range(sequence_length)[:, tf.newaxis]
j = tf.range(sequence_length)
mask = tf.cast(i >= j, dtype="int32")
mask = tf.reshape(mask, (1, input_shape[1], input_shape[1]))
mult = tf.concat(
[tf.expand_dims(batch_size, -1), tf.constant([1, 1], dtype=tf.int32)],
axis=0,
)
return tf.tile(mask, mult)
def create_model():
encoder_inputs = keras.Input(shape=(None,), dtype="int32", name="encoder_inputs")
x = PositionalEmbedding(MAX_LENGTH, VOCAB_SIZE, EMBED_DIM)(encoder_inputs)
encoder_outputs = FNetEncoder(EMBED_DIM, LATENT_DIM)(x)
encoder = keras.Model(encoder_inputs, encoder_outputs)
decoder_inputs = keras.Input(shape=(None,), dtype="int32", name="decoder_inputs")
encoded_seq_inputs = keras.Input(
shape=(None, EMBED_DIM), name="decoder_state_inputs"
)
x = PositionalEmbedding(MAX_LENGTH, VOCAB_SIZE, EMBED_DIM)(decoder_inputs)
x = FNetDecoder(EMBED_DIM, LATENT_DIM, NUM_HEADS)(x, encoded_seq_inputs)
x = layers.Dropout(0.5)(x)
decoder_outputs = layers.Dense(VOCAB_SIZE, activation="softmax")(x)
decoder = keras.Model(
[decoder_inputs, encoded_seq_inputs], decoder_outputs, name="outputs"
)
decoder_outputs = decoder([decoder_inputs, encoder_outputs])
fnet = keras.Model([encoder_inputs, decoder_inputs], decoder_outputs, name="fnet")
return fnet
fnet = create_model()
fnet.compile("adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
fnet.fit(train_dataset, epochs=1, validation_data=val_dataset)
VOCAB = vectorizer.get_vocabulary()
def decode_sentence(input_sentence):
# Mapping the input sentence to tokens and adding start and end tokens
tokenized_input_sentence = vectorizer(
tf.constant("[start] " + preprocess_text(input_sentence) + " [end]")
)
# Initializing the initial sentence consisting of only the start token.
tokenized_target_sentence = tf.expand_dims(VOCAB.index("[start]"), 0)
decoded_sentence = ""
for i in range(MAX_LENGTH):
# Get the predictions
predictions = fnet.predict(
{
"encoder_inputs": tf.expand_dims(tokenized_input_sentence, 0),
"decoder_inputs": tf.expand_dims(
tf.pad(
tokenized_target_sentence,
[[0, MAX_LENGTH - tf.shape(tokenized_target_sentence)[0]]],
),
0,
),
}
)
# Calculating the token with maximum probability and getting the corresponding word
sampled_token_index = tf.argmax(predictions[0, i, :])
sampled_token = VOCAB[sampled_token_index.numpy()]
# If sampled token is the end token then stop generating and return the sentence
if tf.equal(sampled_token_index, VOCAB.index("[end]")):
break
decoded_sentence += sampled_token + " "
tokenized_target_sentence = tf.concat(
[tokenized_target_sentence, [sampled_token_index]], 0
)
return decoded_sentence
decode_sentence("Where have you been all this time?")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading data
Step2: Preprocessing and Tokenization
Step3: Tokenizing and padding sentences using TextVectorization
Step4: Creating the FNet Encoder (a standalone numpy sketch of its Fourier mixing step follows this list)
Step5: Creating the Decoder
Step6: Creating and Training the model
Step7: Here, the epochs parameter is set to a single epoch, but in practice the model needs many more epochs of training before it starts producing comprehensible replies.
Step8: Performing inference
|
11,679
|
<ASSISTANT_TASK:>
Python Code:
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
11,680
|
<ASSISTANT_TASK:>
Python Code:
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import os
import io
import tarfile
import PIL
import boto3
from fastai.vision import *
path = untar_data(URLs.PETS); path
path_anno = path/'annotations'
path_img = path/'images'
fnames = get_image_files(path_img)
np.random.seed(2)
pat = re.compile(r'/([^/]+)_\d+.jpg$')
bs=64
img_size=299
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(),
size=img_size, bs=bs//2).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet50, metrics=error_rate)
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(8)
learn.unfreeze()
learn.fit_one_cycle(3, max_lr=slice(1e-6,1e-4))
save_texts(path_img/'models/classes.txt', data.classes)
trace_input = torch.ones(1,3,img_size,img_size).cuda()
jit_model = torch.jit.trace(learn.model.float(), trace_input)
model_file='resnet50_jit.pth'
output_path = str(path_img/f'models/{model_file}')
torch.jit.save(jit_model, output_path)
tar_file=path_img/'models/model.tar.gz'
classes_file='classes.txt'
with tarfile.open(tar_file, 'w:gz') as f:
f.add(path_img/f'models/{model_file}', arcname=model_file)
f.add(path_img/f'models/{classes_file}', arcname=classes_file)
s3 = boto3.resource('s3')
s3.meta.client.upload_file(str(tar_file), 'REPLACE_WITH_YOUR_BUCKET_NAME', 'fastai-models/lesson1/model.tar.gz')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Export model and upload to S3
Step2: Now we need to export the model in the PyTorch TorchScript format so we can load it into an AWS Lambda function (a loading sketch follows this list).
Step3: Next step is to create a tarfile of the exported classes file and model weights.
Step4: Now we need to upload the model tarball to S3.
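A minimal loading sketch for the Lambda side (an assumption, not shown in the original notebook; torch.jit.load is the standard counterpart of torch.jit.save):

import torch
model = torch.jit.load('resnet50_jit.pth', map_location='cpu')
model.eval()
with torch.no_grad():
    out = model(torch.ones(1, 3, 299, 299))  # same shape as the trace input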
|
11,681
|
<ASSISTANT_TASK:>
Python Code:
345
339 + 6
345 - 6
2.7 / 12.1
345 - 12/6
# Import all the procedures (functions) defined in the "operator" module
from operator import *
add(339, 6)
sub(345, truediv(12, 6))
mul(add(2,3), (sub(add(2,2), add(3,2))))
a = 13
3*a
add(a, add(a,a))
pi = 3.14159
raggio = 5
circonferenza = 2*pi*raggio
circonferenza
raggio = 10
circonferenza
who
quadrato = mul(3, 3)
quadrato
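# Caveat: x is only assigned further below, so the next cell raises a
# NameError if the cells are evaluated in order.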
power = mul(x,x)
print(power)
x = 7
def Quadrato(numero):
return mul(numero, numero)
Quadrato
who
Quadrato(532)
Quadrato(mul(3,2))  # note: lowercase quadrato is the number 9, not the function
def SommaQuadrati(x, y):
return add(Quadrato(x), Quadrato(y))
SommaQuadrati(4,3)
x
del x
def F(a):
return SommaQuadrati(add(a, 1), mul(a, 2))
F(5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simple numeric expressions can be combined using primitive procedures that represent the application of procedures to those numbers. For example
Step2: Note how in this case, for these simple numeric procedures corresponding to the arithmetic operators, a notation called infix is implicitly used. By importing the operator library it is possible to express the same expressions in prefix notation
Step3: We recommend reading the documentation of the operator library on the Python website. The main functions we will use in this notebook are
Step4: One of the advantages of prefix notation is that it always makes clear which operator/procedure is to be carried out, and to which data it is applied
Step5: Note how the previous expression would be clearer if written as
Step6: In this case we have a variable, which we named a, whose value is the number 13. At this point we can use the variable a as a numeric object
Step7: In this case the language interpreter first evaluated the expression 2*pi*raggio, and then assigned the value obtained from that evaluation to the variable named circonferenza.
Step8: The evaluation of compound expressions
Step9: To reach a higher level of abstraction we need a mechanism (a syntax of the language) for defining new procedures (functions). The syntax is the following (a short sketch of procedure composition follows this list)
Step10: At this point we can also define new procedures in terms of the procedure just defined, for example a new procedure called SommaQuadrati
Step11: EXAMPLE
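A minimal sketch of composing the procedures defined above (Cubo is a hypothetical name, not in the original notebook; it assumes the mul and Quadrato definitions from the code):

def Cubo(numero):
    return mul(numero, Quadrato(numero))

Cubo(3)  # 27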
|
11,682
|
<ASSISTANT_TASK:>
Python Code:
# 3 x 3 filter shape
filter1 = [
[.1, .1, .2],
[.1, .1, .2],
[.2, .2, .2],
]
# Each filter only has one input channel (grey scale)
# 3 x 3 x 1
channel_filters1 = [filter1]
# We want to output 2 channels which requires another set of 3 x 3 x 1
filter2 = [
[.9, .5, .9],
[.5, .3, .5],
[.9, .5, .9],
]
channel_filters2 = [filter2]
# Initialized Weights
# 3 x 3 x 1 x 2
convolution_layer1 = [channel_filters1, channel_filters2]
print(convolution_layer1[0][0][2][0])
for filters in convolution_layer1:
for channel_filter in filters:
for row in channel_filter:
print(row)
print()
biases_1 = [0.1, 0.1]
# Number of pixels to shift want evaluating a filter
stride_1 = 1
# Transpose to match inputs
W1 = tf.Variable(np.transpose(convolution_layer1), dtype=tf.float32)
B1 = tf.Variable(biases_1, dtype=tf.float32)
print(W1.shape)
stride_shape = [1, stride_1, stride_1, 1]
preactivation = tf.nn.conv2d(X, W1, strides=stride_shape, padding='SAME') + B1
activation_1 = tf.nn.relu(preactivation)
print(activation_1.shape)
# Create a session
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
x = sess.run(W1)
# Transpose to match our model
print(np.transpose(x))
# Transpose to match our model
feed_dict = {X: input_x}
Y1 = activation_1.eval(session=sess, feed_dict=feed_dict)
print(np.round_(np.transpose(Y1), 1))
init_2 = tf.truncated_normal([4, 4, 2, 4], stddev=0.1)
W2 = tf.Variable(init_2)
B2 = tf.Variable(tf.ones([4])/10)
stride_2 = 2
strides = [1, stride_2, stride_2, 1]
preactivate = tf.nn.conv2d(activation_1, W2, strides=strides, padding='SAME') + B2
activation_2 = tf.nn.relu(preactivate)
print(activation_2.shape)
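# Hedged aside (not part of the original): with 'SAME' padding the output
# spatial size is ceil(input_size / stride); e.g. a 4x4 input with stride 2
# yields 2x2, which is how the printed shapes above can be checked by hand.
import math
print(math.ceil(4 / 2))  # -> 2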
# reshape the output from the second convolution for the fully connected layer
reduced = int(np.multiply.reduce(list(activation_2.shape[1:])))
re_shape = [-1, reduced]
fully_connected_input = tf.reshape(activation_2, shape=re_shape)
print(fully_connected_input.shape)
fully_connected_nodes = 6
fc_w_init = tf.truncated_normal([reduced, fully_connected_nodes], stddev=0.1)
fully_connected_weights = tf.Variable(fc_w_init)
fc_b_init = tf.ones([fully_connected_nodes])/10
fully_connected_biases = tf.Variable(fc_b_init)
preactivate = tf.matmul(fully_connected_input, fully_connected_weights) + fully_connected_biases
fully_connected_activate = tf.nn.relu(preactivate)
print(fully_connected_activate.shape)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bias Shape
Step2: Convolutional Layers
Step3: Activation Shape
Step4: Activation2 Shape
Step5: Fully Connected Layer
|
11,683
|
<ASSISTANT_TASK:>
Python Code:
print("Exemplo 3.3")
import numpy as np
from sympy import *
Vsource = 2
Csource1 = 2
Csource2 = 7
R1 = 2
R2 = 4
R3 = 10
#i1 = v1/R1 = v1/2
#i2 = v2/R2 = v2/4
#i1 + i2 + 7 = 2 => i1 + i2 = -5
#v2 - v1 = 2
#v1/2 + v2/4 = -5 => (v2 - 2)/2 + v2/4 = - 5
#3v2/4 = -4
v2 = -16/3
v1 = v2 - 2
print("V1:", v1, "V")
print("V2:", v2, "V")
print("Problema Prático 3.3")
Vsource1 = 14
Vsource2 = 6
#v2 - v = 6
#i1 = i2 + i + i3
#i1 = (14 - v)/4
#i2 = v/3
#i = v2/2
#i3 = v2/6
#7/2 - v/4 = v/3 + 3 + v/2 + 1 + v/6 => 13v/12
v = (-1/2)*12/13
v2 = v + 6
i = v2/2
print("Valor de v:",v,"V")
print("Valor de i:",i,"A")
print("Exemplo 3.4")
import numpy as np
R1 = 2
R2 = 6
R3 = 4
R4 = 1
Rx = 3
#i1 = v1/R1 = v1/2
#i2 = (v2 - v3)/R2 = (v2 - v3)/6
#i3 = v3/R3 = v3/4
#i4 = v4/R4 = v4
#ix = vx/Rx = vx/3
#i1 + i2 + ix = 10
#i2 + ix = i3 + i4
#(v1 - v2) = 20
#(v3 - v4) = 3vx
#(v1 - v4) = vx
#(v2 - v3) = vx - 3vx - 20 = -2vx - 20
#v1/2 + (-2vx - 20)/6 + vx/3 = 10 => v1/2 = 40/3
v1 = 80/3
v2 = v1 - 20
#v3 - v4 -3vx = 0
#-v4 - vx = -80/3
#-3v4 -3vx = -80
#-v3 + 2vx = - 80/3
#-3v3 + 6vx = -80
#i2 + ix = i3 + i4
#=> (v2 - v3)/6 + vx/3 = v3/4 + v4
#=> -5v3/12 -v4 + vx/3 = -10/9
#=> -15v3 -36v4 + 12vx = -40
coef = np.matrix('1 -1 -3;0 -3 -3;-15 -36 12')
res = np.matrix('0;-80;-40')
V = np.linalg.inv(coef)*res
#10/9 - (20/3 + 2vx + 20)/6 + vx/3 = (20/3 + 2vx + 20)/4 + 80/3 - vx
#7vx/6 = -10/3 + 5/3 + 5 + 80/3
#7vx/6 = 30
vx = 180/7
v4 = v1 - vx
v3 = v2 + 2*vx + 20
print("V1:", v1, "V")
print("V2:", v2, "V")
print("V3:", float(V[0]), "V")
print("V4:", float(V[1]), "V")
print("Vx:", float(V[2]), "V")
print("Problema Prático 3.4")
#i = v1/2
#i2 = v2/4
#i3 = v3/3
#i4 = (v1 - v3)/6
#(v1 - v3) = 25 - 5i
#(v1 - v3) = 25 - 5v1/2
#7v1/2 - v3 = 25
#7v1 - 2v3 = 50
#(v1 - v2) = 25
#(v3 - v2) = 5i = 5v1/2
#-5v1/2 -v2 + v3 = 0
#-5v1 -2v2 + 2v3 = 0
# rearranging the equations
#7v1 - 2v3 = 50
#v1 - v2 = 25
#-5v1 -2v2 + 2v3 = 0
#i + i2 + i4 = 0
#=> v1/2 + v2/4 + (v1 - v3)/6 = 0
#=>2v1/3 + v2/4 - v3/6 = 0
#=> 8v1 + 3v2 - 2v3 = 0
#i2 + i3 = i4
#=> v2/4 + v3/3 = (v1 - v3)/6
#=>-v1/6 + v2/4 + v3/3 = 0
#=> -2v1 + 3v2 + 4v3 = 0
#i + i2 + i3 = 0
#=>v1/2 + v2/4 + v3/3 = 0
#=>6v1 + 3v2 + 4v3 = 0
coef = np.matrix('1 -1 0;6 3 4;-5 -2 2')
res = np.matrix('25; 0; 0')
V = np.linalg.inv(coef)*res
print("Valor de v1:",float(V[0]),"V")
print("Valor de v2:",float(V[1]),"V")
print("Valor de v3:",float(V[2]),"V")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Practice Problem 3.3
Step2: Example 3.4
Step3: Practice Problem 3.4
|
11,684
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from models import linear_model, logistic_model, log_cost, log_cost_dev, gd_update
from models import binary_confusion_matrix, std_normalize, binary_accuracy, create_parameters, data_normalize
from sklearn.model_selection import train_test_split
%matplotlib inline
df = pd.read_csv('./data/iris.csv')
df = df.reindex(np.random.permutation(df.index))
df.info()
df['IsSetosa'] = df['Species'].apply(lambda a: 1.0 if a=='Iris-setosa' else 0)
data = df[['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm', 'IsSetosa']]
data.head()
train, test = train_test_split(data, test_size=0.2)
train_X = np.array(train[['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm']])
train_y = np.array(train[['IsSetosa']])
np.mean(train_X, axis=0)
train_stds, train_means = std_normalize(train_X)
np.mean(train_X, axis=0)
np.std(train_X, axis=0)
feature_size = train_X.shape[1]
sample_count = train_X.shape[0]
W, b = create_parameters(feature_size)
threshold = 0.5
lr = 0.01
for epoch in range(0, 1000):
h = logistic_model(train_X, W, b)
dW, db = log_cost_dev(train_X, train_y, h)
W, b = gd_update(W, b, dW, db, lr)
if (epoch + 1) % 100 == 0:
cur_cost = log_cost(h, train_y)
conf = binary_confusion_matrix(h, train_y, threshold=threshold)
print('epoch: {0}, cost: {1}, conf: {2}'.format(epoch + 1, cur_cost, conf))
predictions = logistic_model(train_X, W, b)
final_cost = log_cost(predictions, train_y)
conf = binary_confusion_matrix(predictions, train_y, threshold=threshold)
print('training finished!')
print('final cost: {0}, conf: {1}'.format(final_cost, conf))
test_X = np.array(test[['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm']])
test_y = np.array(test[['IsSetosa']])
data_normalize(test_X, train_stds, train_means)
test_h = logistic_model(test_X, W, b)
test_cost = log_cost(test_h, test_y)
test_conf = binary_confusion_matrix(test_h, test_y, threshold=threshold)
print('test cost: {0}, conf: {1}'.format(test_cost, test_conf))
df['Species'].unique()
df['IsSetosa'] = df['Species'].apply(lambda a: 1.0 if a=='Iris-setosa' else 0)
df['IsVericolor'] = df['Species'].apply(lambda a: 1.0 if a=='Iris-versicolor' else 0)
df['IsVirginica'] = df['Species'].apply(lambda a: 1.0 if a=='Iris-virginica' else 0)
data = df[['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm', 'IsSetosa', 'IsVericolor', 'IsVirginica']]
train, test = train_test_split(data, test_size=0.2)
train_X = np.array(train[['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm']])
train_y0 = np.array(train[['IsSetosa']])
train_y1 = np.array(train[['IsVericolor']])
train_y2 = np.array(train[['IsVirginica']])
train_y_all = np.array(train[['IsSetosa', 'IsVericolor', 'IsVirginica']])
test_X = np.array(test[['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm']])
test_y_all = np.array(test[['IsSetosa', 'IsVericolor', 'IsVirginica']])
x_means, x_stds = std_normalize(train_X)
data_normalize(test_X, x_means, x_stds)
def train_lr_classifier(X, y, lr=0.01, threshold=0.5, epochs=1000, step_size=100):
feature_size = X.shape[1]
sample_count = y.shape[0]
W, b = create_parameters(feature_size)
for epoch in range(0, epochs):
h = logistic_model(X, W, b)
dW, db = log_cost_dev(X, y, h)
W, b = gd_update(W, b, dW, db, lr)
if (epoch + 1) % step_size == 0:
cur_cost = log_cost(h, y)
conf = binary_confusion_matrix(h, y, threshold=threshold)
print('epoch: {0}, cost: {1}, conf: {2}'.format(epoch + 1, cur_cost, conf))
predictions = logistic_model(X, W, b)
final_cost = log_cost(predictions, y)
conf = binary_confusion_matrix(predictions, y, threshold=threshold)
print('training finished!')
print('final cost: {0}, conf: {1}'.format(final_cost, conf))
return W, b
m0 = train_lr_classifier(train_X, train_y0, lr=0.01, threshold=0.5)
m1 = train_lr_classifier(train_X, train_y1, lr=0.01, threshold=0.5, epochs=50000, step_size=10000)
m2 = train_lr_classifier(train_X, train_y2, lr=0.01, threshold=0.5, epochs=50000, step_size=10000)
import models as ml
feature_size = train_X.shape[1]
sample_count = train_X.shape[0]
class_count = train_y_all.shape[1]
W, b = ml.create_parameters(feature_size, class_count)
for epoch in range(0, 100000):
h = ml.softmax_regression_model(train_X, W, b)
dW, db = ml.crossentropy_cost_dev(train_X, train_y_all, h)
W, b = ml.gd_update(W, b, dW, db, lr=0.01)
if (epoch + 1) % 10000 == 0:
cur_cost = ml.crossentropy_cost(h, train_y_all)
cur_acc = ml.categorical_accuracy(h, train_y_all)
print('epoch: {0}, cost: {1}, acc: {2}'.format(epoch + 1, cur_cost, cur_acc))
predictions = ml.softmax_regression_model(train_X, W, b)
final_cost = ml.crossentropy_cost(predictions, train_y_all)
final_acc = ml.categorical_accuracy(predictions, train_y_all)
print('training finished!')
print('train cost: {0}, acc: {1}'.format(final_cost, final_acc))
test_h = ml.softmax_regression_model(test_X, W, b)
test_cost = ml.crossentropy_cost(test_h, test_y_all)
test_acc = ml.categorical_accuracy(test_h, test_y_all)
print('test cost: {0}, acc: {1}'.format(test_cost, test_acc))
np.argmax(ml.softmax_regression_model(train_X[0:4], W, b), axis=1)
np.argmax(train_y_all[0:4], axis=1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1), prepare data
Step2: 2), train
Step3: 3). try test data
Step4: So far this only separates Setosa from the rest; we want to generalize binary classification to multiple classes (a minimal numpy softmax sketch follows this list)
Step5: 2), define some utils
Step6: Classify multi-class with Softmax
Step7: let's take a look
|
11,685
|
<ASSISTANT_TASK:>
Python Code:
from mayavi import mlab  # mlab API used by every call below
# try one example, figure is created by default
mlab.test_molecule()
# clear the figure then load another example
mlab.clf()
mlab.test_flow_anim()
# create a new figure
mlab.figure('mesh_example', bgcolor=(0,0,0,))
mlab.test_surf()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Mayavi has some very useful interactive controls that can be accessed from the GUI. This includes the ability to record changes to parameters.
|
11,686
|
<ASSISTANT_TASK:>
Python Code:
import json #for reading oauth info and save the results
import io
from googleplaces import GooglePlaces, types, lang
from pprint import pprint
with io.open('google_places_key.json') as cred:
creds = json.load(cred)
google_places = GooglePlaces(**creds)
query_result = google_places.nearby_search(
lat_lng = {'lat': 42.3555885, 'lng': -71.0646816}, rankby = 'distance', types = [types.TYPE_FOOD])
if query_result.raw_response:
print 'status: ' + query_result.raw_response['status']
print 'next_page_token: ' + query_result.raw_response['next_page_token']
print 'number of results: ' + str(len(query_result.raw_response['results']))
for place in query_result.places:
pprint(vars(place)) #only get geo_location, icon, id, name, place_id, rating, types, vicinty
# The following method has to make a further API call.
place.get_details() #get more details including phone_number, opening_hours, photos, reviews ... etc
pprint(vars(place))
break #Here I break when we finish the first place since 20 reesults are too long.
results = []
#Put your latitude and longitude pairs in the list and run the search in turns
lat_lng_list = [{'lat': 42.356357, 'lng': -71.0623345}, #Park Street Station
                {'lat': 42.356357, 'lng': -71.0623345}, #China Town Station
                {'lat': 42.3555885, 'lng': -71.0646816}] #Downtown Crossing Station
for pair in lat_lng_list:
query_result = google_places.nearby_search(
lat_lng = pair, rankby = 'distance', types = [types.TYPE_FOOD])
for place in query_result.places:
place.get_details()
tmp = vars(place)
results.append(tmp)
with open('my_boston_restaurants_google_places.json', 'wb') as f:
results_json = json.dumps(results, indent=4, skipkeys=True, sort_keys=True)
f.write(results_json)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get data from API
Step2: Then we check whether we got any results from the API and print some information to the screen.
Step3: The response from the API above contains a lot of information
Step4: Scrape the data and save it to json files
|
11,687
|
<ASSISTANT_TASK:>
Python Code:
st = 'Print only the words that start with s in this sentence'
for word in st.split():
if word[0] == 's':
print word
range(0,11,2)
[x for x in range(1,50) if x%3 == 0]
st = 'Print every word in this sentence that has an even number of letters'
for word in st.split():
if len(word)%2 == 0:
print word+" <-- has an even length!"
for num in xrange(1,101):
if num % 5 == 0 and num % 3 == 0:
print "FizzBuzz"
elif num % 3 == 0:
print "Fizz"
elif num % 5 == 0:
print "Buzz"
else:
print num
st = 'Create a list of the first letters of every word in this string'
[word[0] for word in st.split()]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use range() to print all the even numbers from 0 to 10.
Step2: Use List comprehension to create a list of all numbers between 1 and 50 that are divisble by 3.
Step3: Go through the string below and if the length of a word is even print "even!"
Step4: Write a program that prints the integers from 1 to 100. But for multiples of three print "Fizz" instead of the number, and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz".
Step5: Use List Comprehension to create a list of the first letters of every word in the string below
|
11,688
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nuist', 'sandbox-3', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
11,689
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from sklearn.linear_model import LinearRegression
# Create artificial data
X = np.array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
Y = np.array([-1.1, 4, 1, 6, 4, 2, 8, 5, 12, 7])
# Create a model
lrm = LinearRegression()
lrm.fit(X.reshape(-1, 1), Y)
model_line = lrm.intercept_ + X * lrm.coef_[0]
fig, ax = plt.subplots(figsize=(8,4))
# Plot the data
ax.plot(X, Y, 'go')
# Plot the regression line
ax.plot(X, model_line, 'bo-')  # fitted regression line
# Plot the residuals
ax.vlines(x=X, ymin=np.minimum(Y, model_line), ymax=np.maximum(Y, model_line), color='red')
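# Explicit check (an addition, not in the original): the sum of squared
# residuals is the quantity the least-squares fit above minimises.
print("Sum of squared residuals:", np.sum((Y - model_line) ** 2))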
from sklearn.datasets import load_boston
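# Note: load_boston is deprecated in recent scikit-learn releases (removed in
# 1.2); this notebook assumes an older version where it is still available.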
boston = load_boston()
boston.keys()
print(boston['DESCR'])
df = pd.DataFrame(data=boston.data, columns=boston['feature_names'])
# Add a column for the target feature
df['MEDV'] = boston['target']
df.head()
df.info()
df.describe().transpose()
df.isnull().any()
sns.heatmap(df.isnull(), cbar=False, yticklabels=False)
# Features with number of unique values
unq = {
column : df[column].nunique()
for column in df.columns
}
unq
import operator
sorted(unq.items(), key=operator.itemgetter(1))
sorted(df['RAD'].unique())
df['CHAS'] = df['CHAS'].astype('category')
df['RAD'] = df['RAD'].astype('category')
df.info()
from scipy import stats
stats.pointbiserialr(df['CHAS'], df['MEDV'])
plt.figure(figsize = (10,6))
sns.heatmap(df.corr(), annot=True, linewidths=1)
sns.lmplot(data=df, x='MEDV', y='RM')
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
X = df['RM']
y = df['MEDV']
# Partition the data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
lrm = LinearRegression()
lrm.fit(X_train.values.reshape(-1, 1), y_train)
intercept = lrm.intercept_
coefficient = lrm.coef_[0]
print("Intercept: {0}\nCoefficient: {1}".format(intercept, coefficient))
print('A unit increase in the RM feature is associated with a {0} units increase in the house price.'.format(coefficient))
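# Worked example (hypothetical house, not a row from the dataset): use the
# fitted line directly to predict the median price for a home with 6 rooms.
rooms = 6.0
print('Predicted MEDV for RM = 6:', intercept + coefficient * rooms)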
predicted = lrm.predict(X_test.values.reshape(-1, 1))
fig, ax = plt.subplots()
ax.scatter(y_test, predicted)
ax.plot([y.min(), y.max()], [y.min(), y.max()], 'k--', linewidth=3)
ax.set_xlabel('Observed')
ax.set_ylabel('Predicted')
sns.distplot(y_test - predicted);
from sklearn import metrics
print('Mean Absolute Error: ', metrics.mean_absolute_error(y_test, predicted))
print('Median Absolute Error: ', metrics.median_absolute_error(y_test, predicted))
print('Explained Variance Score: ', metrics.explained_variance_score(y_test, predicted))
print('Coefficient of Determination (R^2): ', metrics.r2_score(y_test, predicted))
print('Mean Squared Error: ', metrics.mean_squared_error(y_test, predicted))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, predicted)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simple Linear Regression
Step2: The green dots represent our artificial data, i.e. the observed data. The red vertical lines indicate the errors, or residuals. The blue line represents the regression line that best fits the observed data. This line is found by minimising the sum of squared residuals.
Step3: Check data distributions.
Step4: We can get check if we have missing values in the dataset
Step5: No missing data. If any of our features contained missing values, we could use a heatmap to get a quick overview of where there missing data are and a rough picture on how much data is missing.
Step6: Categorical features
Step7: We see that CHAS is a Boolean feature. RAD (index of accessibility to radial highways) is also categorical since it has 9 distinct values. Let us fix them
Step8: Since CHAS is a binary feature, we can use the point-biserial correlation to measure its association with the target.
Step9: Feature Selection
Step10: There is a strong correlation between our target feature MEDV (the house price) and RM, which is the average number of rooms per house. There are also negative correlations with LSTAT and PTRATIO. This means that as one of these variables (LSTAT or PTRATIO) increases, our dependent variable (MEDV) tends to decrease, and vice versa.
Step11: From the plot, it looks like there is a good linear fit since the error bars are not that big.
Step12: It is straightforward to interpret the coefficient
Step13: Generate Predictions
Step14: Let us check how far off the predicted values are from the observed values. We can visualise this via scatter plot. We can also plot a dashed line that indicates where perfect prediction values would lie.
Step15: Create a histogram of our residuals/errors. If the residuals follow a normal distribution, then we have selected a correct model for the data set. Otherwise, we may have to reconsider the choice of linear regression model.
Step16: Evaluate the Model
|
11,690
|
<ASSISTANT_TASK:>
Python Code:
# Import the necessary packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import LeaveOneOut
from sklearn import linear_model, neighbors
%matplotlib inline
plt.style.use('ggplot')
# Where to save the figures
PROJECT_ROOT_DIR = ".."
datapath = PROJECT_ROOT_DIR + "/data/lifesat/"
# Download CSV from http://stats.oecd.org/index.aspx?DataSetCode=BLI
oecd_bli = pd.read_csv(datapath+"oecd_bli_2015.csv", thousands=',')
oecd_bli = oecd_bli[oecd_bli["INEQUALITY"]=="TOT"]
oecd_bli = oecd_bli.pivot(index="Country", columns="Indicator", values="Value")
oecd_bli.columns
oecd_bli["Life satisfaction"].head()
# Load and prepare GDP per capita data
# Download data from http://goo.gl/j1MSKe (=> imf.org)
gdp_per_capita = pd.read_csv(datapath+"gdp_per_capita.csv", thousands=',', delimiter='\t',
encoding='latin1', na_values="n/a")
gdp_per_capita.rename(columns={"2015": "GDP per capita"}, inplace=True)
gdp_per_capita.set_index("Country", inplace=True)
full_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita, left_index=True, right_index=True)
full_country_stats.sort_values(by="GDP per capita", inplace=True)
_ = full_country_stats.plot("GDP per capita",'Life satisfaction',kind='scatter')
xvars = ['Self-reported health','Water quality','Quality of support network','GDP per capita']
X = np.array(full_country_stats[xvars])
y = np.array(full_country_stats['Life satisfaction'])
def loo_risk(X,y,regmod):
    """Construct the leave-one-out square error risk for a regression model.

    Input: design matrix, X, response vector, y, a regression model, regmod
    Output: scalar LOO risk
    """
loo = LeaveOneOut()
loo_losses = []
for train_index, test_index in loo.split(X):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
regmod.fit(X_train,y_train)
y_hat = regmod.predict(X_test)
loss = np.sum((y_hat - y_test)**2)
loo_losses.append(loss)
return np.mean(loo_losses)
def emp_risk(X,y,regmod):
    """Return the empirical risk for square error loss.

    Input: design matrix, X, response vector, y, a regression model, regmod
    Output: scalar empirical risk
    """
regmod.fit(X,y)
y_hat = regmod.predict(X)
return np.mean((y_hat - y)**2)
lin1 = linear_model.LinearRegression(fit_intercept=False)
print('LOO Risk: '+ str(loo_risk(X,y,lin1)))
print('Emp Risk: ' + str(emp_risk(X,y,lin1)))
# knn = neighbors.KNeighborsRegressor(n_neighbors=5)
LOOs = []
MSEs = []
K=30
Ks = range(1,K+1)
for k in Ks:
knn = neighbors.KNeighborsRegressor(n_neighbors=k)
LOOs.append(loo_risk(X,y,knn))
MSEs.append(emp_risk(X,y,knn))
plt.plot(Ks,LOOs,'r',label="LOO risk")
plt.title("Risks for kNN Regression")
plt.plot(Ks,MSEs,'b',label="Emp risk")
plt.legend()
_ = plt.xlabel('k')
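# Small follow-up (an addition): report which k minimises the LOO risk
# computed in the loop above.
best_k = Ks[int(np.argmin(LOOs))]
print('k with the lowest LOO risk:', best_k, '->', min(LOOs))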
X1 = np.array(full_country_stats[['Self-reported health']])
LOOs = []
MSEs = []
K=30
Ks = range(1,K+1)
for k in Ks:
knn = neighbors.KNeighborsRegressor(n_neighbors=k)
LOOs.append(loo_risk(X1,y,knn))
MSEs.append(emp_risk(X1,y,knn))
plt.plot(Ks,LOOs,'r',label="LOO risk")
plt.title("Risks for kNN Regression")
plt.plot(Ks,MSEs,'b',label="Emp risk")
plt.legend()
_ = plt.xlabel('k')
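# Hedged comparison (an addition): fit linear regression on the same single
# feature so its LOO risk is directly comparable to the kNN curve above.
lin_single = linear_model.LinearRegression()
print('Linear LOO risk (Self-reported health only):', loo_risk(X1, y, lin_single))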
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare data
Step2: Here's the full dataset, and there are other columns. I will subselect a few of them by hand.
Step5: I will define the following functions to expedite the LOO risk and the Empirical risk.
Step6: As you can see, the empirical risk is much less than the leave-one-out risk! This can happen in more dimensions.
Step7: Exercise 1 For each k from 1 to 30 compute the nearest neighbors empirical risk and LOO risk. Plot these as a function of k and reflect on the bias-variance tradeoff here. (Hint
Step8: I decided to see what the performance is for k from 1 to 30. We see that the bias does not dominate until k exceeds 17, the performance is somewhat better for k around 12. This demonstrates that you can't trust the Empirical risk, since it includes the training sample. We can compare this LOO risk to that of linear regression (0.348) and see that it outperforms linear regression.
|
11,691
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import statsmodels.api as sm
data = """
x y y_err
201 592 61
244 401 25
47 583 38
287 402 15
203 495 21
58 173 15
210 479 27
202 504 14
198 510 30
158 416 16
165 393 14
201 442 25
157 317 52
131 311 16
166 400 34
160 337 31
186 423 42
125 334 26
218 533 16
146 344 22
"""
try:
from StringIO import StringIO
except ImportError:
from io import StringIO
data = pd.read_csv(StringIO(data), delim_whitespace=True).astype(float)
# Note: for the results we compare with the paper here, they drop the first four points
data.head()
exog = sm.add_constant(data['x'])
endog = data['y']
weights = 1. / (data['y_err'] ** 2)
wls = sm.WLS(endog, exog, weights)
results = wls.fit(cov_type='fixed scale')
print(results.summary())
# You can use `scipy.optimize.curve_fit` to get the best-fit parameters and parameter errors.
from scipy.optimize import curve_fit
def f(x, a, b):
return a * x + b
xdata = data['x']
ydata = data['y']
p0 = [0, 0] # initial parameter estimate
sigma = data['y_err']
popt, pcov = curve_fit(f, xdata, ydata, p0, sigma, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))
print('a = {0:10.3f} +- {1:10.3f}'.format(popt[0], perr[0]))
print('b = {0:10.3f} +- {1:10.3f}'.format(popt[1], perr[1]))
# You can also use `scipy.optimize.minimize` and write your own cost function.
# This doesn't give you the parameter errors though ... you'd have
# to estimate the HESSE matrix separately ...
from scipy.optimize import minimize
def chi2(pars):
    """Cost function."""
y_model = pars[0] * data['x'] + pars[1]
chi = (data['y'] - y_model) / data['y_err']
return np.sum(chi ** 2)
result = minimize(fun=chi2, x0=[0, 0])
popt = result.x
print('a = {0:10.3f}'.format(popt[0]))
print('b = {0:10.3f}'.format(popt[1]))
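# Sketch of the missing error estimate (an addition; assumes chi2 is smooth
# near its minimum): the parameter covariance is the inverse Hessian of
# chi2/2, approximated here with central finite differences.
def hessian_fd(f, p, eps=1e-4):
    p = np.asarray(p, dtype=float)
    n = len(p)
    hess = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            pp = p.copy(); pp[i] += eps; pp[j] += eps
            pm = p.copy(); pm[i] += eps; pm[j] -= eps
            mp = p.copy(); mp[i] -= eps; mp[j] += eps
            mm = p.copy(); mm[i] -= eps; mm[j] -= eps
            hess[i, j] = (f(pp) - f(pm) - f(mp) + f(mm)) / (4 * eps ** 2)
    return hess

cov = np.linalg.inv(hessian_fd(lambda p: 0.5 * chi2(p), popt))
perr = np.sqrt(np.diag(cov))
print('a = {0:10.3f} +- {1:10.3f}'.format(popt[0], perr[0]))
print('b = {0:10.3f} +- {1:10.3f}'.format(popt[1], perr[1]))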
# TODO: we could use the examples from here:
# http://probfit.readthedocs.org/en/latest/api.html#probfit.costfunc.Chi2Regression
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Linear models
Step3: To fit a straight line use the weighted least squares class WLS ... the parameters are called `const` (intercept) and `x` (slope) in the results summary
Step4: Check against scipy.optimize.curve_fit
Step6: Check against self-written cost function
Step7: Non-linear models
|
11,692
|
<ASSISTANT_TASK:>
Python Code:
%%capture
!pip install pandas sklearn auto-sklearn kubeflow-fairing grpcio kubeflow.metadata bentoml plotly fbprophet
import uuid
from importlib import reload
import grpc
from kubeflow import fairing
from kubeflow.fairing import constants
import os
import pandas as pd
import logging
logging.basicConfig(level=logging.WARN)
logger = logging.getLogger(__name__)
# The docker registry to store images in
DOCKER_REGISTRY = "iancoffey"
# The k8s namespace to run the experiment in
k8s_namespace = "default"
# Use local bentoml storage
!bentoml config set yatai_service.url=""
from kubernetes import utils as k8s_utils
from kubernetes import client as k8s_client
from kubernetes import config as k8s_config
from kubeflow.fairing.utils import is_running_in_k8s
from kubeflow.fairing.cloud.k8s import MinioUploader
from kubeflow.fairing.builders.cluster.minio_context import MinioContextSource
if is_running_in_k8s():
k8s_config.load_incluster_config()
else:
k8s_config.load_kube_config()
api_client = k8s_client.CoreV1Api()
minio_service_endpoint = api_client.read_namespaced_service(name='minio-service', namespace='default').spec.cluster_ip
minio_endpoint = "http://"+minio_service_endpoint+":9000"
minio_username = "minio"
minio_key = "minio123"
minio_region = "us-east-1"
minio_uploader = MinioUploader(endpoint_url=minio_endpoint, minio_secret=minio_username, minio_secret_key=minio_key, region_name=minio_region)
minio_context_source = MinioContextSource(endpoint_url=minio_endpoint, minio_secret=minio_username, minio_secret_key=minio_key, region_name=minio_region)
minio_endpoint
# fairing:include-cell
data_path="covid_19_data.csv"
dframe = pd.read_csv(data_path, sep=',')
cols_of_interest = ['Confirmed', 'Province/State', 'ObservationDate']
dframe['ObservationDate'] = pd.to_datetime(dframe['ObservationDate'])
dframe.sort_index(inplace=True)
trimmed_dframe=dframe[cols_of_interest]
trimmed_dframe=trimmed_dframe.dropna()
# Note the copy() here - else we would be working on a reference
state_data = trimmed_dframe.loc[trimmed_dframe['Province/State'] == 'New York'].copy()
state_data = state_data.drop('Province/State', axis=1).sort_index()
state_data.rename(columns={'Confirmed': 'y', 'ObservationDate': 'ds'}, inplace=True)
state_data.head()
color_pal = ["#F8766D", "#D39200", "#93AA00",
"#00BA38", "#00C19F", "#00B9E3",
"#619CFF", "#DB72FB"]
_ = state_data.plot(x ='ds', y='y', kind='scatter', figsize=(15,5), title="Raw Covid19 Dataset")
split_date = "2020-05-15"
train_data = state_data[state_data['ds'] <= split_date].copy()
test_data = state_data[state_data['ds'] > split_date].copy()
len(state_data)
# Python
import pandas as pd
from fbprophet import Prophet
m = Prophet()
m.fit(train_data)
future = m.make_future_dataframe(periods=10)
future.tail()
import numpy as np
forecast = m.predict(future)
print(forecast['yhat'].tail())
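# Quick sanity check (an addition; assumes the 10-day forecast horizon
# overlaps the held-out dates): mean absolute error of yhat on the test split.
merged = forecast.merge(test_data, on='ds', how='inner')
if len(merged):
    print('MAE on held-out dates:', np.abs(merged['yhat'] - merged['y']).mean())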
fig1 = m.plot(forecast, figsize = (15, 10))
for c in forecast.columns.sort_values():
print(c)
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid', {'axes.facecolor': '.9'})
sns.set_palette(palette='deep')
sns_c = sns.color_palette(palette='deep')
threshold_date = pd.to_datetime(split_date)
fig, ax = plt.subplots(figsize = (15, 10))
sns.lineplot(x='ds', y='y', label='y_train', data=train_data, ax=ax)
sns.lineplot(x='ds', y='y', label='y_test', data=test_data, ax=ax)
sns.lineplot(x='ds', y='trend', data=forecast, ax=ax)
ax.axvline(threshold_date, color=sns_c[3], linestyle='--', label='train test split')
ax.legend(loc='upper left')
ax.set(title='Confirmed Cases', ylabel='');
%%writefile prophet_serve.py
import bentoml
from bentoml.handlers import DataframeHandler
from bentoml.artifact import PickleArtifact
import fbprophet
@bentoml.artifacts([PickleArtifact('model')])
@bentoml.env(pip_dependencies=['fbprophet'])
class ProphetServe(bentoml.BentoService):
@bentoml.api(DataframeHandler)
def predict(self, df):
return self.artifacts.model.predict(df)
import prophet_serve
import importlib
importlib.reload(prophet_serve)
from prophet_serve import ProphetServe
bento_service = ProphetServe()
bento_service.pack('model', m)
saved_path = bento_service.save()
!bentoml get ProphetServe
!bentoml get ProphetServe:20200626124946_94E973
!bentoml run ProphetServe:20200626124946_94E973 predict --input '{"ds":["2021-07-14"]}'
!ls /home/jovyan/bentoml/repository/ProphetServe/20200626124946_94E973
# Let build a docker image with builder using bentoml output
from kubeflow.fairing.preprocessors.base import BasePreProcessor
output_map = {
"/home/jovyan/bentoml/repository/ProphetServe/20200626124946_94E973/Dockerfile": "Dockerfile",
"/home/jovyan/bentoml/repository/ProphetServe/20200626124946_94E973/environment.yml": "environment.yml",
"/home/jovyan/bentoml/repository/ProphetServe/20200626124946_94E973/requirements.txt": "requirements.txt",
"/home/jovyan/bentoml/repository/ProphetServe/20200626124946_94E973/setup.py": "setup.py",
"/home/jovyan/bentoml/repository/ProphetServe/20200626124946_94E973/bentoml-init.sh": "bentoml-init.sh",
"/home/jovyan/bentoml/repository/ProphetServe/20200626124946_94E973/bentoml.yml": "bentoml.yml",
"/home/jovyan/bentoml/repository/ProphetServe/20200626124946_94E973/ProphetServe/": "ProphetServe/",
"/home/jovyan/bentoml/repository/ProphetServe/20200626124946_94E973/ProphetServe/prophet_serve.py": "ProphetServe/prophet_serve.py",
"/home/jovyan/bentoml/repository/ProphetServe/20200626124946_94E973/ProphetServe/artifacts/model.pkl": "ProphetServe/artifacts/model.pkl",
"/home/jovyan/bentoml/repository/ProphetServe/20200626124946_94E973/docker-entrypoint.sh": "docker-entrypoint.sh",
}
preprocessor = BasePreProcessor(output_map=output_map)
preprocessor.preprocess()
from kubeflow.fairing.builders import cluster
from kubeflow.fairing import constants
constants.constants.KANIKO_IMAGE = "gcr.io/kaniko-project/executor:v0.22.0"
cluster_builder = cluster.cluster.ClusterBuilder(registry=DOCKER_REGISTRY,
preprocessor=preprocessor,
dockerfile_path="Dockerfile",
context_source=minio_context_source)
print(cluster_builder.build())
from kfserving import V1alpha2EndpointSpec, V1alpha2PredictorSpec, V1alpha2InferenceServiceSpec, V1alpha2InferenceService, V1alpha2CustomSpec
from kfserving import KFServingClient
from kfserving import constants
containerSpec = k8s_client.V1Container(
name="prophet-model-api-container",
image=cluster_builder.image_tag,
ports=[k8s_client.V1ContainerPort(container_port=5000)])
default_custom_model_spec = V1alpha2EndpointSpec(predictor=V1alpha2PredictorSpec(custom=V1alpha2CustomSpec(container=containerSpec)))
metadata = k8s_client.V1ObjectMeta(
name="prophet-model-api", namespace="default",
)
isvc = V1alpha2InferenceService(api_version=constants.KFSERVING_GROUP + '/' + constants.KFSERVING_VERSION,
kind=constants.KFSERVING_KIND,
metadata=metadata,
spec=V1alpha2InferenceServiceSpec(default=default_custom_model_spec))
KFServing = KFServingClient()
KFServing.create(isvc)
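# Optionally block until the InferenceService reports Ready before querying.
# Hedged: the watch/timeout arguments assume this kfserving SDK version
# supports them; adjust if your client API differs.
KFServing.get('prophet-model-api', namespace='default', watch=True, timeout_seconds=180)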
!curl -i --header "Content-Type: application/json" -X POST http://prophet-model-api-predictor-default-7z66r-private.default.svc.cluster.local/predict --data '{"ds":["2020-07-14"]}'
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Config
Step2: Minio
Step3: Start
Step4: Model Training
Step5: Prediction
Step6: Projected vs Reality
Step7: Define BentoML Service
Step8: Pack BentoML Service
Step9: BentoML Service
Step10: Explore the Bentoml service
Step11: Run Bento Service
Step12: Explore BentoML Generated Artifacts
Step13: Stitching BentoML to Kubeflow Fairing
Step14: Cluster Building
Step15: Deploy the Service
Step16: Querying the Inference Server
|
11,693
|
<ASSISTANT_TASK:>
Python Code:
len
max
print
import requests
requests.get
# And if we just wanted to use them, for some reason
n = -34
print(n, "in absolute value is", abs(n))
print("We can add after casting to int:", 55 + int("55"))
n = 4.4847
print(n, "can be rounded to", round(n))
print(n, "can also be rounded to 2 decimal points", round(n, 2))
numbers = [4, 22, 40, 54]
print("The total of the list is", sum(numbers))
def urlretrieve(url, filename=None, reporthook=None, data=None):
url_type, path = splittype(url)
with contextlib.closing(urlopen(url, data)) as fp:
headers = fp.info()
# Just return the local path and the "headers" for file://
# URLs. No sense in performing a copy unless requested.
if url_type == "file" and not filename:
return os.path.normpath(path), headers
# Handle temporary file setup.
if filename:
tfp = open(filename, 'wb')
else:
tfp = tempfile.NamedTemporaryFile(delete=False)
filename = tfp.name
_url_tempfiles.append(filename)
with tfp:
result = filename, headers
bs = 1024*8
size = -1
read = 0
blocknum = 0
if "content-length" in headers:
size = int(headers["Content-Length"])
if reporthook:
reporthook(blocknum, bs, size)
while True:
block = fp.read(bs)
if not block:
break
read += len(block)
tfp.write(block)
blocknum += 1
if reporthook:
reporthook(blocknum, bs, size)
if size >= 0 and read < size:
raise ContentTooShortError(
"retrieval incomplete: got only %i out of %i bytes"
% (read, size), result)
return result
# A function to multiply a number by two
def double(number):
bigger = number * 2
return bigger
#what happens inside the function STAYS inside the function
#unless you use return, you don't know what happens within the function
print("2 times two is", double(2))
print("10 times two is", double(10))
print("56 times two is", double(56))
age = 76
print("Double your age is", double(age))
def greet(name):
return "Hello " + name
# This one works
print(greet("Soma"))
# Overwrite the function greet with a string
greet = "blah"
# Trying the function again breaks
print(greet("Soma"))
def exclaim(potato_soup):
return potato_soup + "!!!!!!!!!!"
invitation = "I hope you can come to my wedding"
print(exclaim(invitation))
line = "I am sorry to hear you have the flu"
print(exclaim(line))
name = "Nancy"
name_length = len(name)
print("Hello", name, "your name is", name_length, "letters long")
name = "Brick"
name_length = len(name)
print("Hello", name, "your name is", name_length, "letters long")
name = "Saint Augustine"
name_length = len(name)
print("Hello", name, "your name is", name_length, "letters long")
def weird_greeting(name):
name_length = len(name)
print("Hello", name, "your name is", name_length, "letters long")
weird_greeting("Nancy")
weird_greeting("Brick")
weird_greeting("Saint Augustine")
# Our cool function
def size_comparison(a, b):
if a > b:
return "Larger"
else:
return "Smaller"
print(size_comparison(4, 5.5))
print(size_comparison(65, 2))
print(size_comparison(34.2, 33))
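# One edge case worth noticing (an addition): equal inputs fall through to the
# else branch, so ties are reported as "Smaller".
print(size_comparison(5, 5))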
def to_kmh(speed):
    return round(speed * 1.6)
mph = 40
print("You are driving", mph, "in mph")
print("You are driving", to_kmh(mph), "in kmh")
#magic numbers -- constants whose meaning isn't obvious; two alternative versions:
#def to_mpm(speed):
#return speed * 26.8
#return to_kmh(speed) * 1000 / 60
def to_mpm(speed):
return round(speed * 26.8224)
mph = 40
print("You are driving", mph, "in mph")
print("You are driving", to_kmh(mph), "in kmh")
print("You are driving", to_mpm(mph), "in meters/minute")
# Alternative implementation: convert to km/h first, then km/h -> meters/minute
def to_mpm(speed):
    mpm = to_kmh(speed) * 16.6667
    return round(mpm)
# You have to wash ten cars on every street, along with the cars in your driveway.
# With the following list of streets, how many cars do we have?
def total(n):
return n * 10
# Here are the streets
streets = ['10th Ave', '11th Street', '45th Ave']
# Let's count them up
total = len(streets)
# And add one
count = total + 1
# And see how many we have
print(total(count))
first = { 'measurement': 3.4, 'scale': 'kilometer' }
second = { 'measurement': 9.1, 'scale': 'mile' }
third = { 'measurement': 2.0, 'scale': 'meter' }
fourth = { 'measurement': 9.0, 'scale': 'inches' }
def to_meters(measurement):
    if measurement['scale'] == 'kilometer':
        return measurement['measurement'] * 1000
    if measurement['scale'] == 'meter':
        return measurement['measurement']
    if measurement['scale'] == 'mile':
        return measurement['measurement'] * 1.6 * 1000
    return 99  # fallback for scales we have not handled yet
print(to_meters(first))
print(to_meters(second))
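# A possible extension (a sketch, not part of the original exercise): a lookup
# table of conversion factors also handles `third` (meters) and `fourth`
# (inches; 0.0254 meters per inch).
def to_meters_v2(measurement):
    factors = {'kilometer': 1000, 'meter': 1, 'mile': 1.6 * 1000, 'inches': 0.0254}
    return measurement['measurement'] * factors.get(measurement['scale'], 0)

print(to_meters_v2(third))
print(to_meters_v2(fourth))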
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Almost everything useful is a function. Python has a ton of other built-in functions!
Step2: See? Functions make the world run.
Step3: Horrifying, right? Thank goodness for functions.
Step4: It has a handful of parts
Step5: Function Naming
Step6: Parameters
Step7: invitation and line both get renamed to potato_soup inside of the function, so you can reuse the function with any variable of any name.
Step8: Do you know how exhausted I got typing all of that out? And how it makes no sense at all? Luckily, functions save us
Step9: return
Step10: Your Turn
Step11: 1b. Driving Speed Part II
Step12: 1c. Driving Speed Part III
Step13: 2. Broken Function
Step14: 3. Data converter
|
11,694
|
<ASSISTANT_TASK:>
Python Code:
import bqplot.pyplot as plt
# first, let's create two vectors x and y to plot using a Lines mark
import numpy as np
x = np.linspace(-10, 10, 100)
y = np.sin(x)
# 1. Create the figure object
fig = plt.figure(title='Simple Line Chart')
# 2. By default axes are created with basic defaults. If you want to customize
#    the axes, create a dict and pass it to the `axes_options` argument in the marks
axes_opts = {'x': {'label': 'X'},
'y': {'label': 'Y'}}
# 3. Create a Lines mark by calling plt.plot function
line = plt.plot(x=x, y=y, axes_options=axes_opts) # note that custom axes options are passed here
# 4. Render the figure using plt.show()
plt.show()
# first, let's create two vectors x and y to plot a bar chart
x = list('ABCDE')
y = np.random.rand(5)
# 1. Create the figure object
fig = plt.figure(title='Simple Bar Chart')
# 2. Customize the axes options
axes_opts = {'x': {'label': 'X', 'grid_lines': 'none'},
'y': {'label': 'Y', 'tick_format': '.0%'}}
# 3. Create a Bars mark by calling plt.bar function
bar = plt.bar(x=x, y=y, padding=.2, axes_options=axes_opts)
# 4. directly display the figure object created in step 1 (note that the toolbar no longer shows up)
fig
# first, let's create two vectors x and y
import numpy as np
x = np.linspace(-10, 10, 25)
y = 3 * x + 5
y_noise = y + 10 * np.random.randn(25) # add some random noise to y
# 1. Create the figure object
fig = plt.figure(title='Scatter and Line')
# 3. Create line and scatter marks
# additional attributes (stroke_width, colors etc.) can be passed as attributes to the mark objects as needed
line = plt.plot(x=x, y=y, colors=['green'], stroke_width=3)
scatter = plt.scatter(x=x, y=y_noise, colors=['red'], stroke='black')
# setting x and y axis labels using pyplot functions. Note that these functions
# should be called only after creating the marks
plt.xlabel('X')
plt.ylabel('Y')
# 4. render the figure
fig
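# One more variation (a sketch; assumes bqplot's pie API): a pie chart follows
# the same four-step pattern; only the mark-creation call changes.
fig = plt.figure(title='Simple Pie Chart')
pie = plt.pie(sizes=np.random.rand(4), labels=list('ABCD'))
fig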
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Steps for building plots in pyplot
Step1: For creating other marks (like scatter, pie, bars, etc.), only step 2 needs to be changed. Let's look at a simple example to create a bar chart
Step2: Multiple marks can be rendered in a figure. It's as easy as creating marks one after another. They'll all be added to the same figure!
|
11,695
|
<ASSISTANT_TASK:>
Python Code:
import random
import os
import numpy as np
from work.dataset.activitynet import ActivityNetDataset
dataset = ActivityNetDataset(
videos_path='../dataset/videos.json',
labels_path='../dataset/labels.txt'
)
videos = dataset.get_subset_videos('validation')
videos = random.sample(videos, 8)
examples = []
for v in videos:
file_dir = os.path.join('../downloads/features/', v.features_file_name)
if not os.path.isfile(file_dir):
os.system('scp imatge:~/work/datasets/ActivityNet/v1.3/features/{} ../downloads/features/'.format(v.features_file_name))
features = np.load(file_dir)
examples.append((v, features))
from keras.layers import Input, BatchNormalization, LSTM, TimeDistributed, Dense
from keras.models import Model
input_features = Input(batch_shape=(1, 1, 4096,), name='features')
input_normalized = BatchNormalization(mode=1)(input_features)
lstm1 = LSTM(512, return_sequences=True, stateful=True, name='lstm1')(input_normalized)
lstm2 = LSTM(512, return_sequences=True, stateful=True, name='lstm2')(lstm1)
output = TimeDistributed(Dense(201, activation='softmax'), name='fc')(lstm2)
model = Model(input=input_features, output=output)
model.load_weights('../work/scripts/training/lstm_activity_classification/model_snapshot/lstm_activity_classification_02_e100.hdf5')
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
predictions = []
for v, features in examples:
nb_instances = features.shape[0]
X = features.reshape((nb_instances, 1, 4096))
model.reset_states()
prediction = model.predict(X, batch_size=1)
prediction = prediction.reshape(nb_instances, 201)
class_prediction = np.argmax(prediction, axis=1)
predictions.append((v, prediction, class_prediction))
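# Hedged addition (relies on the same dataset API used in the plotting cells
# below): frame-level accuracy of the argmax prediction against the
# per-instance ground truth.
for v, prediction, class_prediction in predictions:
    v.get_video_instances(16, 0)
    ground_truth = np.array([instance.output for instance in v.instances])
    n = min(len(ground_truth), len(class_prediction))
    acc = np.mean(ground_truth[:n] == class_prediction[:n])
    print('{}: frame-level accuracy {:.2%}'.format(v.video_id, acc))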
from IPython.display import YouTubeVideo, display
for v, prediction, class_prediction in predictions:
print('Video ID: {}\t\tGround truth: {}'.format(v.video_id, v.get_activity()))
class_means = np.mean(prediction, axis=0)
top_3 = np.argsort(class_means[1:])[::-1][:3] + 1
scores = class_means[top_3]/np.sum(class_means[1:])
for index, score in zip(top_3, scores):
if score == 0.:
continue
label = dataset.labels[index][1]
print('{:.4f}\t{}'.format(score, label))
vid = YouTubeVideo(v.video_id)
display(vid)
print('\n')
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib
normalize = matplotlib.colors.Normalize(vmin=0, vmax=201)
for v, prediction, class_prediction in predictions:
v.get_video_instances(16, 0)
ground_truth = np.array([instance.output for instance in v.instances])
nb_instances = len(v.instances)
print('Video ID: {}\nMain Activity: {}'.format(v.video_id, v.get_activity()))
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(ground_truth, (2, nb_instances)), norm=normalize)
plt.title('Ground Truth')
plt.show()
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(class_prediction, (2, nb_instances)), norm=normalize)
plt.title('Prediction')
plt.show()
print('\n')
normalize = matplotlib.colors.Normalize(vmin=0, vmax=1)
for v, prediction, class_prediction in predictions:
v.get_video_instances(16, 0)
ground_truth = np.array([instance.output for instance in v.instances])
nb_instances = len(v.instances)
output_index = dataset.get_output_index(v.label)
print('Video ID: {}\nMain Activity: {}'.format(v.video_id, v.get_activity()))
class_means = np.mean(prediction, axis=0)
top_3 = np.argsort(class_means[1:])[::-1][:3] + 1
scores = class_means[top_3]/np.sum(class_means[1:])
for index, score in zip(top_3, scores):
if score == 0.:
continue
label = dataset.labels[index][1]
print('{:.4f}\t{}'.format(score, label))
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(ground_truth/output_index, (2, nb_instances)), norm=normalize)
plt.title('Ground Truth')
plt.show()
# print only the positions that predicted the global ground truth category
temp = np.zeros((nb_instances))
temp[class_prediction==output_index] = 1
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(temp, (2, nb_instances)), norm=normalize)
plt.title('Prediction of the ground truth class')
plt.show()
plt.figure(num=None, figsize=(18, 1), dpi=100)
plt.contourf(np.broadcast_to(prediction[:,output_index], (2, nb_instances)), norm=normalize)
plt.title('Probability for ground truth')
plt.show()
print('\n')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the trained model with its weigths
Step2: Extract the predictions for each video and print the scoring
Step3: Print the global classification results
Step4: Now show the temporal prediction for the activity happening at the video.
|
11,696
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -q -U tensorflow tensorflow-hub tensorflow-addons
!pip install -q -U tflite-support
!pip install -q -U tflite-model-maker
!pip install -q -U tensorflow-text==2.10.0b2
!sudo apt-get -qq install libportaudio2 # Needed by tflite-support
import json
import math
import os
import pickle
import random
import shutil
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
import tensorflow.compat.v1 as tf1
from tensorflow.keras import layers
import tensorflow_addons as tfa
import tensorflow_hub as hub
import tensorflow_text as text
from tensorflow_text.python.ops import fast_sentencepiece_tokenizer as sentencepiece_tokenizer
# Suppressing tf.hub warnings
tf.get_logger().setLevel('ERROR')
DATASET_DIR = 'datasets'
CAPTION_URL = 'http://images.cocodataset.org/annotations/annotations_trainval2014.zip'
TRAIN_IMAGE_URL = 'http://images.cocodataset.org/zips/train2014.zip'
VALID_IMAGE_URL = 'http://images.cocodataset.org/zips/val2014.zip'
TRAIN_IMAGE_DIR = os.path.join(DATASET_DIR, 'train2014')
VALID_IMAGE_DIR = os.path.join(DATASET_DIR, 'val2014')
TRAIN_IMAGE_PREFIX = 'COCO_train2014_'
VALID_IMAGE_PREFIX = 'COCO_val2014_'
IMAGE_SIZE = (384, 384)
EFFICIENT_NET_URL = 'https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_s/feature_vector/2'
UNIVERSAL_SENTENCE_ENCODER_URL = 'https://tfhub.dev/google/universal-sentence-encoder-lite/2'
BATCH_SIZE = 256
NUM_EPOCHS = 10
SEQ_LENGTH = 128
EMB_SIZE = 128
#@title Functions for downloading and parsing annotations.
def parse_annotation_json(json_path):
# Assuming the json file is already downloaded.
with open(json_path, 'r') as f:
json_obj = json.load(f)
# Parsing out the following information from the annotation json: the COCO
# image id and their corresponding flickr post id, as well as the captions.
mapping = dict()
for caption in json_obj['annotations']:
image_id = caption['image_id']
if image_id not in mapping:
mapping[image_id] = [[]]
mapping[image_id][0].append(caption['caption'])
for image in json_obj['images']:
# The flickr url here is the CDN url. We need to split it to get the post
# id.
flickr_url = image['flickr_url']
url_parts = flickr_url.split('/')
flickr_id = url_parts[-1].split('_')[0]
mapping[image['id']].append(flickr_id)
return list(mapping.items())
def get_train_valid_captions():
# Parse and cache the annotation for train and valid
train_pickle_path = os.path.join(DATASET_DIR, 'train_captions.pickle')
valid_pickle_path = os.path.join(DATASET_DIR, 'valid_captions.pickle')
if not os.path.exists(train_pickle_path) or not os.path.exists(
valid_pickle_path):
# Parse and cache the annotations if they don't exist
annotation_zip = tf.keras.utils.get_file(
'annotations.zip',
cache_dir=os.path.abspath('.'),
cache_subdir=os.path.join(DATASET_DIR, 'tmp'),
origin=CAPTION_URL,
extract=True,
)
os.remove(annotation_zip)
train_img_cap = parse_annotation_json(
os.path.join(DATASET_DIR, 'tmp', 'annotations',
'captions_train2014.json'))
valid_img_cap = parse_annotation_json(
os.path.join(DATASET_DIR, 'tmp', 'annotations',
'captions_val2014.json'))
with open(train_pickle_path, 'wb') as f:
pickle.dump(train_img_cap, f)
with open(valid_pickle_path, 'wb') as f:
pickle.dump(valid_img_cap, f)
shutil.rmtree(os.path.join(DATASET_DIR, 'tmp'))
else:
# Load the cached annotations
with open(train_pickle_path, 'rb') as f:
train_img_cap = pickle.load(f)
with open(valid_pickle_path, 'rb') as f:
valid_img_cap = pickle.load(f)
return train_img_cap, valid_img_cap
#@title Functions for downloading the images and create the dataset.
def get_sentencepiece_tokenizer_in_tf2():
# The universal sentence encoder model from TFHub is in TF1 Module format. We
# need to directly access the asset_paths to get the sentencepiece tokenizer
# proto path.
module = hub.load(UNIVERSAL_SENTENCE_ENCODER_URL)
spm_path = module.asset_paths[0].asset_path.numpy()
with tf.io.gfile.GFile(spm_path, mode='rb') as f:
return sentencepiece_tokenizer.FastSentencepieceTokenizer(f.read())
def prepare_dataset(id_image_info_list,
image_file_prefix,
image_dir,
image_zip_url,
shuffle=False):
# Download and unzip the dataset if it's not there already.
if not os.path.exists(image_dir):
image_zip = tf.keras.utils.get_file(
'image.zip',
cache_dir=os.path.abspath('.'),
cache_subdir=os.path.join(DATASET_DIR),
origin=image_zip_url,
extract=True,
)
os.remove(image_zip)
# Convert the lists into tensors so that we can index into it in the dataset
# transformations later.
coco_ids, image_info = zip(*id_image_info_list)
captions, flickr_ids = zip(*image_info)
file_names = list(
map(
lambda id: os.path.join(image_dir, '%s%012d.jpg' %
(image_file_prefix, id)), coco_ids))
coco_ids_tensor = tf.constant(coco_ids)
captions_tensor = tf.ragged.constant(captions)
file_names_tensor = tf.constant(file_names)
flickr_ids_tensor = tf.constant(flickr_ids)
# The initial dataset only contains the index. This is to make sure the
# dataset has a known size.
dataset = tf.data.Dataset.range(len(coco_ids))
sp = get_sentencepiece_tokenizer_in_tf2()
def _load_image_and_select_caption(i):
image_id = coco_ids_tensor[i]
captions = captions_tensor[i]
image_path = file_names_tensor[i]
flickr_id = flickr_ids_tensor[i]
image = tf.image.decode_jpeg(tf.io.read_file(image_path), channels=3)
# Randomly select one caption from the many captions we have for each image
caption_idx = tf.random.uniform((1,),
minval=0,
maxval=tf.shape(captions)[0],
dtype=tf.int32)[0]
caption = captions[caption_idx]
caption = tf.sparse.from_dense(sp.tokenize(caption))
example = {
'image': image,
'image_id': image_id,
'caption': caption,
'flickr_id': flickr_id
}
return example
def _resize_image(example):
# Efficient net requires the pixels to be in range of [0, 1].
example['image'] = tf.image.resize(example['image'], size=IMAGE_SIZE) / 255
return example
dataset = (
# Load the images from disk and decode them into numpy arrays.
dataset.map(
_load_image_and_select_caption,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=not shuffle)
# Resizing image is slow. We put the stage into a separate map so that it
# could get more threads to not be the bottleneck.
.map(
_resize_image,
num_parallel_calls=tf.data.AUTOTUNE,
deterministic=not shuffle))
if shuffle:
dataset = dataset.shuffle(BATCH_SIZE * 10)
dataset = dataset.batch(BATCH_SIZE)
return dataset
# We parse the caption json files first.
train_img_cap, valid_img_cap = get_train_valid_captions()
print(f'Train number of images: {len(train_img_cap)}')
print(f'Valid number of images: {len(valid_img_cap)}')
example = train_img_cap[0]
print(f'COCO image id: {example[0]}')
print(f'Captions: {example[1][0]}')
print(f'Flickr post url: http://flickr.com/photo.gne?id={example[1][1]}')
# Shuffle both the train and validation sets
random.shuffle(valid_img_cap)
random.shuffle(train_img_cap)
# We randomly sample 5000 image-caption pairs from validation set for validation
# during training, to match the setup of
# https://www.tensorflow.org/datasets/catalog/coco_captions. However, when
# generating the retrieval database later on, we will use all the images in both
# validation and training splits.
valid_dataset = prepare_dataset(
valid_img_cap[:5000],
VALID_IMAGE_PREFIX,
VALID_IMAGE_DIR,
VALID_IMAGE_URL)
train_dataset = prepare_dataset(
train_img_cap,
TRAIN_IMAGE_PREFIX,
TRAIN_IMAGE_DIR,
TRAIN_IMAGE_URL,
shuffle=True)
def project_embeddings(embeddings, num_projection_layers, projection_dims,
dropout_rate):
projected_embeddings = layers.Dense(units=projection_dims)(embeddings)
for _ in range(num_projection_layers):
x = tf.nn.relu(projected_embeddings)
x = layers.Dense(projection_dims)(x)
x = layers.Dropout(dropout_rate)(x)
x = layers.Add()([projected_embeddings, x])
projected_embeddings = layers.LayerNormalization()(x)
# Finally we L2 normalize the embeddings. In general, L2 normalized embeddings
# are easier to retrieve.
projected_embeddings = tf.math.l2_normalize(projected_embeddings, axis=1)
return projected_embeddings
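# --- Sanity-check sketch (illustrative, not in the original tutorial):
# after tf.math.l2_normalize every embedding row has unit norm, so a plain
# dot product between two embeddings is exactly their cosine similarity.
import numpy as np
_demo = tf.math.l2_normalize(tf.random.normal((4, EMB_SIZE)), axis=1)
print(np.linalg.norm(_demo.numpy(), axis=1))  # expect values ~1.0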
def create_image_encoder(num_projection_layers,
projection_dims,
dropout_rate,
trainable=False):
efficient_net = hub.KerasLayer(EFFICIENT_NET_URL, trainable=trainable)
inputs = layers.Input(shape=IMAGE_SIZE + (3,), name='image_input')
embeddings = efficient_net(inputs)
outputs = project_embeddings(embeddings, num_projection_layers,
projection_dims, dropout_rate)
return keras.Model(inputs, outputs, name='image_encoder')
def create_text_encoder():
encoder = hub.KerasLayer(
UNIVERSAL_SENTENCE_ENCODER_URL,
name='universal_sentence_encoder',
signature='default')
encoder.trainable = False
inputs = layers.Input(
shape=(None,), dtype=tf.int64, name='text_input', sparse=True)
embeddings = encoder(
dict(
values=inputs.values,
indices=inputs.indices,
dense_shape=inputs.dense_shape))
return keras.Model(inputs, embeddings, name='text_encoder')
def create_text_embedder_projection(input_dim, num_projection_layers,
projection_dims, dropout_rate):
inputs = layers.Input(shape=(input_dim), dtype=tf.float32, name='text_input')
outputs = project_embeddings(inputs, num_projection_layers, projection_dims,
dropout_rate)
return keras.Model(inputs, outputs, name='projection_layers')
class DualEncoder(keras.Model):
def __init__(self,
text_encoder,
text_encoder_projection,
image_encoder,
temperature,
**kwargs):
super(DualEncoder, self).__init__(**kwargs)
self.text_encoder = text_encoder
self.text_encoder_projection = text_encoder_projection
self.image_encoder = image_encoder
# Temperature controls the contrast of softmax output. In general, a low
# temperature increases the contrast and a high temperature decreases it.
self.temperature = temperature
self.loss_tracker = keras.metrics.Mean(name='loss')
@property
def metrics(self):
return [self.loss_tracker]
def call(self, features, training=False):
# If there are two GPUs present, we use one of them for image encoder and
# one for text encoder. If there's only one GPU then they will be trained on
# the same GPU.
with tf.device('/gpu:0'):
caption_embeddings = self.text_encoder(
features['caption'], training=False)
caption_embeddings = self.text_encoder_projection(
caption_embeddings, training=training)
with tf.device('/gpu:1'):
image_embeddings = self.image_encoder(
features['image'], training=training)
return caption_embeddings, image_embeddings
def compute_loss(self, caption_embeddings, image_embeddings):
# Computing the loss with dot product similarity between image and text
# embeddings.
logits = (
tf.matmul(caption_embeddings, image_embeddings, transpose_b=True) /
self.temperature)
images_similarity = tf.matmul(
image_embeddings, image_embeddings, transpose_b=True)
captions_similarity = tf.matmul(
caption_embeddings, caption_embeddings, transpose_b=True)
# The targets is the mean of the self-similarity of the captions and images.
# This is more lenient to the similar examples appeared in the same batch.
targets = keras.activations.softmax(
(captions_similarity + images_similarity) / (2 * self.temperature))
captions_loss = keras.losses.categorical_crossentropy(
y_true=targets, y_pred=logits, from_logits=True)
images_loss = keras.losses.categorical_crossentropy(
y_true=tf.transpose(targets),
y_pred=tf.transpose(logits),
from_logits=True)
return (captions_loss + images_loss) / 2
def train_step(self, features):
with tf.GradientTape() as tape:
# Forward pass
caption_embeddings, image_embeddings = self(features, training=True)
loss = self.compute_loss(caption_embeddings, image_embeddings)
# Backward pass
gradients = tape.gradient(loss, self.trainable_variables)
self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
self.loss_tracker.update_state(loss)
return {'loss': self.loss_tracker.result()}
def test_step(self, features):
caption_embeddings, image_embeddings = self(features, training=False)
loss = self.compute_loss(caption_embeddings, image_embeddings)
self.loss_tracker.update_state(loss)
return {'loss': self.loss_tracker.result()}
# The text embedder consists of two models. One is the frozen base universal
# sentence encoder, and the other is the trainable projection layer. We are
# doing this instead of one model to make later TFLite model conversion easier.
text_encoder = create_text_encoder()
projection_layers = create_text_embedder_projection(
input_dim=512, # Universal sentence encoder output has 512 dimensions
num_projection_layers=1,
projection_dims=EMB_SIZE,
dropout_rate=0.1)
image_encoder = create_image_encoder(
num_projection_layers=1, projection_dims=EMB_SIZE, dropout_rate=0.1)
dual_encoder = DualEncoder(
text_encoder, projection_layers, image_encoder, temperature=0.05)
dual_encoder.compile(
optimizer=tfa.optimizers.AdamW(learning_rate=0.001, weight_decay=0.001))
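# --- Illustrative sketch (not part of the original training code): run the
# symmetric contrastive loss once on random unit embeddings to get a feel
# for its scale before training. With uncorrelated embeddings the mean loss
# should land in the rough vicinity of log(batch_size).
import numpy as np
_caps = tf.math.l2_normalize(tf.random.normal((8, EMB_SIZE)), axis=1)
_imgs = tf.math.l2_normalize(tf.random.normal((8, EMB_SIZE)), axis=1)
_toy_loss = dual_encoder.compute_loss(_caps, _imgs)
print('toy mean loss:', float(tf.reduce_mean(_toy_loss)),
      '| log(8) =', float(np.log(8.0)))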
# We train the first three epochs with the learning rate of 0.001 and
# decrease it exponentially later on.
def lr_scheduler(epoch, lr):
if epoch < 3:
return lr
else:
return max(lr * tf.math.exp(-0.1), lr * 0.1)
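# --- Quick sketch of the resulting schedule (illustrative): replay the
# scheduler the way the Keras callback would, printing the learning rate
# each epoch will use.
_lr = 0.001
for _epoch in range(NUM_EPOCHS):
    _lr = lr_scheduler(_epoch, _lr)
    print('epoch %d: lr = %.6f' % (_epoch, float(_lr)))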
# In colab, training takes roughly 4s per step, around 24 mins per epoch
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_loss', patience=2, restore_best_weights=True)
history = dual_encoder.fit(
train_dataset,
epochs=NUM_EPOCHS,
validation_data=valid_dataset,
callbacks=[
tf.keras.callbacks.LearningRateScheduler(lr_scheduler), early_stopping
],
max_queue_size=2,
)
# Save the models. We are not going to save the text_encoder since it's frozen
# and the TF2 saved model for text_encoder has problems converting to TFLite.
print('Training completed. Saving image and text encoders.')
dual_encoder.image_encoder.save('image_encoder')
dual_encoder.text_encoder_projection.save('text_encoder_projection')
print('Models are saved.')
combined_valid_dataset = prepare_dataset(
valid_img_cap,
VALID_IMAGE_PREFIX,
VALID_IMAGE_DIR,
VALID_IMAGE_URL)
deterministic_train_dataset = prepare_dataset(
train_img_cap,
TRAIN_IMAGE_PREFIX,
TRAIN_IMAGE_DIR,
TRAIN_IMAGE_URL)
all_combined = deterministic_train_dataset.concatenate(combined_valid_dataset)
def create_metadata(image_file_prefix, image_dir):
def _create_metadata(image_info):
# This is the same way we generated the image paths in the prepare_dataset
# function above
coco_id = image_info[0]
flickr_id = image_info[1][1]
return ('%s_%s' %
(flickr_id,
os.path.join(image_dir, '%s%012d.jpg' %
(image_file_prefix, coco_id)))).encode('utf-8')
return _create_metadata
# We don't store the images in the index file, as that would be too big. We only
# store the image path and the corresponding Flickr id.
metadata = list(
map(create_metadata(TRAIN_IMAGE_PREFIX, TRAIN_IMAGE_DIR), train_img_cap))
metadata.extend(
map(create_metadata(VALID_IMAGE_PREFIX, VALID_IMAGE_DIR), valid_img_cap))
# Image encoder takes one input named `image_input` so we remove other values in
# the dataset.
image_dataset = all_combined.map(
lambda example: {'image_input': example['image']})
image_embeddings = dual_encoder.image_encoder.predict(image_dataset, verbose=1)
print(f'Embedding matrix shape: {image_embeddings.shape}')
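# --- Brute-force sanity check (illustrative, assumes numpy): before
# building the approximate ScaNN index, verify that under exact dot-product
# search each sampled image embedding retrieves itself as top-1 (rows are
# L2-normalised, so self-similarity should be maximal).
import numpy as np
_sample = image_embeddings[:100]
_sims = _sample @ image_embeddings.T  # (100, N) exact dot products
print('self retrieved as top-1 for',
      np.mean(_sims.argmax(axis=1) == np.arange(100)), 'of the sample')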
#@title Prepare the saved model
!rm -rf converted_model
# This create a new TF1 SavedModel from 1). The tfhub USE, and 2). The
# projection layers trained and saved from TF2.
with tf1.Graph().as_default() as g:
with tf1.Session() as sess:
# Reload the Universal Sentence Encoder model from tfhub. We can't just save
# the USE in TF2 as we did for the projection layers as that causes issues
# in the TFLite converter.
module = hub.Module(UNIVERSAL_SENTENCE_ENCODER_URL)
spm_path = sess.run(module(signature='spm_path'))
with tf1.io.gfile.GFile(spm_path, mode='rb') as f:
serialized_spm = f.read()
input_str = tf1.placeholder(dtype=tf1.string, shape=[None])
tokenizer = sentencepiece_tokenizer.FastSentencepieceTokenizer(
model=serialized_spm)
tokenized = tf1.sparse.from_dense(tokenizer.tokenize(input_str).to_tensor())
tokenized = tf1.cast(tokenized, dtype=tf1.int64)
encodings = module(
inputs=dict(
values=tokenized.values,
indices=tokenized.indices,
dense_shape=tokenized.dense_shape))
# Then combine it with the trained projection layers
projection_layers = tf1.keras.models.load_model('text_encoder_projection')
encodings = projection_layers(encodings)
sess.run([tf1.global_variables_initializer(), tf1.tables_initializer()])
# Save with SavedModelBuilder
builder = tf1.saved_model.Builder('converted_model')
sig_def = tf1.saved_model.predict_signature_def(
inputs={'input': input_str}, outputs={'output': encodings})
builder.add_meta_graph_and_variables(
sess,
tags=['serve'],
signature_def_map={
tf1.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY: sig_def
},
clear_devices=True)
builder.save()
print('Model saved to converted_model/')
converter = tf.lite.TFLiteConverter.from_saved_model('converted_model')
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
converter.allow_custom_ops = True
converted_model_tflite = converter.convert()
with open('text_embedder.tflite', 'wb') as f:
f.write(converted_model_tflite)
import tflite_model_maker as mm
scann_options = mm.searcher.ScaNNOptions(
# We use the dot product similarity as this is how the model is trained
distance_measure='dot_product',
# Enable space partitioning with K-Means tree
tree=mm.searcher.Tree(
# How many partitions to have. A rule of thumb is the square root of the
# dataset size. In this case it's 351.
num_leaves=int(math.sqrt(len(metadata))),
# Searching 4 partitions seems to give reasonable result. Searching more
# will definitely return better results, but it's more costly to run.
num_leaves_to_search=4),
# Compress each float to int8 in the embedding. See
# https://www.tensorflow.org/lite/api_docs/python/tflite_model_maker/searcher/ScoreAH
# for details
score_ah=mm.searcher.ScoreAH(
# Using 1 dimension per quantization block.
1,
# Generally 0.2 works pretty well.
anisotropic_quantization_threshold=0.2))
data = mm.searcher.DataLoader(
embedder_path='text_embedder.tflite',
dataset=image_embeddings,
metadata=metadata)
model = mm.searcher.Searcher.create_from_data(
data=data, scann_options=scann_options)
model.export(
export_filename='searcher_model.tflite',
userinfo='',
export_format=mm.searcher.ExportFormat.TFLITE)
from tflite_support.task import text
from tflite_support.task import core
options = text.TextSearcherOptions(
base_options=core.BaseOptions(
file_name='searcher_model.tflite'))
# The searcher returns 6 results
options.search_options.max_results = 6
tflite_searcher = text.TextSearcher.create_from_options(options)
def search_image_with_text(query_str, show_images=False):
neighbors = tflite_searcher.search(query_str)
for i, neighbor in enumerate(neighbors.nearest_neighbors):
metadata = neighbor.metadata.decode('utf-8').split('_')
flickr_id = metadata[0]
print('Flickr url for %d: http://flickr.com/photo.gne?id=%s' %
(i + 1, flickr_id))
if show_images:
plt.figure(figsize=(20, 13))
for i, neighbor in enumerate(neighbors.nearest_neighbors):
ax = plt.subplot(2, 3, i + 1)
# Using negative distance since on-device ScaNN returns negative
# dot-product distance.
ax.set_title('%d: Similarity: %.05f' % (i + 1, -neighbor.distance))
metadata = neighbor.metadata.decode('utf-8').split('_')
image_path = '_'.join(metadata[1:])
image = tf.image.decode_jpeg(
tf.io.read_file(image_path), channels=3) / 255
plt.imshow(image)
plt.axis('off')
search_image_with_text('A man riding on a bike')
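# --- Optional latency sketch (illustrative): time the end-to-end on-device
# search (tokenisation + text embedding + ScaNN lookup) over a few runs.
import time
_start = time.perf_counter()
for _ in range(20):
    tflite_searcher.search('A man riding on a bike')
print('avg search latency: %.2f ms' %
      ((time.perf_counter() - _start) / 20 * 1000))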
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: On-device Text-to-Image Search with TensorFlow Lite Searcher Library
Step2: Note you might need to restart the runtime after installation.
Step3: Get COCO dataset
Step4: Download the datasets and preprocess them.
Step5: Define models
Step6: We use Universal Sentence Encoder, a SOTA sentence embedding model, as the text encoder base model. The TFHub lite version is a TF1 saved model. To make it work well in TF2 and later TFLite conversion, we create two models, one is the frozen universal sentence encoder, and the other is the trainable projection layer.
Step7: This dual encoder model is derived from this Keras post
Step8: Train the Dual Encoder model
Step9: Train the dual encoder model.
Step10: Create the text-to-image Searcher model using Model Maker
Step11: Create the metadata (image file names and the flickr post id) from the dataset. This will later be packed into the TFLite model.
Step12: Generate the embeddings for all the images we have. We do it in Tensorflow with GPU instead of Model Maker. Again, these will be packed into the TFLite model.
Step13: Convert text embedder to TFLite
Step14: Convert and save the TFLite model. Here the model only has the text encoder. We will add in the retrieval index in the following steps.
Step15: Create TFLite Searcher model
Step16: Run inference using Task Library
Step17: Configure the searcher to return 6 results per query and not to L2 normalize the query embeddings because the text encoder has already normalized them. See source code on how to configure the TextSearcher.
Step18: We will not show the image here due to copyright issues. You can set show_images=True to display them (note that you can't set it to True unless you've downloaded the images at this cell). Please check the post links for the license of each image.
|
11,697
|
<ASSISTANT_TASK:>
Python Code:
#begin by importing flopy
import os
import sys
import numpy as np
#flopypath = '../..'
#if flopypath not in sys.path:
# sys.path.append(flopypath)
import flopy
workspace = os.path.join('data')
#make sure workspace directory exists
if not os.path.exists(workspace):
os.makedirs(workspace)
stress_period_data = [
[2, 3, 4, 10.7, 5000., -5.7], #layer, row, column, stage, conductance, river bottom
[2, 3, 5, 10.7, 5000., -5.7], #layer, row, column, stage, conductance, river bottom
[2, 3, 6, 10.7, 5000., -5.7], #layer, row, column, stage, conductance, river bottom
]
m = flopy.modflow.Modflow(modelname='test', model_ws=workspace)
riv = flopy.modflow.ModflowRiv(m, stress_period_data=stress_period_data)
m.write_input()
!more 'data/test.riv'
m = flopy.modflow.Modflow(modelname='test', model_ws=workspace)
dis = flopy.modflow.ModflowDis(m, nper=3)
riv = flopy.modflow.ModflowRiv(m, stress_period_data=stress_period_data)
m.write_input()
!more 'data/test.riv'
riv_dtype = flopy.modflow.ModflowRiv.get_default_dtype()
print(riv_dtype)
stress_period_data = np.zeros((3), dtype=riv_dtype)
stress_period_data = stress_period_data.view(np.recarray)
print('stress_period_data: ', stress_period_data)
print('type is: ', type(stress_period_data))
stress_period_data[0] = (2, 3, 4, 10.7, 5000., -5.7)
stress_period_data[1] = (2, 3, 5, 10.7, 5000., -5.7)
stress_period_data[2] = (2, 3, 6, 10.7, 5000., -5.7)
print(stress_period_data)
m = flopy.modflow.Modflow(modelname='test', model_ws=workspace)
riv = flopy.modflow.ModflowRiv(m, stress_period_data=stress_period_data)
m.write_input()
!more 'data/test.riv'
m = flopy.modflow.Modflow(modelname='test', model_ws=workspace)
dis = flopy.modflow.ModflowDis(m, nper=3)
riv = flopy.modflow.ModflowRiv(m, stress_period_data=stress_period_data)
m.write_input()
!more 'data/test.riv'
sp1 = [
[2, 3, 4, 10.7, 5000., -5.7], #layer, row, column, stage, conductance, river bottom
[2, 3, 5, 10.7, 5000., -5.7], #layer, row, column, stage, conductance, river bottom
[2, 3, 6, 10.7, 5000., -5.7], #layer, row, column, stage, conductance, river bottom
]
print(sp1)
riv_dtype = flopy.modflow.ModflowRiv.get_default_dtype()
sp5 = np.zeros((3), dtype=riv_dtype)
sp5 = sp5.view(np.recarray)
sp5[0] = (2, 3, 4, 20.7, 5000., -5.7)
sp5[1] = (2, 3, 5, 20.7, 5000., -5.7)
sp5[2] = (2, 3, 6, 20.7, 5000., -5.7)
print(sp5)
sp_dict = {0:0, 1:sp1, 2:0, 5:sp5}
m = flopy.modflow.Modflow(modelname='test', model_ws=workspace)
dis = flopy.modflow.ModflowDis(m, nper=8)
riv = flopy.modflow.ModflowRiv(m, stress_period_data=sp_dict)
m.write_input()
!more 'data/test.riv'
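# --- Illustrative sketch (not part of the original example): the stress
# period dictionary can also be built programmatically, e.g. with a river
# stage that rises by 1.0 every period. The model name 'test_rising' is
# just an arbitrary choice for this sketch.
rising = {kper: [[2, 3, j, 10.7 + kper, 5000., -5.7] for j in (4, 5, 6)]
          for kper in range(8)}
m = flopy.modflow.Modflow(modelname='test_rising', model_ws=workspace)
dis = flopy.modflow.ModflowDis(m, nper=8)
riv = flopy.modflow.ModflowRiv(m, stress_period_data=rising)
m.write_input()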
#create an empty array with an iface auxiliary variable at the end
riva_dtype = [('k', '<i8'), ('i', '<i8'), ('j', '<i8'),
('stage', '<f4'), ('cond', '<f4'), ('rbot', '<f4'),
('iface', '<i4'), ('boundname', object)]
riva_dtype = np.dtype(riva_dtype)
stress_period_data = np.zeros((3), dtype=riva_dtype)
stress_period_data = stress_period_data.view(np.recarray)
print('stress_period_data: ', stress_period_data)
print('type is: ', type(stress_period_data))
stress_period_data[0] = (2, 3, 4, 10.7, 5000., -5.7, 1, 'riv1')
stress_period_data[1] = (2, 3, 5, 10.7, 5000., -5.7, 2, 'riv2')
stress_period_data[2] = (2, 3, 6, 10.7, 5000., -5.7, 3, 'riv3')
print(stress_period_data)
m = flopy.modflow.Modflow(modelname='test', model_ws=workspace)
riv = flopy.modflow.ModflowRiv(m, stress_period_data=stress_period_data, dtype=riva_dtype, options=['aux iface'])
m.write_input()
!more 'data/test.riv'
#create an empty array based on nodenumber instead of layer, row, and column
rivu_dtype = [('nodenumber', '<i8'), ('stage', '<f4'), ('cond', '<f4'), ('rbot', '<f4')]
rivu_dtype = np.dtype(rivu_dtype)
stress_period_data = np.zeros((3), dtype=rivu_dtype)
stress_period_data = stress_period_data.view(np.recarray)
print('stress_period_data: ', stress_period_data)
print('type is: ', type(stress_period_data))
stress_period_data[0] = (77, 10.7, 5000., -5.7)
stress_period_data[1] = (245, 10.7, 5000., -5.7)
stress_period_data[2] = (450034, 10.7, 5000., -5.7)
print(stress_period_data)
m = flopy.modflow.Modflow(modelname='test', model_ws=workspace)
riv = flopy.modflow.ModflowRiv(m, stress_period_data=stress_period_data, dtype=rivu_dtype)
m.write_input()
print(workspace)
!more 'data/test.riv'
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: List of Boundaries
Step2: If we look at the River Package created here, you see that the layer, row, and column numbers have been increased by one.
Step3: If this model has more than one stress period, Flopy will assume that this boundary condition information applies until the end of the simulation
Step4: Recarray of Boundaries
Step5: Now that we know the structure of the recarray that we want to create, we can create a new one as follows.
Step6: We can then fill the recarray with our boundary conditions.
Step7: As before, if we have multiple stress periods, then this recarray will apply to all of them.
Step8: Dictionary of Boundaries
Step9: MODFLOW Auxiliary Variables
Step10: Working with Unstructured Grids
|
11,698
|
<ASSISTANT_TASK:>
Python Code:
# construct and simulate toy example: diffusive dynamics in a double-well potential
import numpy as np
import numpy.random as npr
import matplotlib.pyplot as plt
%matplotlib inline
offset = np.array([3,0])
def q(x):
''' unnormalized probability '''
return np.exp(-np.sum((x-offset)**2)) + np.exp(-np.sum((x+offset)**2))
def simulate_diffusion(x_0,q,step_size=0.01,max_steps=10000):
''' starting from x_0, simulate RW-MH '''
traj = np.zeros((max_steps+1,len(x_0)))
traj[0] = x_0
old_q = q(x_0)
for i in range(max_steps):
prop = traj[i]+npr.randn(len(x_0))*step_size
new_q = q(prop)
if new_q/old_q>npr.rand():
traj[i+1] = prop
old_q = new_q
else:
traj[i+1] = traj[i]
return traj
# collect some trajectories
npr.seed(0) # for repeatability
trajs = []
run_ids = []
for i,offset_ in enumerate([-offset,offset]): # analogous to 3 RUNs on Folding@Home
for _ in range(10): # for each RUN, collect 10 clones
trajs.append(simulate_diffusion(np.zeros(2)+offset_,q,max_steps=10000,step_size=0.1))
run_ids.append(i)
len(trajs)
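# --- Diagnostic sketch (illustrative): Metropolis acceptance rate of each
# trajectory. Rejected proposals copy the previous point exactly, so any
# step that moved was accepted. A rate far outside ~0.2-0.5 would suggest
# re-tuning step_size.
accept_rates = [np.mean(np.any(np.diff(t, axis=0) != 0, axis=1)) for t in trajs]
print('acceptance rate: %.2f +/- %.2f' % (np.mean(accept_rates), np.std(accept_rates)))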
# plot trajectories
r = 6
def plot_trajectories(trajs,alpha=1.0):
from matplotlib.pyplot import cm
cmap = cm.get_cmap('Spectral')
N = len(trajs)
for i,traj in enumerate(trajs):
c = cmap(float(i)/(N-1))
plt.plot(traj[:,0],traj[:,1],color=c,alpha=alpha)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title('Trajectories')
plt.xlim(-r,r)
plt.ylim(-r,r)
plot_trajectories(trajs)
n_bins=50
offsets = np.linspace(-r,r,n_bins)
plot_trajectories(trajs,alpha=0.3)
for offset in offsets:
plt.hlines(offset,-r,r,colors='grey')
plt.xlim(-r,r)
plt.ylim(-r,r)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title('Trajectories + discretization_fast')
offsets = np.linspace(-r,r,n_bins)
plot_trajectories(trajs,alpha=0.3)
for offset in offsets:
plt.vlines(offset,-r,r,colors='grey')
plt.xlim(-r,r)
plt.ylim(-r,r)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title('Trajectories + discretization_slow')
def axis_aligned_discretization(trajs,offsets,dim=0):
dtrajs = []
for traj in trajs:
ax = traj[:,dim]
bins = np.zeros((len(offsets)+1))
bins[0] = -np.inf
bins[1:] = offsets
dtraj = np.digitize(ax,bins)
dtrajs.append(dtraj)
return dtrajs
dtrajs_fast = axis_aligned_discretization(trajs,offsets,dim=1)
dtrajs_slow = axis_aligned_discretization(trajs,offsets,dim=0)
from msmbuilder.msm import MarkovStateModel
m = 6 # how to choose m beforehand?
msm = MarkovStateModel(n_timescales=m)
msm.fit(dtrajs_fast)
msm.score_
msm = MarkovStateModel(n_timescales=m)
msm.fit(dtrajs_slow)
msm.score_
def two_fold_cv(dtrajs,msm):
train_scores = []
test_scores = []
split = len(dtrajs) // 2  # integer division so the slice indices are ints (Python 3)
A = dtrajs[:split]
B = dtrajs[split:]
msm.fit(A)
train_scores.append(msm.score_)
try:
test_scores.append(msm.score(B))
except:
test_scores.append(np.nan)
msm.fit(B)
train_scores.append(msm.score_)
try:
test_scores.append(msm.score(A))
except:
test_scores.append(np.nan)
return train_scores,test_scores
len(dtrajs_fast),len(dtrajs_slow)
train_scores_fast, test_scores_fast = two_fold_cv(dtrajs_fast,msm)
train_scores_slow, test_scores_slow = two_fold_cv(dtrajs_slow,msm)
train_scores_fast, test_scores_fast
train_scores_slow, test_scores_slow
np.mean(train_scores_fast), np.mean(test_scores_fast)
np.mean(train_scores_slow), np.mean(test_scores_slow)
def leave_one_out_gmrq(dtrajs,msm):
train_scores = []
test_scores = []
for i,test in enumerate(dtrajs):
train = dtrajs[:i]+dtrajs[i+1:]
msm.fit(train)
train_scores.append(msm.score_)
try:
test_scores.append(msm.score(test))
except:
test_scores.append(np.nan)
return train_scores,test_scores
train_scores_fast, test_scores_fast = leave_one_out_gmrq(dtrajs_fast,msm)
train_scores_slow, test_scores_slow = leave_one_out_gmrq(dtrajs_slow,msm)
np.mean(train_scores_fast), np.mean(test_scores_fast)
np.mean(train_scores_slow), np.mean(test_scores_slow)
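# --- Illustrative sketch (assumes msmbuilder's MarkovStateModel exposes a
# fitted `timescales_` attribute): compare the implied timescales of the two
# discretizations. The slow-axis discretization should resolve a much longer
# leading timescale, consistent with its higher GMRQ score.
msm.fit(dtrajs_fast)
print('fast-axis leading timescales:', msm.timescales_[:3])
msm.fit(dtrajs_slow)
print('slow-axis leading timescales:', msm.timescales_[:3])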
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Two candidate discretizations
Step2: Discretization_fast
Step3: Discretization_slow
Step4: Extract discrete trajectories
Step5: Cross-validation
|
11,699
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import seaborn
class Bandit:
def __init__(self):
self.arm_values = np.random.normal(0,1,10)
self.K = np.zeros(10)
self.est_values = np.zeros(10)
def get_reward(self,action):
noise = np.random.normal(0,1)
reward = self.arm_values[action] + noise
return reward
def choose_eps_greedy(self,epsilon):
rand_num = np.random.random()
if epsilon>rand_num:
return np.random.randint(10)
else:
return np.argmax(self.est_values)
def update_est(self,action,reward):
self.K[action] += 1
alpha = 1./self.K[action]
self.est_values[action] += alpha * (reward - self.est_values[action])
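# --- Verification sketch (illustrative): the incremental update above with
# alpha = 1/K is algebraically identical to the running sample mean of all
# rewards observed for that arm.
_rewards = np.random.randn(1000) + 2.0
_est, _k = 0.0, 0
for _r in _rewards:
    _k += 1
    _est += (1.0 / _k) * (_r - _est)
print('incremental:', _est, '| batch mean:', _rewards.mean())  # should match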
def experiment(bandit,Npulls,epsilon):
history = []
for i in range(Npulls):
action = bandit.choose_eps_greedy(epsilon)
R = bandit.get_reward(action)
bandit.update_est(action,R)
history.append(R)
return np.array(history)
Nexp = 2000
Npulls = 5000
avg_outcome_eps0p0 = np.zeros(Npulls)
avg_outcome_eps0p01 = np.zeros(Npulls)
avg_outcome_eps0p1 = np.zeros(Npulls)
for i in range(Nexp):
bandit = Bandit()
avg_outcome_eps0p0 += experiment(bandit,Npulls,0.0)
bandit = Bandit()
avg_outcome_eps0p01 += experiment(bandit,Npulls,0.01)
bandit = Bandit()
avg_outcome_eps0p1 += experiment(bandit,Npulls,0.1)
avg_outcome_eps0p0 /= np.float(Nexp)
avg_outcome_eps0p01 /= np.float(Nexp)
avg_outcome_eps0p1 /= np.float(Nexp)
# plot results
import matplotlib.pyplot as plt
plt.plot(avg_outcome_eps0p0,label="eps = 0.0", alpha=0.5)
plt.plot(avg_outcome_eps0p01,label="eps = 0.01", alpha=0.5)
plt.plot(avg_outcome_eps0p1,label="eps = 0.1", alpha=0.5)
plt.ylim(0,2.2)
plt.legend()
plt.gcf().set_size_inches((8,3))
plt.show()
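# --- Alternative exploration sketch (illustrative, not part of the original
# experiment): softmax (Boltzmann) action selection weights arms by their
# estimated value instead of exploring uniformly, so it rarely picks the
# worst-looking arm when it explores. The temperature tau is an arbitrary
# choice here.
def choose_softmax(est_values, tau=0.1):
    prefs = est_values / tau
    prefs = prefs - prefs.max()            # subtract max for numerical stability
    probs = np.exp(prefs) / np.exp(prefs).sum()
    return np.random.choice(len(est_values), p=probs)

bandit = Bandit()
print('softmax-sampled arm:', choose_softmax(bandit.est_values))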
import gym
import numpy as np
env = gym.make('FrozenLake-v0')
#Initialize table with all zeros
Q = np.zeros([env.observation_space.n,env.action_space.n])
# Set learning parameters
lr = .9
gamma = 0.95
num_episodes = 10000
#create lists to contain total rewards and steps per episode
rList = []
for i in range(num_episodes):
#Reset environment and get first new observation
s = env.reset()
rAll = 0
d = False
j = 0
#The Q-Table learning algorithm
while j < 999999:
j+=1
#Choose an action by greedily (with noise) picking from Q table
a = np.argmax(Q[s,:] + np.random.randn(1,env.action_space.n)*(1./(i+1)))
#Get new state and reward from environment
s1,r,d,_ = env.step(a)
#Update Q-Table with new knowledge
Q[s,a] = Q[s,a] + lr*(r + gamma*np.max(Q[s1,:]) - Q[s,a])
rAll += r
s = s1
if d == True:
break
rList.append(rAll)
print "Score over time: " + str(sum(rList[-100:])/100)
print "Final Q-Table Values"
print Q
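# --- Illustrative sketch: render the greedy policy implied by the learned
# Q-table. FrozenLake-v0 is a 4x4 grid; gym's action encoding is
# 0=left, 1=down, 2=right, 3=up.
arrows = np.array(['<', 'v', '>', '^'])
print(arrows[np.argmax(Q, axis=1)].reshape(4, 4))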
import gym
import numpy as np
import random
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline
env = gym.make('FrozenLake-v0')
tf.reset_default_graph()
#These lines establish the feed-forward part of the network used to choose actions
inputs1 = tf.placeholder(shape=[1,16],dtype=tf.float32)
W = tf.Variable(tf.random_uniform([16,4],0,0.01))
Qout = tf.matmul(inputs1,W)
predict = tf.argmax(Qout,1)
#Below we obtain the loss by taking the sum of squares difference between the target and prediction Q values.
nextQ = tf.placeholder(shape=[1,4],dtype=tf.float32)
loss = tf.reduce_sum(tf.square(nextQ - Qout))
trainer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
updateModel = trainer.minimize(loss)
init = tf.global_variables_initializer()
# Set learning parameters
y = .99
e = 0.1
num_episodes = 2000
#create lists to contain total rewards and steps per episode
jList = []
rList = []
with tf.Session() as sess:
sess.run(init)
for i in range(num_episodes):
#Reset environment and get first new observation
s = env.reset()
rAll = 0
d = False
j = 0
#The Q-Network
while j < 99:
j+=1
#Choose an action by greedily (with e chance of random action) from the Q-network
a,allQ = sess.run([predict,Qout],
feed_dict={inputs1:np.identity(16)[s:s+1]})
if np.random.rand(1) < e:
a[0] = env.action_space.sample()
#Get new state and reward from environment
s1,r,d,_ = env.step(a[0])
#Obtain the Q' values by feeding the new state through our network
Q1 = sess.run(Qout,
feed_dict={inputs1:np.identity(16)[s1:s1+1]})
#Obtain maxQ' and set our target value for chosen action.
maxQ1 = np.max(Q1)
targetQ = allQ
targetQ[0,a[0]] = r + y*maxQ1
#Train our network using target and predicted Q values
_,W1 = sess.run([updateModel,W],
feed_dict={inputs1:np.identity(16)[s:s+1],
nextQ:targetQ})
rAll += r
s = s1
if d == True:
#Reduce chance of random action as we train the model.
e = 1./((i/50) + 10)
break
jList.append(j)
rList.append(rAll)
print "Percent of succesful episodes: " + str(sum(rList[-100:])/100) + "%"
plt.plot(rList)
plt.plot(jList)
from __future__ import division
import gym
import numpy as np
import random
import tensorflow as tf
import tensorflow.contrib.slim as slim
import matplotlib.pyplot as plt
import scipy.misc
import os
%matplotlib inline
import numpy as np
import random
import itertools
import scipy.misc
import matplotlib.pyplot as plt
class gameOb():
def __init__(self,coordinates,size,intensity,channel,reward,name):
self.x = coordinates[0]
self.y = coordinates[1]
self.size = size
self.intensity = intensity
self.channel = channel
self.reward = reward
self.name = name
class gameEnv():
def __init__(self,partial,size):
self.sizeX = size
self.sizeY = size
self.actions = 4
self.objects = []
self.partial = partial
a = self.reset()
plt.imshow(a,interpolation="nearest")
def reset(self):
self.objects = []
hero = gameOb(self.newPosition(),1,1,2,None,'hero')
self.objects.append(hero)
bug = gameOb(self.newPosition(),1,1,1,1,'goal')
self.objects.append(bug)
hole = gameOb(self.newPosition(),1,1,0,-1,'fire')
self.objects.append(hole)
bug2 = gameOb(self.newPosition(),1,1,1,1,'goal')
self.objects.append(bug2)
hole2 = gameOb(self.newPosition(),1,1,0,-1,'fire')
self.objects.append(hole2)
bug3 = gameOb(self.newPosition(),1,1,1,1,'goal')
self.objects.append(bug3)
bug4 = gameOb(self.newPosition(),1,1,1,1,'goal')
self.objects.append(bug4)
state = self.renderEnv()
self.state = state
return state
def moveChar(self,direction):
# 0 - up, 1 - down, 2 - left, 3 - right
hero = self.objects[0]
heroX = hero.x
heroY = hero.y
penalize = 0.
if direction == 0 and hero.y >= 1:
hero.y -= 1
if direction == 1 and hero.y <= self.sizeY-2:
hero.y += 1
if direction == 2 and hero.x >= 1:
hero.x -= 1
if direction == 3 and hero.x <= self.sizeX-2:
hero.x += 1
if hero.x == heroX and hero.y == heroY:
penalize = 0.0
self.objects[0] = hero
return penalize
def newPosition(self):
iterables = [ range(self.sizeX), range(self.sizeY)]
points = []
for t in itertools.product(*iterables):
points.append(t)
currentPositions = []
for objectA in self.objects:
if (objectA.x,objectA.y) not in currentPositions:
currentPositions.append((objectA.x,objectA.y))
for pos in currentPositions:
points.remove(pos)
location = np.random.choice(range(len(points)),replace=False)
return points[location]
def checkGoal(self):
others = []
for obj in self.objects:
if obj.name == 'hero':
hero = obj
else:
others.append(obj)
ended = False
for other in others:
if hero.x == other.x and hero.y == other.y:
self.objects.remove(other)
if other.reward == 1:
self.objects.append(gameOb(self.newPosition(),1,1,1,1,'goal'))
else:
self.objects.append(gameOb(self.newPosition(),1,1,0,-1,'fire'))
return other.reward,False
if ended == False:
return 0.0,False
def renderEnv(self):
#a = np.zeros([self.sizeY,self.sizeX,3])
a = np.ones([self.sizeY+2,self.sizeX+2,3])
a[1:-1,1:-1,:] = 0
hero = None
for item in self.objects:
a[item.y+1:item.y+item.size+1,item.x+1:item.x+item.size+1,item.channel] = item.intensity
if item.name == 'hero':
hero = item
if self.partial == True:
a = a[hero.y:hero.y+3,hero.x:hero.x+3,:]
b = scipy.misc.imresize(a[:,:,0],[84,84,1],interp='nearest')
c = scipy.misc.imresize(a[:,:,1],[84,84,1],interp='nearest')
d = scipy.misc.imresize(a[:,:,2],[84,84,1],interp='nearest')
a = np.stack([b,c,d],axis=2)
return a
def step(self,action):
penalty = self.moveChar(action)
reward,done = self.checkGoal()
state = self.renderEnv()
if reward is None:
print(done)
print(reward)
print(penalty)
return state,(reward+penalty),done
else:
return state,(reward+penalty),done
env = gameEnv(partial=False,size=5)
class Qnetwork():
def __init__(self,h_size):
#The network recieves a frame from the game, flattened into an array.
#It then resizes it and processes it through four convolutional layers.
self.scalarInput = tf.placeholder(shape=[None,21168],dtype=tf.float32)
self.imageIn = tf.reshape(self.scalarInput,shape=[-1,84,84,3])
self.conv1 = slim.conv2d( \
inputs=self.imageIn,num_outputs=32,kernel_size=[8,8],stride=[4,4],padding='VALID', biases_initializer=None)
self.conv2 = slim.conv2d( \
inputs=self.conv1,num_outputs=64,kernel_size=[4,4],stride=[2,2],padding='VALID', biases_initializer=None)
self.conv3 = slim.conv2d( \
inputs=self.conv2,num_outputs=64,kernel_size=[3,3],stride=[1,1],padding='VALID', biases_initializer=None)
self.conv4 = slim.conv2d( \
inputs=self.conv3,num_outputs=h_size,kernel_size=[7,7],stride=[1,1],padding='VALID', biases_initializer=None)
#We take the output from the final convolutional layer and split it into separate advantage and value streams.
self.streamAC,self.streamVC = tf.split(self.conv4,2,3)
self.streamA = slim.flatten(self.streamAC)
self.streamV = slim.flatten(self.streamVC)
xavier_init = tf.contrib.layers.xavier_initializer()
self.AW = tf.Variable(xavier_init([h_size//2,env.actions]))
self.VW = tf.Variable(xavier_init([h_size//2,1]))
self.Advantage = tf.matmul(self.streamA,self.AW)
self.Value = tf.matmul(self.streamV,self.VW)
#Then combine them together to get our final Q-values.
self.Qout = self.Value + tf.subtract(self.Advantage,tf.reduce_mean(self.Advantage,axis=1,keep_dims=True))
self.predict = tf.argmax(self.Qout,1)
#Below we obtain the loss by taking the sum of squares difference between the target and prediction Q values.
self.targetQ = tf.placeholder(shape=[None],dtype=tf.float32)
self.actions = tf.placeholder(shape=[None],dtype=tf.int32)
self.actions_onehot = tf.one_hot(self.actions,env.actions,dtype=tf.float32)
self.Q = tf.reduce_sum(tf.multiply(self.Qout, self.actions_onehot), axis=1)
self.td_error = tf.square(self.targetQ - self.Q)
self.loss = tf.reduce_mean(self.td_error)
self.trainer = tf.train.AdamOptimizer(learning_rate=0.0001)
self.updateModel = self.trainer.minimize(self.loss)
class experience_buffer():
def __init__(self, buffer_size = 50000):
self.buffer = []
self.buffer_size = buffer_size
def add(self,experience):
if len(self.buffer) + len(experience) >= self.buffer_size:
self.buffer[0:(len(experience)+len(self.buffer))-self.buffer_size] = []
self.buffer.extend(experience)
def sample(self,size):
return np.reshape(np.array(random.sample(self.buffer,size)),[size,5])
def processState(states):
return np.reshape(states,[21168])
def updateTargetGraph(tfVars,tau):
total_vars = len(tfVars)
op_holder = []
for idx,var in enumerate(tfVars[0:total_vars//2]):
op_holder.append(tfVars[idx+total_vars//2].assign((var.value()*tau) + ((1-tau)*tfVars[idx+total_vars//2].value())))
return op_holder
def updateTarget(op_holder,sess):
for op in op_holder:
sess.run(op)
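# --- Illustrative sketch (plain Python, not part of the graph): with soft
# updates theta_target <- tau*theta + (1-tau)*theta_target, the gap between
# target and primary weights shrinks geometrically by (1-tau) per update.
_tau = 0.001
for _step in (1, 100, 1000, 5000):
    print('after %5d updates the remaining gap is %.4f' % (_step, (1 - _tau) ** _step))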
batch_size = 32 #How many experiences to use for each training step.
update_freq = 4 #How often to perform a training step.
y = .99 #Discount factor on the target Q-values
startE = 1 #Starting chance of random action
endE = 0.1 #Final chance of random action
annealing_steps = 10000. #How many steps of training to reduce startE to endE.
num_episodes = 10000 #How many episodes of game environment to train network with.
pre_train_steps = 10000 #How many steps of random actions before training begins.
max_epLength = 50 #The max allowed length of our episode.
load_model = False #Whether to load a saved model.
path = "./dqn" #The path to save our model to.
h_size = 512 #The size of the final convolutional layer before splitting it into Advantage and Value streams.
tau = 0.001 #Rate to update target network toward primary network
tf.reset_default_graph()
mainQN = Qnetwork(h_size)
targetQN = Qnetwork(h_size)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
trainables = tf.trainable_variables()
targetOps = updateTargetGraph(trainables,tau)
myBuffer = experience_buffer()
#Set the rate of random action decrease.
e = startE
stepDrop = (startE - endE)/annealing_steps
#create lists to contain total rewards and steps per episode
jList = []
rList = []
total_steps = 0
#Make a path for our model to be saved in.
if not os.path.exists(path):
os.makedirs(path)
with tf.Session() as sess:
sess.run(init)
if load_model == True:
print('Loading Model...')
ckpt = tf.train.get_checkpoint_state(path)
saver.restore(sess,ckpt.model_checkpoint_path)
updateTarget(targetOps,sess) #Set the target network to be equal to the primary network.
for i in range(num_episodes):
episodeBuffer = experience_buffer()
#Reset environment and get first new observation
s = env.reset()
s = processState(s)
d = False
rAll = 0
j = 0
#The Q-Network
while j < max_epLength: #End the trial if the agent takes more than max_epLength (50) moves.
j+=1
#Choose an action by greedily (with e chance of random action) from the Q-network
if np.random.rand(1) < e or total_steps < pre_train_steps:
a = np.random.randint(0,4)
else:
a = sess.run(mainQN.predict,feed_dict={mainQN.scalarInput:[s]})[0]
s1,r,d = env.step(a)
s1 = processState(s1)
total_steps += 1
episodeBuffer.add(np.reshape(np.array([s,a,r,s1,d]),[1,5])) #Save the experience to our episode buffer.
if total_steps > pre_train_steps:
if e > endE:
e -= stepDrop
if total_steps % (update_freq) == 0:
trainBatch = myBuffer.sample(batch_size) #Get a random batch of experiences.
#Below we perform the Double-DQN update to the target Q-values
Q1 = sess.run(mainQN.predict,feed_dict={mainQN.scalarInput:np.vstack(trainBatch[:,3])})
Q2 = sess.run(targetQN.Qout,feed_dict={targetQN.scalarInput:np.vstack(trainBatch[:,3])})
end_multiplier = -(trainBatch[:,4] - 1)
doubleQ = Q2[range(batch_size),Q1]
targetQ = trainBatch[:,2] + (y*doubleQ * end_multiplier)
#Update the network with our target values.
_ = sess.run(mainQN.updateModel, \
feed_dict={mainQN.scalarInput:np.vstack(trainBatch[:,0]),mainQN.targetQ:targetQ, mainQN.actions:trainBatch[:,1]})
updateTarget(targetOps,sess) #Set the target network to be equal to the primary network.
rAll += r
s = s1
if d == True:
break
myBuffer.add(episodeBuffer.buffer)
jList.append(j)
rList.append(rAll)
#Periodically save the model.
if i % 1000 == 0:
saver.save(sess,path+'/model-'+str(i)+'.cptk')
print("Saved Model")
if len(rList) % 10 == 0:
print(total_steps,np.mean(rList[-10:]), e)
saver.save(sess,path+'/model-'+str(i)+'.cptk')
print("Percent of succesful episodes: " + str(sum(rList)/num_episodes) + "%")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: How are we estimating the value of an action?
Step2: Let's make three different experiments
Step3: Although ε-greedy action selection is an effective and popular means of balancing exploration and exploitation in reinforcement learning, one drawback is that when it explores it chooses equally among all actions. This means that it is as likely to choose the worst-appearing action as it is to choose the next-to-best action. In tasks where the worst actions are very bad, this may be unsatisfactory.
Step4: FrozenLake-v0 is considered "solved" when the agent obtains an average reward of at least 0.78 over 100 consecutive episodes.
Step5: Q-Learning with Neural Networks
Step6: We can see that the network begins to consistently reach the goal around the 1000 episode mark.
Step7: It also begins to progress through the environment for longer than chance around the 1000 episode mark as well.
Step8: Deep Q-networks
Step9: Above is an example of a starting environment in our simple game. The agent controls the blue square, and can move up, down, left, or right. The goal is to move to the green square (for +1 reward) and avoid the red square (for -1 reward). The position of the three blocks is randomized every episode.
|