| Unnamed: 0 (int64, 0-15.9k) | cleaned_code (stringlengths 67-124k, ⌀) | cleaned_prompt (stringlengths 168-30.3k, ⌀) |
|---|---|---|
8,700
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
from IPython.display import Image
from IPython.core.display import clear_output, display
from scipy.signal import convolve2d
from copy import copy
import time
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10.0, 6.0)
import matplotlib.cm as cm
from astropy.io import fits
import aplpy
#Disable astropy/aplpy logging
import logging
logger0 = logging.getLogger('astropy')
logger0.setLevel(logging.CRITICAL)
logger1 = logging.getLogger('aplpy')
logger1.setLevel(logging.CRITICAL)
HTML('../style/code_toggle.html')
fig = plt.figure(figsize=(16, 7))
gc1 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-dirty.fits', \
figure=fig, subplot=[0.0,0.1,0.35,0.8])
gc1.show_colorscale(vmin=-1., vmax=3.0, cmap='viridis')
gc1.hide_axis_labels()
gc1.hide_tick_labels()
plt.title('Dirty Image')
gc1.add_colorbar()
gc2 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-psf.fits', \
figure=fig, subplot=[0.5,0.1,0.35,0.8])
gc2.show_colorscale(cmap='viridis')
gc2.hide_axis_labels()
gc2.hide_tick_labels()
plt.title('KAT-7 PSF')
gc2.add_colorbar()
fig.canvas.draw()
fh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-psf.fits') #KAT-7 PSF
psf = fh[0].data[0,0]
fh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-dirty.fits') #Dirty image
dirtyImg = fh[0].data[0,0]
#apply inverse filter
sampFunc = np.fft.fft2(psf)
obsVis = np.fft.fft2(dirtyImg)
trueVis = obsVis / sampFunc
trueImg = np.abs(np.fft.ifft2(trueVis))
fig = plt.figure(figsize=(16, 7))
gc1 = aplpy.FITSFigure(trueImg, figure=fig, subplot=[0.0,0.1,0.35,0.8])
gc1.show_colorscale(cmap='viridis')
gc1.hide_axis_labels()
gc1.hide_tick_labels()
plt.title('Model (Inverse Filtered)')
gc1.add_colorbar()
gc2 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-model.fits', \
figure=fig, subplot=[0.5,0.1,0.35,0.8])
gc2.show_colorscale(vmin=-0.1, vmax=1.0, cmap='viridis')
gc2.hide_axis_labels()
gc2.hide_tick_labels()
plt.title('True Sky Model')
gc2.add_colorbar()
fig.canvas.draw()
#clip the dirty image so that no pixel falls below a minimum threshold (np.clip raises pixels below the threshold up to the threshold, it does not zero them)
thresh = 2.
clippedDirtyImg = np.clip(dirtyImg, thresh, np.max(dirtyImg))
#apply inverse filter
sampFunc = np.fft.fft2(psf)
obsVis = np.fft.fft2(clippedDirtyImg)
trueVis = obsVis / sampFunc
trueImg = np.abs(np.fft.ifft2(trueVis))
fig = plt.figure(figsize=(16, 7))
gc1 = aplpy.FITSFigure(clippedDirtyImg, figure=fig, subplot=[0.0,0.1,0.35,0.8])
gc1.show_colorscale(vmin=thresh, vmax=10.0, cmap='viridis')
gc1.hide_axis_labels()
gc1.hide_tick_labels()
plt.title('Thresholded Dirty Image')
gc1.add_colorbar()
gc2 = aplpy.FITSFigure(trueImg, figure=fig, subplot=[0.5,0.1,0.35,0.8])
gc2.show_colorscale(cmap='viridis')
gc2.hide_axis_labels()
gc2.hide_tick_labels()
plt.title('Model (Inverse Filtered)')
gc2.add_colorbar()
fig.canvas.draw()
Image(filename='../5_Imaging/figures/uvcoverage/KAT-7_6h60s_dec-30_10MHz_10chans.png')
imsize = 50
# create noise background
noise_rms = 0.1
I = noise_rms*(np.random.random([imsize,imsize])-0.5)
# add three point sources with different flux values
I[20,20] += 1
I[32,15] += 1.45
I[30,34] += 1.12
plt.imshow(I, cmap=cm.jet, interpolation='nearest')
plt.colorbar()
plt.title('$I_{true}(l,m)$');
PSFsize = 13
PSF = np.zeros([PSFsize,PSFsize])
PSFmid = (PSFsize - 1)//2 # integer division so PSFmid can be used directly as an array index
PSF[:,PSFmid] = 0.5
PSF[PSFmid,:] = 0.5
d1, d2 = np.diag_indices_from(PSF)
PSF[d1,d2] = 0.5
PSF[d1,d2[::-1]] = 0.5
PSF[PSFmid-2:PSFmid+3,PSFmid-2:PSFmid+3] = 0
PSF[PSFmid-1:PSFmid+2,PSFmid-1:PSFmid+2] = 0.75
PSF[PSFmid,PSFmid] = 1.0
plt.imshow(PSF, cmap = cm.jet, interpolation='nearest')
plt.colorbar()
plt.title('PSF(l,m)');
I_dirty = convolve2d(I,PSF,mode='same')
fig, axes = plt.subplots(figsize=(16,16))
plt.subplot(131)
plt.imshow(I, cmap=cm.jet, interpolation='nearest')
plt.title('$I_{true}(l,m)$')
plt.subplot(132)
plt.imshow(PSF, cmap=cm.jet, interpolation='nearest')
plt.xlim(-15,25)
plt.ylim(-15,25)
plt.title('PSF(l,m)')
plt.subplot(133)
plt.imshow(I_dirty, cmap=cm.jet, interpolation='nearest')
plt.title('Dirty image $I^D(l,m)$');
# ------------------------------------------------
# Step 1: copy the dirty image to a residual image
# ------------------------------------------------
I_residual = copy(I_dirty)
# set up the input parameters
# (you can change these later to see how they impact the algorithm)
gain = 0.2
niter = 100
threshold = 5.*noise_rms
plotmax = np.max(I)
plotmin = np.min(I)
model = []
# plot dirty image to compare to the residual image as we run the algorithm
f, ax = plt.subplots(1,2,figsize=[16,6])
ax[0].set_title('$I^D(l,m)$')
ax[0].imshow(I_dirty, cmap=cm.jet, vmax=plotmax, vmin=plotmin, interpolation='nearest');
for i in range(niter):
    print('Iteration {0}:'.format(i))
    # ------------------------------------------------
    # Step 2. Find the strength and position of the peak in the residual image
    # ------------------------------------------------
    f_max = np.max(I_residual)
    p_max = np.where(I_residual==f_max)
    # ------------------------------------------------
    # Step 3. Subtract gain*f_max*PSF centred on $p_{max}$ from the residual image
    # ------------------------------------------------
    # np.where returns arrays; take the first peak position as plain ints so it can be used in slices
    p_x, p_y = int(p_max[0][0]), int(p_max[1][0])
    I_residual[p_x-PSFmid:p_x+PSFmid+1,p_y-PSFmid:p_y+PSFmid+1] -= gain*f_max*PSF
    print('Peak: {0} Position: {1},{2}'.format(f_max, p_x, p_y))
    # ------------------------------------------------
    # Step 4. Record the peak position and the magnitude subtracted in the model
    # ------------------------------------------------
    model.append([p_x, p_y, gain*f_max])
    # ------------------------------------------------
    # Step 5. Repeat from (2.), unless residual image < threshold
    # ------------------------------------------------
    if np.max(I_residual) < threshold:
        print('Residual map peak is less than threshold {0}'.format(threshold))
        break
    # plot the new residual next to the original image
    ax[1].imshow(I_residual, cmap=cm.jet, vmax=plotmax, vmin=plotmin, interpolation='nearest')
    ax[1].set_title('I_residual(l,m)')
    # show the plot, then get ready for the next plot
    plt.draw()
    clear_output(wait=True)
    time.sleep(0.2)
    display(f)
    ax[1].cla()
plt.close()
plotmax = np.max(I_residual)
plotmin = np.min(I_residual)
fig, axes = plt.subplots(figsize=(16,6))
plt.subplot(121)
plt.title('$I^D(l,m)$')
plt.imshow(I_dirty, cmap=cm.jet, vmax=plotmax, vmin=plotmin, interpolation='nearest')
plt.colorbar()
plt.subplot(122)
plt.title('$I^R(l,m)$')
plt.imshow(I_residual, cmap=cm.jet, vmax=plotmax, vmin=plotmin, interpolation='nearest')
plt.colorbar();
# now sum the accumulated point source model ("clean components") into a model image
print('Clean components:')
print('x y flux')
I_model = np.zeros([imsize,imsize])
for x, y, f in model:
    print(x, y, f)
    I_model[x,y] += f
plotmax = np.max(I)
plotmin = np.min(I)
fig, axes = plt.subplots(figsize=(16,6))
plt.subplot(121)
plt.title('True Sky')
plt.imshow(I, cmap=cm.jet, vmax=plotmax, vmin=plotmin, interpolation='nearest')
plt.colorbar()
plt.subplot(122)
plt.title('Deconvolved Sky')
plt.imshow(I_model, cmap=cm.jet, vmax=plotmax, vmin=plotmin, interpolation='nearest')
plt.colorbar();
# first get just the main lobe of the star shaped PSF
main_lobe = np.zeros([PSFsize,PSFsize])
main_lobe[PSFmid-1:PSFmid+2,PSFmid-1:PSFmid+2] = 0.75
main_lobe[PSFmid,PSFmid] = 1.0
fig, axes = plt.subplots(figsize=(16,6))
plt.subplot(121)
plt.imshow(PSF, cmap=cm.jet, interpolation='nearest')
plt.colorbar()
plt.title('PSF(l,m)');
plt.subplot(122)
plt.imshow(main_lobe, cmap=cm.jet, interpolation='nearest')
plt.colorbar()
plt.title('main lobe(l,m)');
# now fit a symmetric 2D gaussian to the main lobe
import scipy.optimize as opt
def gaussian2dsymmetric(xy, A, x0, y0, sigma):
    # curve_fit passes the independent variable as a single argument, so unpack
    # the (x, y) meshgrid tuple here (Python 3 removed tuple parameters)
    x, y = xy
    gauss2d = A*np.exp(-((x-x0)**2.0 + (y-y0)**2.0)/(2.*sigma**2.0))
    return gauss2d.ravel()
x,y = np.meshgrid(range(PSFsize),range(PSFsize))
popt, pcov = opt.curve_fit(gaussian2dsymmetric,(x, y),main_lobe.ravel(), p0=[1.0,6.5,6.5,2.])
A, x0, y0, sigma = popt
print("Fit results:")
print("A: {0}, x0: {1} y0: {2} sigma: {3}".format(A,x0,y0,sigma))
# use fitted values to create CLEAN beam (or restoring beam)
# normalise by dividing through by A
clean_beam = gaussian2dsymmetric((x,y),A,x0,y0,sigma).reshape(PSFsize,PSFsize)/A
# plot the CLEAN beam
plt.imshow(clean_beam, cmap=cm.jet, interpolation='nearest')
plt.colorbar()
plt.title('CLEAN beam(l,m)');
# ------------------------------------------------
# Step 6: convolve the model with the CLEAN beam
# ------------------------------------------------
I_restored = convolve2d(I_model,clean_beam,mode='same')
# ------------------------------------------------
# Step 7: add the residuals back to the restored image
# ------------------------------------------------
I_restored = I_restored + I_residual
plotmax = np.max(I_dirty)
plotmin = np.min(I_dirty)
fig, axes = plt.subplots(figsize=(16,12))
plt.subplot(221)
plt.imshow(I, cmap=cm.jet, vmax=plotmax, vmin=plotmin, interpolation='nearest')
plt.title('$I_{true}(l,m)$')
plt.colorbar()
plt.subplot(222)
plt.imshow(I_dirty, cmap=cm.jet, vmax=plotmax, vmin=plotmin, interpolation='nearest')
plt.title('$I^D(l,m)$')
plt.colorbar();
plt.subplot(223)
plt.title('I_model(l,m)')
plt.imshow(I_model, cmap=cm.jet, vmax=plotmax, vmin=plotmin, interpolation='nearest')
plt.colorbar()
plt.subplot(224)
plt.imshow(I_restored, cmap=cm.jet, vmax=plotmax, vmin=plotmin, interpolation='nearest')
plt.title('I_restored(l,m)')
plt.colorbar();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import section specific modules
Step2: 6.2 Iterative Deconvolution with Point Sources (CLEAN)
Step3: Left
Step4: Left
Step5: Left
Step6: Figure
Step7: Now set up a fake PSF. We will just make up a star shape to be the PSF
Step8: Now we convolve our true sky image $I_{true}(l,m)$ with the PSF, to get the dirty image $I^D(l,m)$
Step9: Let's plot all three together to see what we have just done
Step10: 6.2.3 CLEAN
Step11: Let us plot the original dirty image and the final residual image scaled to see the residuals
Step12: After a few iterations the residual image begins to look noise-like, with some PSF structure remaining. As the remaining pixel values approach the noise level we need to halt deconvolution. This is a tricky decision, and many issues occur when we 'over' or 'under' deconvolve an image. The resulting artefacts are further exacerbated by poor calibration, but that will come later. For now, halting an iterative deconvolution requires either setting a minimum flux threshold, which halts when all pixels in the residual image are at or below the threshold, or keeping track of the number of iterations and halting after a specified number of cycles; a minimal sketch of both criteria follows.
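A minimal sketch of both halting criteria (a hypothetical helper for illustration only; I_residual, threshold and niter are the notebook's own variables):
def should_halt(I_residual, threshold, iteration, niter):
    # flux-threshold criterion: the residual peak has dropped to the noise level
    if np.max(I_residual) < threshold:
        return True
    # iteration-count criterion: the iteration budget is exhausted
    return iteration >= niter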
Step13: As we can see, many of the components are at or near the same pixel values. We should expect this as we are not removing all the flux from a pixel during an iteration, only a portion of the flux which is determined by the gain $g$ scale factor. These are not separate sources, but in the sky model they are presented as such. An additional step of source finding ($\S$ 6.5 ➞) needs to be applied to the model in order to combine these sources.
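A minimal sketch of that combining step (this is not the $\S$ 6.5 source finder, just a sum of components that share a pixel; model is the notebook's list of [x, y, flux] entries):
from collections import defaultdict
combined = defaultdict(float)
for x, y, f in model:
    combined[(x, y)] += f  # merge clean components that landed on the same pixel
print('{0} components reduced to {1} distinct pixels'.format(len(model), len(combined)))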
Step14: The last steps of CLEAN are somewhat optional; they are performed to produce a 'restored' image, which can be thought of as an idealized image produced by an interferometric array with the same resolution as the observing array but with all spatial modes (up to the maximum resolution) fully sampled. That is, a restoring beam, usually taken to be a 2-D Gaussian of the same width as the main lobe of the PSF, is convolved with the point-source sky model. This is done to reintroduce the resolution limits of the array without including the PSF sidelobe structure. The restored image is the 'pretty' image we like to show off.
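In symbols, the restored image built in Steps 6 and 7 of the code is $I^{restored}(l,m) = I^{model}(l,m) * B_{clean}(l,m) + I^{R}(l,m)$, where $B_{clean}$ is the Gaussian restoring beam fitted to the PSF main lobe and $*$ denotes convolution.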
Step15: And now we can convolve the sky model with the restoring beam and add back in the residuals to produce the final restored image
Step16: Now let us plot the results of the CLEAN
|
8,701
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import time
import helper
source_path = 'data/letters_source.txt'
target_path = 'data/letters_target.txt'
source_sentences = helper.load_data(source_path)
target_sentences = helper.load_data(target_path)
source_sentences[:50].split('\n')
target_sentences[:50].split('\n')
def extract_character_vocab(data):
    special_words = ['<PAD>', '<UNK>', '<GO>', '<EOS>']
    set_words = set([character for line in data.split('\n') for character in line])
    int_to_vocab = {word_i: word for word_i, word in enumerate(special_words + list(set_words))}
    vocab_to_int = {word: word_i for word_i, word in int_to_vocab.items()}
    return int_to_vocab, vocab_to_int
# Build int2letter and letter2int dicts
source_int_to_letter, source_letter_to_int = extract_character_vocab(source_sentences)
target_int_to_letter, target_letter_to_int = extract_character_vocab(target_sentences)
# Convert characters to ids
source_letter_ids = [[source_letter_to_int.get(letter, source_letter_to_int['<UNK>']) for letter in line] for line in source_sentences.split('\n')]
target_letter_ids = [[target_letter_to_int.get(letter, target_letter_to_int['<UNK>']) for letter in line] + [target_letter_to_int['<EOS>']] for line in target_sentences.split('\n')]
print("Example source sequence")
print(source_letter_ids[:3])
print("\n")
print("Example target sequence")
print(target_letter_ids[:3])
def pad_id_sequences(source_ids, source_letter_to_int, target_ids, target_letter_to_int, sequence_length):
    new_source_ids = [sentence + [source_letter_to_int['<PAD>']] * (sequence_length - len(sentence)) \
                      for sentence in source_ids]
    new_target_ids = [sentence + [target_letter_to_int['<PAD>']] * (sequence_length - len(sentence)) \
                      for sentence in target_ids]
    return new_source_ids, new_target_ids

# Determine the longest sequence in the dataset (this line is reconstructed from the
# description; the original cell assumed sequence_length was already defined)
sequence_length = max(len(sentence) for sentence in source_letter_ids + target_letter_ids)
# Pad all sequences up to sequence length
source_ids, target_ids = pad_id_sequences(source_letter_ids, source_letter_to_int,
                                          target_letter_ids, target_letter_to_int, sequence_length)
print("Sequence Length")
print(sequence_length)
print("\n")
print("Input sequence example")
print(source_ids[:3])
print("\n")
print("Target sequence example")
print(target_ids[:3])
from distutils.version import LooseVersion
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Number of Epochs
epochs = 60
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 50
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 15
decoding_embedding_size = 15
# Learning Rate
learning_rate = 0.001
def get_model_inputs():
    input_data = tf.placeholder(tf.int32, [None, None], name='input')
    targets = tf.placeholder(tf.int32, [None, None], name='targets')
    lr = tf.placeholder(tf.float32, name='learning_rate')
    target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length')
    max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len')
    source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length')
    return input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length
def encoding_layer(input_data, rnn_size, num_layers,
                   source_sequence_length, source_vocab_size,
                   encoding_embedding_size):
    # Encoder embedding
    enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size)
    # RNN cell
    def make_cell(rnn_size):
        enc_cell = tf.contrib.rnn.LSTMCell(rnn_size,
                                           initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
        return enc_cell
    enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
    enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32)
    return enc_output, enc_state
# Process the input we'll feed to the decoder
def process_decoder_input(target_data, vocab_to_int, batch_size):
    '''Remove the last word id from each batch and concat the <GO> to the beginning of each batch'''
    ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
    dec_input = tf.concat([tf.fill([batch_size, 1], vocab_to_int['<GO>']), ending], 1)
    return dec_input
def decoding_layer(target_letter_to_int, decoding_embedding_size, num_layers, rnn_size,
                   target_sequence_length, max_target_sequence_length, enc_state, dec_input):
    # 1. Decoder Embedding
    target_vocab_size = len(target_letter_to_int)
    dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
    dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
    # 2. Construct the decoder cell
    def make_cell(rnn_size):
        dec_cell = tf.contrib.rnn.LSTMCell(rnn_size,
                                           initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
        return dec_cell
    dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
    # 3. Dense layer to translate the decoder's output at each time
    # step into a choice from the target vocabulary
    output_layer = Dense(target_vocab_size,
                         kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))
    # 4. Set up a training decoder and an inference decoder
    # Training Decoder
    with tf.variable_scope("decode"):
        # Helper for the training process. Used by BasicDecoder to read inputs.
        training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
                                                            sequence_length=target_sequence_length,
                                                            time_major=False)
        # Basic decoder
        training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
                                                           training_helper,
                                                           enc_state,
                                                           output_layer)
        # Perform dynamic decoding using the decoder
        training_decoder_output = tf.contrib.seq2seq.dynamic_decode(training_decoder,
                                                                    impute_finished=True,
                                                                    maximum_iterations=max_target_sequence_length)[0]
    # 5. Inference Decoder
    # Reuses the same parameters trained by the training process
    with tf.variable_scope("decode", reuse=True):
        start_tokens = tf.tile(tf.constant([target_letter_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens')
        # Helper for the inference process.
        inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,
                                                                    start_tokens,
                                                                    target_letter_to_int['<EOS>'])
        # Basic decoder
        inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
                                                            inference_helper,
                                                            enc_state,
                                                            output_layer)
        # Perform dynamic decoding using the decoder
        inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
                                                                     impute_finished=True,
                                                                     maximum_iterations=max_target_sequence_length)[0]
    return training_decoder_output, inference_decoder_output
def seq2seq_model(input_data, targets, lr, target_sequence_length,
                  max_target_sequence_length, source_sequence_length,
                  source_vocab_size, target_vocab_size,
                  enc_embedding_size, dec_embedding_size,
                  rnn_size, num_layers):
    # Pass the input data through the encoder. We'll ignore the encoder output, but use the state
    _, enc_state = encoding_layer(input_data,
                                  rnn_size,
                                  num_layers,
                                  source_sequence_length,
                                  source_vocab_size,
                                  encoding_embedding_size)
    # Prepare the target sequences we'll feed to the decoder in training mode
    dec_input = process_decoder_input(targets, target_letter_to_int, batch_size)
    # Pass encoder state and decoder inputs to the decoders
    training_decoder_output, inference_decoder_output = decoding_layer(target_letter_to_int,
                                                                       decoding_embedding_size,
                                                                       num_layers,
                                                                       rnn_size,
                                                                       target_sequence_length,
                                                                       max_target_sequence_length,
                                                                       enc_state,
                                                                       dec_input)
    return training_decoder_output, inference_decoder_output
# Build the graph
train_graph = tf.Graph()
# Set the graph to default to ensure that it is ready for training
with train_graph.as_default():
    # Load the model inputs
    input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length = get_model_inputs()
    # Create the training and inference logits
    training_decoder_output, inference_decoder_output = seq2seq_model(input_data,
                                                                      targets,
                                                                      lr,
                                                                      target_sequence_length,
                                                                      max_target_sequence_length,
                                                                      source_sequence_length,
                                                                      len(source_letter_to_int),
                                                                      len(target_letter_to_int),
                                                                      encoding_embedding_size,
                                                                      decoding_embedding_size,
                                                                      rnn_size,
                                                                      num_layers)
    # Create tensors for the training logits and inference logits
    training_logits = tf.identity(training_decoder_output.rnn_output, 'logits')
    inference_logits = tf.identity(inference_decoder_output.sample_id, name='predictions')
    # Create the weights for sequence_loss
    masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
    with tf.name_scope("optimization"):
        # Loss function
        cost = tf.contrib.seq2seq.sequence_loss(
            training_logits,
            targets,
            masks)
        # Optimizer
        optimizer = tf.train.AdamOptimizer(lr)
        # Gradient Clipping
        gradients = optimizer.compute_gradients(cost)
        capped_gradients = [(tf.clip_by_value(grad, -5., 5.), var) for grad, var in gradients if grad is not None]
        train_op = optimizer.apply_gradients(capped_gradients)
def pad_sentence_batch(sentence_batch, pad_int):
    '''Pad sentences with <PAD> so that each sentence of a batch has the same length'''
    max_sentence = max([len(sentence) for sentence in sentence_batch])
    return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]

def get_batches(targets, sources, batch_size, source_pad_int, target_pad_int):
    '''Batch targets, sources, and the lengths of their sentences together'''
    for batch_i in range(0, len(sources)//batch_size):
        start_i = batch_i * batch_size
        sources_batch = sources[start_i:start_i + batch_size]
        targets_batch = targets[start_i:start_i + batch_size]
        pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
        pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
        # Need the lengths for the _lengths parameters
        pad_targets_lengths = []
        for target in pad_targets_batch:
            pad_targets_lengths.append(len(target))
        pad_source_lengths = []
        for source in pad_sources_batch:
            pad_source_lengths.append(len(source))
        yield pad_targets_batch, pad_sources_batch, pad_targets_lengths, pad_source_lengths
# Split data to training and validation sets
train_source = source_letter_ids[batch_size:]
train_target = target_letter_ids[batch_size:]
valid_source = source_letter_ids[:batch_size]
valid_target = target_letter_ids[:batch_size]
(valid_targets_batch, valid_sources_batch, valid_targets_lengths, valid_sources_lengths) = next(get_batches(valid_target, valid_source, batch_size,
source_letter_to_int['<PAD>'],
target_letter_to_int['<PAD>']))
display_step = 20 # Check training loss after every 20 batches
checkpoint = "best_model.ckpt"
with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())
    for epoch_i in range(1, epochs+1):
        for batch_i, (targets_batch, sources_batch, targets_lengths, sources_lengths) in enumerate(
                get_batches(train_target, train_source, batch_size,
                            source_letter_to_int['<PAD>'],
                            target_letter_to_int['<PAD>'])):
            # Training step
            _, loss = sess.run(
                [train_op, cost],
                {input_data: sources_batch,
                 targets: targets_batch,
                 lr: learning_rate,
                 target_sequence_length: targets_lengths,
                 source_sequence_length: sources_lengths})
            # Debug message updating us on the status of the training
            if batch_i % display_step == 0 and batch_i > 0:
                # Calculate validation cost
                validation_loss = sess.run(
                    [cost],
                    {input_data: valid_sources_batch,
                     targets: valid_targets_batch,
                     lr: learning_rate,
                     target_sequence_length: valid_targets_lengths,
                     source_sequence_length: valid_sources_lengths})
                print('Epoch {:>3}/{} Batch {:>4}/{} - Loss: {:>6.3f} - Validation loss: {:>6.3f}'
                      .format(epoch_i,
                              epochs,
                              batch_i,
                              len(train_source) // batch_size,
                              loss,
                              validation_loss[0]))
    # Save Model
    saver = tf.train.Saver()
    saver.save(sess, checkpoint)
    print('Model Trained and Saved')
def source_to_seq(text):
    '''Prepare the text for the model'''
    sequence_length = 7
    return [source_letter_to_int.get(word, source_letter_to_int['<UNK>']) for word in text] + [source_letter_to_int['<PAD>']]*(sequence_length-len(text))
input_sentence = 'hello'
text = source_to_seq(input_sentence)
checkpoint = "./best_model.ckpt"
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(checkpoint + '.meta')
    loader.restore(sess, checkpoint)
    input_data = loaded_graph.get_tensor_by_name('input:0')
    logits = loaded_graph.get_tensor_by_name('predictions:0')
    source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
    target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
    # Multiply by batch_size to match the model's input parameters
    answer_logits = sess.run(logits, {input_data: [text]*batch_size,
                                      target_sequence_length: [len(text)]*batch_size,
                                      source_sequence_length: [len(text)]*batch_size})[0]
pad = source_letter_to_int["<PAD>"]
print('Original Text:', input_sentence)
print('\nSource')
print(' Word Ids: {}'.format([i for i in text]))
print(' Input Words: {}'.format(" ".join([source_int_to_letter[i] for i in text])))
print('\nTarget')
print(' Word Ids: {}'.format([i for i in answer_logits if i != pad]))
print(' Response Words: {}'.format(" ".join([target_int_to_letter[i] for i in answer_logits if i != pad])))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's start by examining the current state of the dataset. source_sentences contains the entire input sequence file as text delimited by newline symbols.
Step2: target_sentences contains the entire output sequence file as text delimited by newline symbols. Each line corresponds to the same line in source_sentences, and holds the sorted characters of that line.
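A quick check of that relationship (a sketch; it assumes the two files have been loaded as above):
for s, t in zip(source_sentences.split('\n')[:3], target_sentences.split('\n')[:3]):
    # each target line should be the source line's characters in sorted order
    print(s, '->', t, '| sorted match:', ''.join(sorted(s)) == t)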
Step3: Preprocess
Step4: The last step in the preprocessing stage is to determine the longest sequence size in the dataset we'll be using, then pad all the sequences to that length.
Step5: This is the final shape we need them to be in. We can now proceed to building the model.
Step6: Hyperparameters
Step7: Input
Step8: Sequence to Sequence Model
Step9: 2.2 Decoder
Step10: Set up the decoder components
Step11: 2.3 Seq2seq model
Step12: Model outputs training_decoder_output and inference_decoder_output both contain a 'rnn_output' logits tensor that looks like this
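(The illustration this step referred to is not included here; for reference, in TF 1.x seq2seq the training decoder's rnn_output has shape [batch_size, time_steps, target_vocab_size], while sample_id has shape [batch_size, time_steps].)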
Step15: Get Batches
Step16: Train
Step17: Prediction
|
8,702
|
<ASSISTANT_TASK:>
Python Code:
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
%%bash
ls *.csv
import shutil
import numpy as np
import tensorflow as tf
print(tf.__version__)
# Determine CSV, label, and key columns
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks,key'.split(',')
LABEL_COLUMN = 'weight_pounds'
KEY_COLUMN = 'key'
# Set default values for each CSV column. Treat is_male and plurality as strings.
DEFAULTS = [[0.0], ['null'], [0.0], ['null'], [0.0], ['nokey']]
def features_and_labels(row_data):
    for unwanted_col in ['key']:
        row_data.pop(unwanted_col)
    label = row_data.pop(LABEL_COLUMN)
    return row_data, label # features, label
# load the training data
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
    dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
               .map(features_and_labels) # features, label
               )
    if mode == tf.estimator.ModeKeys.TRAIN:
        dataset = dataset.shuffle(1000).repeat()
    dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE
    return dataset
## Build a Keras wide-and-deep model using its Functional API
def rmse(y_true, y_pred):
    return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))

# Helper function to handle categorical columns
def categorical_fc(name, values):
    orig = tf.feature_column.categorical_column_with_vocabulary_list(name, values)
    wrapped = tf.feature_column.indicator_column(orig)
    return orig, wrapped
def build_wd_model(dnn_hidden_units=[64, 32], nembeds=3):
    # input layer
    deep_inputs = {
        colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
        for colname in ['mother_age', 'gestation_weeks']
    }
    wide_inputs = {
        colname : tf.keras.layers.Input(name=colname, shape=(), dtype='string')
        for colname in ['is_male', 'plurality']
    }
    inputs = {**wide_inputs, **deep_inputs}
    # feature columns from inputs
    deep_fc = {
        colname : tf.feature_column.numeric_column(colname)
        for colname in ['mother_age', 'gestation_weeks']
    }
    wide_fc = {}
    is_male, wide_fc['is_male'] = categorical_fc('is_male', ['True', 'False', 'Unknown'])
    plurality, wide_fc['plurality'] = categorical_fc('plurality',
                                                     ['Single(1)', 'Twins(2)', 'Triplets(3)',
                                                      'Quadruplets(4)', 'Quintuplets(5)', 'Multiple(2+)'])
    # Bucketize the float fields. This makes them wide.
    # (The original cell left these as TODOs; the bucket boundaries below are one
    # plausible completion, not necessarily the lab's official solution.)
    # https://www.tensorflow.org/api_docs/python/tf/feature_column/bucketized_column
    age_buckets = tf.feature_column.bucketized_column(deep_fc['mother_age'],
                                                      boundaries=np.arange(15, 45, 1).tolist())
    wide_fc['age_buckets'] = tf.feature_column.indicator_column(age_buckets)
    gestation_buckets = tf.feature_column.bucketized_column(deep_fc['gestation_weeks'],
                                                            boundaries=np.arange(17, 47, 1).tolist())
    wide_fc['gestation_buckets'] = tf.feature_column.indicator_column(gestation_buckets)
    # cross all the wide columns. We have to do the crossing before we one-hot encode
    crossed = tf.feature_column.crossed_column(
        [is_male, plurality, age_buckets, gestation_buckets], hash_bucket_size=20000)
    deep_fc['crossed_embeds'] = tf.feature_column.embedding_column(crossed, nembeds)
    # the constructor for DenseFeatures takes a list of feature columns
    # The Functional API in Keras requires that you specify: LayerConstructor()(inputs)
    # (also originally TODOs; completed here with the wide/deep column lists built above)
    wide_inputs = tf.keras.layers.DenseFeatures(wide_fc.values(), name='wide_inputs')(inputs)
    deep_inputs = tf.keras.layers.DenseFeatures(deep_fc.values(), name='deep_inputs')(inputs)
    # hidden layers for the deep side
    layers = [int(x) for x in dnn_hidden_units]
    deep = deep_inputs
    for layerno, numnodes in enumerate(layers):
        deep = tf.keras.layers.Dense(numnodes, activation='relu', name='dnn_{}'.format(layerno+1))(deep)
    deep_out = deep
    # linear model for the wide side
    wide_out = tf.keras.layers.Dense(10, activation='relu', name='linear')(wide_inputs)
    # concatenate the two sides
    both = tf.keras.layers.concatenate([deep_out, wide_out], name='both')
    # final output is a linear activation because this is regression
    output = tf.keras.layers.Dense(1, activation='linear', name='weight')(both)
    model = tf.keras.models.Model(inputs, output)
    model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
    return model
print("Here is our Wide-and-Deep architecture so far:\n")
model = build_wd_model()
print(model.summary())
tf.keras.utils.plot_model(model, 'wd_model.png', show_shapes=False, rankdir='LR')
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, so it will wrap around
NUM_EVALS = 5 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample, but not so much that it slows down
trainds = load_dataset('train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('eval*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch)
# plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(['loss', 'rmse']):
    ax = fig.add_subplot(nrows, ncols, idx+1)
    plt.plot(history.history[key])
    plt.plot(history.history['val_{}'.format(key)])
    plt.title('model {}'.format(key))
    plt.ylabel(key)
    plt.xlabel('epoch')
    plt.legend(['train', 'validation'], loc='upper left');
import shutil, os, datetime
OUTPUT_DIR = 'babyweight_trained'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR, datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
tf.saved_model.save(model, EXPORT_PATH) # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))
!ls $EXPORT_PATH
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Keras model
Step2: Next, define the feature columns. mother_age and gestation_weeks should be numeric.
Step3: We can visualize the DNN using the Keras plot_model utility.
Step4: Train and evaluate
Step5: Visualize loss curve
Step6: Save the model
|
8,703
|
<ASSISTANT_TASK:>
Python Code:
! #complete
! #complete
%%sh
#complete
! #complete
%cd -0 #complete
!mkdir #complete only if you didn't do 0c, or want a different name for your code directory
%%file <yourdirectory>/code.py
def do_something():
    # complete
    print(something)# this will make it much easier in future problems to see that something is actually happening
%run <yourdirectory>/code.py # complete
do_something()
%cd # complete
!git init
!git add code.py
!git commit -m #complete
!git remote add <yourgithubusername> <the url github shows you on the repo web page> #complete
!git push <yourgithubusername> master -u
%%file README.md
# complete
!git #complete
!git #complete
# Don't forget to do this cd or something like it... otherwise you'll clone *inside* your repo
%cd -0
!git clone <url from github>#complete
%cd <reponame>#complete
!git branch <name-of-branch>#complete
!git add <files modified>#complete
!git commit -m ""#complete
!git push origin <name-of-branch>#complete
!git #complete
!git remote add <neighbors-username> <url-from-neighbors-github-repo> #complete
!git fetch <neighbors-username> #complete
!git branch --set-upstream-to=<neighbors-username>/master master
!git checkout master
!git pull
!mkdir <yourpkgname>#complete
!git mv code.py <yourpkgname>#complete
#The "touch" unix command simply creates an empty file if there isn't one already.
#You could also use an editor to create an empty file if you prefer.
!touch <yourpkgname>/__init__.py#complete
from <yourpkgname> import code#complete
#if your code.py has a function called `do_something` as in the example above, you can now run it like:
code.do_something()
%%file <yourpkgname>/__init__.py
#complete
import <yourpkgname>#complete
<yourpkgname>.do_something()#complete
from importlib import reload
reload(<yourpkgname>)#complete
<yourpkgname>.do_something()#complete
%%file setup.py
#!/usr/bin/env python
from distutils.core import setup
setup(name='<yourpkgname>',
      version='0.1dev',
      description='<a description>',
      author='<your name>',
      author_email='<youremail>',
      packages=['<yourpkgname>'],
      ) #complete
!python setup.py build
%%sh
cd build/lib.X-Y-Z #complete
python -c "import <yourpkgname>;<yourpkgname>.do_something()" #complete
%%sh
conda create -n test_<yourpkgname> anaconda #complete
source activate test_<yourpkgname> #complete
python setup.py install
%%sh
cd $HOME
source activate test_<yourpkgname> #complete
python -c "import <yourpkgname>;<yourpkgname>.do_something()" #complete
!git #complete
%%file -a ~/.pypirc
[distutils]
index-servers = pypi
[pypi]
repository = https://test.pypi.org/legacy/
username = <your user name goes here>
password = <your password goes here>
!python setup.py sdist
!twine upload dist/<yourpackage>-<version>
%%sh
conda create -n test_pypi_<yourpkgname> anaconda #complete
source activate test_pypi_<yourpkgname> #complete
pip install -i https://testpypi.python.org/pypi <yourpkgname>
%%sh
cd $HOME
source activate test_pypi_<yourpkgname> #complete
python -c "import <yourpkgname>;<yourpkgname>.do_something()" #complete
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 0b
Step2: 0c
Step3: 0d
Step4: Final note
Step5: If you want to test-run your code
Step6: 1b
Step7: 1c
Step8: The -u is a convenience that means from then on you can use just git push and git pull to send your code to and from github.
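For example, once the upstream is set you can simply run:
!git push
!git pull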
Step9: Now add it to the repository via git commit, and push up to github...
Step10: 1f
Step11: Problem 2
Step12: 2c
Step13: 2c
Step14: and push it up (to a branch on your github fork).
Step15: 2d
Step16: Hopefully they are now satisfied and are willing to hit the merge button.
Step17: Now if you look at the local repo, it should include your changes.
Step18: 3b
Step19: 3c
Step20: Now the following should work.
Step21: BUT you will probably get an error here. That's because Python is smart about imports
Step22: 3d
Step23: 3e
Step24: To test that it built successfully, the easiest thing to do is cd into the build/lib.X-Y-Z directory ("X-Y-Z" here is OS and machine-specific). Then you should be able to import <yourpkgname>. It's usually best to do this as a completely independent process in Python. That way you can be sure you aren't accidentally using an old import as we saw above.
Step25: 3f
Step26: Now we can try running the package from anywhere (not just the source code directory), as long as we're in the same environment that we installed the package in.
Step27: 3g
Step28: Problem 4
Step29: 4b
Step30: Verify that there is a <yourpkg>-<version>.tar.gz file in the dist directory. It should have all of the source code necessary for your package.
Step31: 4d
|
8,704
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
from matplotlib.pyplot import plot
from matplotlib.pyplot import show
# First read in the two stocks' closing prices and compute the returns
bhp_cp = np.loadtxt('BHP.csv', delimiter=',', usecols=(6,), unpack=True)
vale_cp = np.loadtxt('VALE.csv', delimiter=',', usecols=(6,), unpack=True)
bhp_returns = np.diff(bhp_cp) / bhp_cp[:-1]
vale_returns = np.diff(vale_cp) / vale_cp[:-1]
covariance = np.cov(bhp_returns, vale_returns)
print('Covariance:\n', covariance)
# Inspect the diagonal elements of the covariance matrix
print('Covariance diagonal:\n', covariance.diagonal())
# Compute the trace of the matrix, i.e. the sum of the diagonal
print('Covariance trace:\n', covariance.trace())
# Compute the correlation coefficient: covariance divided by the product of the standard deviations
print('Correlation coefficient:\n', covariance / (bhp_returns.std() * vale_returns.std()))
# Using corrcoef is more accurate
print('Correlation coefficient:\n', np.corrcoef(bhp_returns, vale_returns))
difference = bhp_cp - vale_cp
avg = np.mean(difference)
dev = np.std(difference)
# Check whether the last closing prices are still in sync
print("Out of sync : ", np.abs(difference[-1] - avg) > 2*dev)
# Plot the return curves
t = np.arange(len(bhp_returns))
plot(t, bhp_returns, lw=1)
plot(t, vale_returns, lw=2)
show()
# Fit a cubic polynomial to the difference between the two stocks' closing prices
t = np.arange(len(bhp_cp))
poly = np.polyfit(t, bhp_cp-vale_cp, 3)
print("Polynomial fit\n", poly)
# Use the fitted polynomial object to extrapolate the next value
print("Next value: ", np.polyval(poly, t[-1]+1))
print("Roots: ", np.roots(poly))
# Extrema occur where the derivative is zero
der = np.polyder(poly)
print("Derivative:\n", der)
# These are the coefficients of the polynomial's derivative;
# the roots of the derivative are the extrema of the original polynomial
print("Extrema: ", np.roots(der))
# Check the result by locating the maximum and minimum with argmax and argmin
vals = np.polyval(poly, t)
print("Maximum index: ", np.argmax(vals))
print("Minimum index: ", np.argmin(vals))
plot(t, bhp_cp-vale_cp)
plot(t, vals)
show()
cp, volume = np.loadtxt('BHP.csv', delimiter=',', usecols=(6,7), unpack=True)
change = np.diff(cp)
print("Change:", change)
signs = np.sign(change)
print("Signs:\n", signs)
pieces = np.piecewise(change, [change<0, change>0], [-1,1])
print("Pieces:\n", pieces)
# Check that the two outputs agree
print("Arrays equal?", np.array_equal(signs, pieces))
# The OBV calculation depends on the previous day's closing price
print("On balance volume: \n", volume[1:]*signs)
# Read in the data
# op is opening price, hp is the highest price
# lp is the lowest price, cp is closing price
op, hp, lp, cp = np.loadtxt('BHP.csv', delimiter=',', usecols=(3,4,5,6), unpack=True)
def calc_profit(op, high, low, close):
    # Buy at the opening price (we ignore how many shares are bought here)
    buy = op
    if low < buy < high:
        return (close-buy) / buy
    else:
        return 0
# Vectorize the function so we can avoid an explicit loop
func = np.vectorize(calc_profit)
profits = func(op, hp, lp, cp)
print('Profits:\n', profits)
# Select the trading days with non-zero profit and compute the average
real_trades = profits[profits != 0]
print('Number of trades:\n', len(real_trades), round(100.0 * len(real_trades)/len(cp), 2), "%")
print("Average profit/loss % :", round(np.mean(real_trades) * 100, 2))
# Select the winning trading days and compute the average profit
winning_trades = profits[profits > 0]
print("Number of winning trades", len(winning_trades), round(100.0*len(winning_trades)/len(cp), 2), "%")
print("Average profit %", round(np.mean(winning_trades) * 100, 2))
# Select the losing trading days and compute the average loss
losing_trades = profits[profits < 0]
print("Number of losing trades", len(losing_trades), round(100.0*len(losing_trades)/len(cp), 2), "%")
print("Average loss %", round(np.mean(losing_trades) * 100, 2))
# Call the hanning function to compute the weights, generating a window of length N
# here N is 8
N = 8
weights = np.hanning(N)
print("Weights:\n", weights)
# Read in the two stocks' closing prices again and compute the returns
bhp_cp = np.loadtxt('BHP.csv', delimiter=',', usecols=(6,), unpack=True)
vale_cp = np.loadtxt('VALE.csv', delimiter=',', usecols=(6,), unpack=True)
bhp_returns = np.diff(bhp_cp) / bhp_cp[:-1]
vale_returns = np.diff(vale_cp) / vale_cp[:-1]
# The convolve function performs discrete linear convolution
smooth_bhp = np.convolve(weights/weights.sum(), bhp_returns)[N-1 : -N+1]
smooth_vale = np.convolve(weights/weights.sum(), vale_returns)[N-1 : -N+1]
from matplotlib.pyplot import legend
t = np.arange(N-1, len(bhp_returns))
plot(t, bhp_returns[N-1:], lw=1.0, label='bhp returns')
plot(t, smooth_bhp, lw=2.0, label='smooth bhp')
plot(t, vale_returns[N-1:], lw=1.0, label='vale returns')
plot(t, smooth_vale, lw=2.0, label='smooth vale')
legend(loc='best')
show()
import matplotlib.pyplot as plt
# Fit polynomials to the smoothed data
K = 5
t = np.arange(N-1, len(bhp_returns))
poly_bhp = np.polyfit(t, smooth_bhp, K)
poly_vale = np.polyfit(t, smooth_vale, K)
fig = plt.figure()
ax1 = fig.add_subplot(211)
ax1.plot(t, smooth_bhp, label="smooth bhp")
poly_bhp_value = np.polyval(poly_bhp, t)
ax1.plot(t, poly_bhp_value, label='poly bhp')
plt.legend()
ax2 = fig.add_subplot(212)
ax2.plot(t, smooth_vale, label="smooth vale")
poly_vale_value = np.polyval(poly_vale, t)
ax2.plot(t, poly_vale_value, label='poly vale')
plt.legend()
show()
# Get the x coordinates of the crossing points
# by taking the difference of the two polynomials and finding its roots
poly_sub = np.polysub(poly_bhp, poly_vale)
xpoints = np.roots(poly_sub)
print("Intersection points:", xpoints)
# Check which of the roots are real numbers
# select picks out the real ones
reals = np.isreal(xpoints)
print("Real number?", reals)
xpoints = np.select([reals], [xpoints])
xpoints = xpoints.real
print("Real intersection points:", xpoints)
# Remove the zero elements
# trim_zeros strips leading and trailing zeros from a one-dimensional array
print("Sans 0s", np.trim_zeros(xpoints))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Stock correlation analysis
Step2: Covariance describes the tendency of two variables to vary together; it is essentially the correlation coefficient before normalization.
Step3: Use the correlation coefficient to measure how strongly the two stocks are related. The correlation coefficient ranges from -1 to 1, and the correlation of a data set with itself is 1. Use the corrcoef function to compute it.
Step4: The correlation coefficient matrix is symmetric about the diagonal: the correlation of BHP with VALE equals that of VALE with BHP. A coefficient of 0.68 suggests the correlation is not particularly strong.
Step5: This shows the last closing prices are out of sync, so we cannot trade for now.
Step6: 2. Polynomial fitting
Step7: Ideally, the smaller the difference between the BHP and VALE closing prices, the better. In the limit, the difference could reach zero at some point. Use the roots function to find when the fitted polynomial reaches zero.
Step8: Finding the extrema
Step9: Plot the fitted curve
Step10: 3. Computing on-balance volume (OBV)
Step11: Use NumPy's sign function to return the sign of each element.
Step12: Use NumPy's piecewise function to get the sign of the array elements: piecewise evaluates an expression piecewise according to the given conditions.
Step13: 4. Simulating the trading process
Step14: We try to buy the stock at a price slightly below the opening price. If that price is not within the day's range, the buy attempt fails with no profit or loss and we return 0. Otherwise we sell at the day's closing price, and the profit is the difference between the buy and sell prices.
Step15: 5. Data smoothing
Step16: The curves in the figure cross; these crossing points may be turning points in the price trend, or at least indicate that the price relationship between BHP and VALE has changed. Such turning points may occur regularly, and we can use them to predict future price movements.
|
8,705
|
<ASSISTANT_TASK:>
Python Code:
print ("hello world!")
port = 22
message = "SSH Server v2"
print ("Search for {} in port {}".format(message,port))
message = "SSH Server v2"
type(message)
port = 22
type(port)
portlist = [22,21,80,5000]
type(portlist)
portOpen = True
type(portOpen)
message= "SSH Server v2"
print (message.upper())
print (message.lower())
print (message.split('r'))
print(message.replace('Server','Client'))
print (message.find('SSH'))
portList = []
portList.append(22)
portList.append(80)
portList.append(5000)
portList.append(25)
print (portList)
portList.sort()
print (portList)
print ("{} portas ativas encontradas!".format(len(portList)))
services = {'ftp':21, 'ssh':22, 'smtp': 25, 'http':80, 'https':443}
services.keys()
services.items()
services.get('ssh')
services['ssh']
print (" Falha encontrada no SSH na porta {}".format(services['ssh']))
del services['ftp']
# Talk about range
# Talk about break and continue
def ip_example(ip, port):
    return 'IP : {} and PORT {}'.format(ip, port)
print (ip_example('192.168.0.1', 80))
print (2/0)
try:
    print (2/0)
except ZeroDivisionError:
    print ("Division by zero, -100 XP")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Variables
Step2: In Python, a variable's type does not need to be declared explicitly; the interpreter determines the variable's type and the value it occupies in memory
Step3: Strings
Step4: Lists
Step5: Dictionaries
Step6: Basic Structures
Step7:
Step8: Functions
Step9: Exception Handling
Step10: try
|
8,706
|
<ASSISTANT_TASK:>
Python Code:
!gsutil cp -r $MODEL_PATH/* gs://$BUCKET/taxifare/model/
%%writefile predictor.py
import tensorflow as tf
from google.cloud import bigquery
PROJECT_ID = 'will_be_replaced'
class TaxifarePredictor(object):
    def __init__(self, predict_fn):
        self.predict_fn = predict_fn

    def predict(self, instances, **kwargs):
        bq = bigquery.Client(PROJECT_ID)
        query_string = """
        SELECT
          *
        FROM
          `taxifare.traffic_realtime`
        ORDER BY
          time DESC
        LIMIT 1
        """
        trips = bq.query(query_string).to_dataframe()['trips_last_5min'][0]
        instances['trips_last_5min'] = [trips for _ in range(len(list(instances.items())[0][1]))]
        predictions = self.predict_fn(instances)
        return predictions['predictions'].tolist() # convert to list so it is JSON serializable (requirement)

    @classmethod
    def from_path(cls, model_dir):
        predict_fn = tf.contrib.predictor.from_saved_model(model_dir, 'predict')
        return cls(predict_fn)
!sed -i -e 's/will_be_replaced/{PROJECT_ID}/g' predictor.py
import predictor
instances = {'dayofweek' : [6,5],
'hourofday' : [12,11],
'pickuplon' : [-73.99,-73.99],
'pickuplat' : [40.758,40.758],
'dropofflat' : [40.742,40.758],
'dropofflon' : [-73.97,-73.97]}
predictor = predictor.TaxifarePredictor.from_path(MODEL_PATH)
predictor.predict(instances)
%%writefile setup.py
from setuptools import setup

setup(
    name='taxifare_custom_predict_code',
    version='0.1',
    scripts=['predictor.py'],
    install_requires=[
        'google-cloud-bigquery==1.16.0',
    ])
!python setup.py sdist --formats=gztar
!gsutil cp dist/taxifare_custom_predict_code-0.1.tar.gz gs://$BUCKET/taxifare/predict_code/
!gcloud beta ai-platform models create $MODEL_NAME --regions us-central1 --enable-logging --enable-console-logging
#!gcloud ai-platform versions delete $VERSION_NAME --model taxifare --quiet
!gcloud beta ai-platform versions create $VERSION_NAME \
--model $MODEL_NAME \
--origin gs://$BUCKET/taxifare/model \
--service-account $(gcloud projects list --filter="$PROJECT_ID" --format="value(PROJECT_NUMBER)")-compute@developer.gserviceaccount.com \
--runtime-version 1.14 \
--python-version 3.5 \
--package-uris gs://$BUCKET/taxifare/predict_code/taxifare_custom_predict_code-0.1.tar.gz \
--prediction-class predictor.TaxifarePredictor
import googleapiclient.discovery
instances = {'dayofweek' : [6],
'hourofday' : [12],
'pickuplon' : [-73.99],
'pickuplat' : [40.758],
'dropofflat' : [40.742],
'dropofflon' : [-73.97]}
service = googleapiclient.discovery.build('ml', 'v1')
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT_ID, MODEL_NAME, VERSION_NAME)
response = service.projects().predict(
name=name,
body={'instances': instances}
).execute()
if 'error' in response:
    raise RuntimeError(response['error'])
else:
    print(response['predictions'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 2. Implement Predictor Interface
Step3: Test Predictor Class Works Locally
Step4: 3. Package Predictor Class and Dependencies
Step5: 4. Deploy
Step6: 5. Invoke API
|
8,707
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='white', font_scale=1.1, palette='Set2')
from desitarget.mock.mockmaker import QSOMaker, LYAMaker, LRGMaker, ELGMaker
for Maker in (QSOMaker, LYAMaker, LRGMaker, ELGMaker):
    M = Maker()
    data = M.read()
    M.qamock_sky(data)
from desitarget.mock.mockmaker import BGSMaker
M = BGSMaker()
data = M.read(only_coords=True)
M.qamock_sky(data)
from desitarget.mock.mockmaker import MWS_NEARBYMaker, WDMaker
for Maker in (MWS_NEARBYMaker, WDMaker):
    M = Maker()
    data = M.read()
    M.qamock_sky(data, nozhist=True)
from desitarget.mock.mockmaker import SKYMaker
M = SKYMaker()
data = M.read()
M.qamock_sky(data, nozhist=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dark-time targets
Step2: Bright-time extragalactic targets
Step3: Bright-time stellar targets
Step4: Sky targets
|
8,708
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccma', 'canesm5', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
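# --- Illustrative example (an addition, not part of the template): a property
# is completed by passing one of the valid choices above to DOC.set_value.
# The species below are hypothetical placeholders, not a real model description:
# DOC.set_value("Sulphate")
# DOC.set_value("Dust")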
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Aod
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
8,709
|
<ASSISTANT_TASK:>
Python Code:
# Import libraries
import numpy as np
import pandas as pd
from time import time
from sklearn.metrics import f1_score
# Read student data
student_data = pd.read_csv("student-data.csv")
print "Student data read successfully!"
# TODO: Calculate number of students - DONE
n_students = student_data.shape[0]
# TODO: Calculate number of features - DONE
n_features = student_data.shape[1]-1 # not counting passed column
# TODO: Calculate passing students - DONE
n_passed = student_data[student_data['passed'] == 'yes'].shape[0]
# TODO: Calculate failing students - DONE
n_failed = student_data[student_data['passed'] == 'no'].shape[0]
# TODO: Calculate graduation rate - DONE
grad_rate = 100.0 * n_passed / n_students  # float literal avoids Python 2 integer division
# Print the results
print "Total number of students: {}".format(n_students)
print "Number of features: {}".format(n_features)
print "Number of students who passed: {}".format(n_passed)
print "Number of students who failed: {}".format(n_failed)
print "Graduation rate of the class: {:.2f}%".format(grad_rate)
# Extract feature columns
feature_cols = list(student_data.columns[:-1])
# Extract target column 'passed'
target_col = student_data.columns[-1]
# Show the list of columns
print "Feature columns:\n{}".format(feature_cols)
print "\nTarget column: {}".format(target_col)
# Separate the data into feature data and target data (X_all and y_all, respectively)
X_all = student_data[feature_cols]
y_all = student_data[target_col]
# Show the feature information by printing the first five rows
print "\nFeature values:"
print X_all.head()
def preprocess_features(X):
''' Preprocesses the student data and converts non-numeric binary variables into
binary (0/1) variables. Converts categorical variables into dummy variables. '''
# Initialize new output DataFrame
output = pd.DataFrame(index = X.index)
# Investigate each feature column for the data
for col, col_data in X.iteritems():
# If data type is non-numeric, replace all yes/no values with 1/0
if col_data.dtype == object:
col_data = col_data.replace(['yes', 'no'], [1, 0])
# If data type is categorical, convert to dummy variables
if col_data.dtype == object:
# Example: 'school' => 'school_GP' and 'school_MS'
col_data = pd.get_dummies(col_data, prefix = col)
# Collect the revised columns
output = output.join(col_data)
return output
X_all = preprocess_features(X_all)
print "Processed feature columns ({} total features):\n{}".format(len(X_all.columns), list(X_all.columns))
# TODO: Import any additional functionality you may need here
# TODO: Set the number of training points
num_train = 300
# Set the number of testing points
num_test = X_all.shape[0] - num_train
# TODO: Shuffle and split the dataset into the number of training and testing points above
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_all, y_all,test_size=num_test,random_state=1)
# Show the results of the split
print "Training set has {} samples.".format(X_train.shape[0])
print "Testing set has {} samples.".format(X_test.shape[0])
def train_classifier(clf, X_train, y_train):
''' Fits a classifier to the training data. '''
# Start the clock, train the classifier, then stop the clock
start = time()
clf.fit(X_train, y_train)
end = time()
# Print the results
print "Trained model in {:.4f} seconds".format(end - start)
def predict_labels(clf, features, target):
''' Makes predictions using a fit classifier based on F1 score. '''
# Start the clock, make predictions, then stop the clock
start = time()
y_pred = clf.predict(features)
end = time()
# Print and return results
print "Made predictions in {:.4f} seconds.".format(end - start)
return f1_score(target.values, y_pred, pos_label='yes')
def train_predict(clf, X_train, y_train, X_test, y_test):
''' Train and predict using a classifier based on F1 score. '''
# Indicate the classifier and the training set size
print "Training a {} using a training set size of {}. . .".format(clf.__class__.__name__, len(X_train))
# Train the classifier
train_classifier(clf, X_train, y_train)
# Print the results of prediction for both training and testing
print "F1 score for training set: {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "F1 score for test set: {:.4f}.".format(predict_labels(clf, X_test, y_test))
# TODO: Import the three supervised learning models from sklearn - DONE
# from sklearn import model_A
from sklearn.neighbors import KNeighborsClassifier
# from sklearn import model_B
from sklearn import tree
# from sklearn import model_C
from sklearn.naive_bayes import GaussianNB
# additional models
from sklearn import ensemble
# TODO: Initialize the three models
clf_A = KNeighborsClassifier()
clf_B = tree.DecisionTreeClassifier(random_state = 42)
clf_C = GaussianNB()
# additional models
clf_D = ensemble.AdaBoostClassifier(random_state = 42)
clf_E = ensemble.RandomForestClassifier(random_state = 42)
classifier_names = ["KNN", "Decision Tree", "Naive Bayes Classifier", "Ada Boost Classifier", "Random Forest Classifier"]
# TODO: Set up the training set sizes - DONE
X_train_100 = X_train[:100]
y_train_100 = y_train[:100]
X_train_200 = X_train[:200]
y_train_200 = y_train[:200]
X_train_300 = X_train[:300]
y_train_300 = y_train[:300]
# TODO: Execute the 'train_predict' function for each classifier and each training set size - DONE
# train_predict(clf, X_train, y_train, X_test, y_test)
count = 0
for clf in [clf_A, clf_B, clf_C, clf_D, clf_E]:
print classifier_names[count]
count += 1
for n in [100, 200, 300]:
train_predict(clf, X_train[:n], y_train[:n], X_test, y_test)
# TODO: Import 'GridSearchCV' and 'make_scorer' - DONE
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import make_scorer
# TODO: Create the parameters list you wish to tune - DONE. 10% of total data size
parameters = {'n_neighbors': range(1, 31), 'weights': ('uniform', 'distance')}
# TODO: Initialize the classifier - DONE
clf = KNeighborsClassifier()
# TODO: Make an f1 scoring function using 'make_scorer' - DONE
f1_scorer = make_scorer(f1_score,pos_label='yes')
# TODO: Perform grid search on the classifier using the f1_scorer as the scoring method - DONE
grid_obj = GridSearchCV(clf,param_grid = parameters, scoring = f1_scorer)
# TODO: Fit the grid search object to the training data and find the optimal parameters - DONE
grid_obj = grid_obj.fit(X_train,y_train)
# Get the estimator
clf = grid_obj.best_estimator_
# Report the final F1 score for training and testing after parameter tuning
print "Tuned model has a training F1 score of {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "Tuned model has a testing F1 score of {:.4f}.".format(predict_labels(clf, X_test, y_test))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implementation
Step2: Preparing the Data
Step3: Preprocess Feature Columns
Step4: Implementation
Step5: Training and Evaluating Models
Step6: Implementation
Step7: Tabular Results
|
8,710
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
from IPython.html.widgets import interact, interactive, fixed
Image('fermidist.png')
def fermidist(energy, mu, kT):
"""Compute the Fermi distribution at energy, mu and kT."""
e=energy
m=mu
t=kT
f=1/(np.exp((e-m)/t)+1)
return f
assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0),
np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532,
0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))
def plot_fermidist(mu, kT):
plt.plot(fermidist(np.linspace(0,10,11),mu,kT),'k')
plt.xlabel('Energy')
plt.ylabel('F($\epsilon$)')
#plt.tick_params #ran out of time
plot_fermidist(4.0, 1.0)
assert True # leave this for grading the plot_fermidist function
interactive(plot_fermidist,mu=(0.0,5.0,.1),kT=(.1,10.0,.1))
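# Quick sanity check (illustrative addition): for small kT the Fermi
# distribution approaches a step function at mu, so F should be close to 1
# below mu and close to 0 above it.
print(fermidist(np.array([0.5, 1.5]), 1.0, 0.01))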
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exploring the Fermi distribution
Step3: In this equation
Step4: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Step5: Use interact with plot_fermidist to explore the distribution
|
8,711
|
<ASSISTANT_TASK:>
Python Code:
# sphinx_gallery_thumbnail_number = 9
# Authors: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import find_events, fit_dipole
from mne.datasets.brainstorm import bst_phantom_elekta
from mne.io import read_raw_fif
from mayavi import mlab
print(__doc__)
data_path = bst_phantom_elekta.data_path(verbose=True)
raw_fname = op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif')
raw = read_raw_fif(raw_fname)
events = find_events(raw, 'STI201')
raw.plot(events=events)
raw.info['bads'] = ['MEG1933', 'MEG2421']
raw.plot_psd(tmax=30., average=False)
raw.plot(events=events)
tmin, tmax = -0.1, 0.1
bmax = -0.05 # Avoid capture filter ringing into baseline
event_id = list(range(1, 33))
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=(None, bmax),
preload=False)
epochs['1'].average().plot(time_unit='s')
sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=0.08)
mne.viz.plot_alignment(epochs.info, subject='sample', show_axes=True,
bem=sphere, dig=True, surfaces='inner_skull')
# here we can get away with using method='oas' for speed (faster than "shrunk")
# but in general "shrunk" is usually better
cov = mne.compute_covariance(epochs, tmax=bmax)
mne.viz.plot_evoked_white(epochs['1'].average(), cov)
data = []
t_peak = 0.036 # true for Elekta phantom
for ii in event_id:
# Avoid the first and last trials -- can contain dipole-switching artifacts
evoked = epochs[str(ii)][1:-1].average().crop(t_peak, t_peak)
data.append(evoked.data[:, 0])
evoked = mne.EvokedArray(np.array(data).T, evoked.info, tmin=0.)
del epochs
dip, residual = fit_dipole(evoked, cov, sphere, n_jobs=1)
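# Quick quality check (an addition to the tutorial): the goodness of fit,
# i.e. the percentage of variance each fitted dipole explains.
print('Mean goodness of fit: %0.1f%%' % (np.mean(dip.gof),))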
fig, axes = plt.subplots(2, 1)
evoked.plot(axes=axes)
for ax in axes:
ax.texts = []
for line in ax.lines:
line.set_color('#98df81')
residual.plot(axes=axes)
actual_pos, actual_ori = mne.dipole.get_phantom_dipoles()
actual_amp = 100. # nAm
fig, (ax1, ax2, ax3) = plt.subplots(nrows=3, ncols=1, figsize=(6, 7))
diffs = 1000 * np.sqrt(np.sum((dip.pos - actual_pos) ** 2, axis=-1))
print('mean(position error) = %0.1f mm' % (np.mean(diffs),))
ax1.bar(event_id, diffs)
ax1.set_xlabel('Dipole index')
ax1.set_ylabel('Loc. error (mm)')
angles = np.rad2deg(np.arccos(np.abs(np.sum(dip.ori * actual_ori, axis=1))))
print(u'mean(angle error) = %0.1f°' % (np.mean(angles),))
ax2.bar(event_id, angles)
ax2.set_xlabel('Dipole index')
ax2.set_ylabel(u'Angle error (°)')
amps = actual_amp - dip.amplitude / 1e-9
print('mean(abs amplitude error) = %0.1f nAm' % (np.mean(np.abs(amps)),))
ax3.bar(event_id, amps)
ax3.set_xlabel('Dipole index')
ax3.set_ylabel('Amplitude error (nAm)')
fig.tight_layout()
plt.show()
def plot_pos_ori(pos, ori, color=(0., 0., 0.), opacity=1.):
"""Plot dipole positions and orientations in 3D."""
x, y, z = pos.T
u, v, w = ori.T
mlab.points3d(x, y, z, scale_factor=0.005, opacity=opacity, color=color)
q = mlab.quiver3d(x, y, z, u, v, w,
scale_factor=0.03, opacity=opacity,
color=color, mode='arrow')
q.glyph.glyph_source.glyph_source.shaft_radius = 0.02
q.glyph.glyph_source.glyph_source.tip_length = 0.1
q.glyph.glyph_source.glyph_source.tip_radius = 0.05
mne.viz.plot_alignment(evoked.info, bem=sphere, surfaces='inner_skull',
coord_frame='head', meg='helmet', show_axes=True)
# Plot the position and the orientation of the actual dipole
plot_pos_ori(actual_pos, actual_ori, color=(0., 0., 0.), opacity=0.5)
# Plot the position and the orientation of the estimated dipole
plot_pos_ori(dip.pos, dip.ori, color=(0.2, 1., 0.5))
mlab.view(70, 80, distance=0.5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data were collected with an Elekta Neuromag VectorView system at 1000 Hz
Step2: Data channel array consisted of 204 MEG planar gradiometers,
Step3: The data have strong line frequency (60 Hz and harmonics) and cHPI coil
Step4: Our phantom produces sinusoidal bursts at 20 Hz
Step5: Now we epoch our data, average it, and look at the first dipole response.
Step6: Let's use a sphere head geometry model <ch_forward_spherical_model>
Step7: Let's do some dipole fits. We first compute the noise covariance,
Step8: Do a quick visualization of how much variance we explained, putting the
Step9: Now we can compare to the actual locations, taking the difference in mm
Step11: Let's plot the positions and the orientations of the actual and the estimated
|
8,712
|
<ASSISTANT_TASK:>
Python Code:
import sys
from gastrodon import *
from rdflib import *
import pandas as pd
pd.options.display.width=120
pd.options.display.max_colwidth=100
boros=inline(r"""
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix : <http://example.com/> .
:New_York_City
:boro :Manhattan,:Queens,:Brooklyn,:Bronx,:Staten_Island .
""")
list(boros.graph)
len(boros.graph)
boros.select(
"""SELECT ?boro { :New_York_City :boro ?boro}"""
)
boros.select(
"""SELECT ?boro
{ :New_York_City :boro ?boro}
ORDER BY ?boro"""
)
boros=inline(r"""
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix : <http://example.com/> .
:New_York_City
:boro :Manhattan,:Manhattan,:Manhattan .
""")
len(boros.graph)
boros.select(
"""SELECT ?boro { :New_York_City :boro ?boro}"""
)
sequence=inline(r"""
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
[] a rdf:Seq ;
rdf:_1 "Beginning" ;
rdf:_2 "Middle" ;
rdf:_3 "End" .
""").graph
len(sequence)
RDF.Seq,RDF.type
lhs=one(sequence[:RDF.type:RDF.Seq])
lhs
endpoint=LocalEndpoint(sequence)
endpoint.decollect(lhs)
len(endpoint.decollect(lhs))
endpoint.select('''
SELECT (COUNT(*) AS ?cnt) {
?s ?p ?o .
}
''',bindings=dict(s=lhs))
endpoint.select('''
SELECT (COUNT(*) AS ?cnt) {
?s ?p ?o .
MINUS {?s a ?o}
}
''')
endpoint.select('''
SELECT (COUNT(*) AS ?cnt) {
?s ?p ?o .
FILTER(STRSTARTS(STR(?p),"http://www.w3.org/1999/02/22-rdf-syntax-ns#_"))
}
''')
duo=inline(r"""
@prefix : <http://example.com/> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
:simple a rdf:Seq ;
rdf:_1 "uno" ;
rdf:_2 "dos" ;
rdf:_3 3 ;
rdf:_4 <http://dbpedia.org/resource/4> .
:complex a rdf:Seq ;
rdf:_1 [
a rdf:Seq ;
rdf:_1 33 ;
rdf:_2 91 ;
rdf:_3 15
] ;
rdf:_2 [
a rdf:Seq ;
rdf:_1 541 ;
rdf:_2 3
].
""")
duo.select(
"""SELECT ?s (COUNT(*) AS ?cnt) {
?s ?p ?o .
FILTER(STRSTARTS(STR(?p),"http://www.w3.org/1999/02/22-rdf-syntax-ns#_"))
} GROUP BY ?s"""
)
duo.select(
"""SELECT ?s {
?s ?p 3 .
FILTER(STRSTARTS(STR(?p),"http://www.w3.org/1999/02/22-rdf-syntax-ns#_"))
}"""
)
duo.select(
"""SELECT ?member {
:complex ?p1 ?innerList .
?innerList ?p2 ?member .
FILTER(STRSTARTS(STR(?p1),"http://www.w3.org/1999/02/22-rdf-syntax-ns#_"))
FILTER(STRSTARTS(STR(?p2),"http://www.w3.org/1999/02/22-rdf-syntax-ns#_"))
}"""
)
duo.decollect(URIRef("http://example.com/simple"))
sequence_11=inline(r"""
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix : <http://example.com/> .
:s11 a rdf:Seq ;
rdf:_1 "one" ;
rdf:_2 "two" ;
rdf:_3 "three" ;
rdf:_4 "four" ;
rdf:_5 "five" ;
rdf:_6 "six" ;
rdf:_7 "seven" ;
rdf:_8 "eight" ;
rdf:_9 "nine" ;
rdf:_10 "ten" ;
rdf:_11 "eleven" .
""")
goes_to_eleven=sequence_11.decollect(URIRef("http://example.com/s11"))
goes_to_eleven
assert goes_to_eleven[0]=="one"
assert goes_to_eleven[1]=="two"
assert goes_to_eleven[10]=="eleven"
assert len(goes_to_eleven) == 11
sequence_11.select(
"""SELECT ?member {
:s11 ?index ?member
FILTER(STRSTARTS(STR(?index),"http://www.w3.org/1999/02/22-rdf-syntax-ns#_"))
} ORDER BY(?index)"""
)
sequence_11.select(
"""SELECT ?member {
:s11 ?index ?member
FILTER(STRSTARTS(STR(?index),"http://www.w3.org/1999/02/22-rdf-syntax-ns#_"))
BIND(xsd:integer(SUBSTR(STR(?index),45)) AS ?number)
} ORDER BY(?number)"""
)
laurie=inline(r"""
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix : <http://example.com/> .
:from_the_air a rdf:Bag ;
rdf:_1 "this" ;
rdf:_2 "is" ;
rdf:_3 "the" ;
rdf:_4 "time" ;
rdf:_5 "and" ;
rdf:_6 "this" ;
rdf:_7 "is" ;
rdf:_8 "the" ;
rdf:_9 "record" ;
rdf:_10 "of" ;
rdf:_11 "the" ;
rdf:_12 "time" .
""")
laurie.decollect(URIRef("http://example.com/from_the_air"))
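# Since decollect turns an rdf:Bag into a Python Counter (as the accompanying
# description notes), the usual Counter methods apply -- an illustrative check:
bag = laurie.decollect(URIRef("http://example.com/from_the_air"))
print(bag.most_common(3))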
x=["first","second","third"]
x[0]
x=pd.DataFrame([25,"or",6,2,4])
x
x.at[0,0]
member(0)
idx=2 # the third word!
sequence_11.select(
"""SELECT ?word { :s11 ?index ?word . }"""
,bindings=dict(index=member(idx)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Representing Sets
Step3: Note that the comma is a shorthand notation that lets me write a number of statements that share the same predicate and object. A Graph implements __iter__, so I can get all of the facts in it like so
Step4: Just as there are five boroughs, there are five facts.
Step6: Now I make a LocalEndpoint which will re|nder SPARQL query results as pandas DataFrame(s)
Step8: Note that the order that the facts come back in the SPARQL query is random, because RDF doesn't remember the order in which statements were made. This is the right behavior in this case, because the boroughs do not come in any particular order, although we can order them alphabetically, by population, or some other metric, so long as we have the data in the graph and write the right SPARQL query
Step11: Another characteristic of a set is that a given topic can only be listed once. For instance, if we repeat the same fact over and over again, RDF will only capture it once
Step13: A simple sequence example
Step14: rdflib comes with definitions for classes and predicates in common namespaces such as <http
Step15: I used a blank node to 'name' the list, so I need a reference to the list to work with.
Step16: Sometimes you might want to turn an RDF Container into a Python list so you can work on it with Python. You can do this with the decollect function.
Step17: Once you've converted a list to Python, you can take the length with the len function
Step18: What if we want to write a SPARQL query to get the length? There isn't a SPARQL function to get the length of a list, but we can write our own. One thing I might try is counting the statements for which the container is the subject.
Step19: Close, but no cigar. I got four instead of three because it counted the statement that
Step20: The above query works in this case, and works no matter how many types are associated with the container. Nothing stops people from adding more statements where the container is the subject, and in that case we'd get a count that's too high. The following query is better, because it selects exactly for predicates of the form rdf
Step22: A more complex case
Step24: RDF and SPARQL let you look at lists in a different way from most languages. For instance, the following query finds all of the lists in the model and counts how many members each have. Two of the lists have URI names, the other two are the containers inside
Step26: Another kind of query you can write looks for all the containers that contain a certain value, for instance, the number 3.
Step28: It starts getting ugly though, if you want to write a query that involves more than one list, say, lists that are nested. For instance, to list the items in
Step29: Decollecting a single list from the model is simple; the decollect method automatically converts RDF terms into native Python data types (strings and integers)
Step31: Counting to Eleven
Step33: Just to see how you could get it wrong, the following query gives the wrong answer because RDF resources sort in alphabetical order
Step35: Don't be that guy!
Step37: Bags
Step38: Python has a built-in collection type called Counter which intended to represent bags, so decollect converts a bag to a Counter, giving us a nice example of a "bag of words".
Step39: Array Index Offsets
Step40: nothing prevents a library in a language like Python from indexing lists any way it wants, but Pandas frames behave like Python lists, both in how they are displayed
Step41: in how in they are accessed
Step42: It's an obvious and very possible mistake that you could want to access (say) the third item of a list, and get confused as to it being a Python list where one would write
Step44: Which can be used as follows
|
8,713
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# Required imports
from wikitools import wiki
from wikitools import category
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
import gensim
import numpy as np
import lda
import lda.datasets
import matplotlib.pyplot as plt
from test_helper import Test
site = wiki.Wiki("https://en.wikipedia.org/w/api.php")
# Select a category with a reasonable number of articles (>100)
cat = "Economics"
# cat = "Pseudoscience"
print cat
# Loading category data. This may take a while
print "Loading category data. This may take a while..."
cat_data = category.Category(site, cat)
corpus_titles = []
corpus_text = []
for n, page in enumerate(cat_data.getAllMembersGen()):
print "\r Loading article {0}".format(n + 1),
corpus_titles.append(page.title)
corpus_text.append(page.getWikiText())
n_art = len(corpus_titles)
print "\nLoaded " + str(n_art) + " articles from category " + cat
# n = 5
# print corpus_titles[n]
# print corpus_text[n]
# You can comment this if the package is already available.
# Select option "d) Download", and identifier "punkt"
# nltk.download()
corpus_tokens = []
for n, art in enumerate(corpus_text):
print "\rTokenizing article {0} out of {1}".format(n + 1, n_art),
# This is to make sure that all characters have the appropriate encoding.
art = art.decode('utf-8')
# Tokenize each text entry.
# scode: tokens = <FILL IN>
# Add the new token list as a new element to corpus_tokens (that will be a list of lists)
# scode: <FILL IN>
print "\n The corpus has been tokenized. Let's check some portion of the first article:"
print corpus_tokens[0][0:30]
Test.assertEquals(len(corpus_tokens), n_art, "The number of articles has changed unexpectedly")
Test.assertTrue(len(corpus_tokens) >= 100,
"Your corpus_tokens has less than 100 articles. Consider using a larger dataset")
# Select stemmer.
stemmer = nltk.stem.SnowballStemmer('english')
corpus_filtered = []
for n, token_list in enumerate(corpus_tokens):
print "\rFiltering article {0} out of {1}".format(n + 1, n_art),
    # Convert all tokens in token_list to lowercase and remove non-alphanumeric tokens.
    # Store the result in a new token list, filtered_tokens.
# scode: filtered_tokens = <FILL IN>
# Add art to corpus_filtered
# scode: <FILL IN>
print "\nLet's check the first tokens from document 0 after stemming:"
print corpus_filtered[0][0:30]
Test.assertTrue(all([c==c.lower() for c in corpus_filtered[23]]), 'Capital letters have not been removed')
Test.assertTrue(all([c.isalnum() for c in corpus_filtered[13]]), 'Non alphanumeric characters have not been removed')
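# One possible completion of the filtering step above (illustrative): keep
# alphanumeric tokens only, lowercased.
# filtered_tokens = [token.lower() for token in token_list if token.isalnum()]
# corpus_filtered.append(filtered_tokens)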
# Select stemmer.
stemmer = nltk.stem.SnowballStemmer('english')
corpus_stemmed = []
for n, token_list in enumerate(corpus_filtered):
print "\rStemming article {0} out of {1}".format(n + 1, n_art),
# Apply stemming to all tokens in token_list and save them in stemmed_tokens
# scode: stemmed_tokens = <FILL IN>
# Add stemmed_tokens to the stemmed corpus
# scode: <FILL IN>
print "\nLet's check the first tokens from document 0 after stemming:"
print corpus_stemmed[0][0:30]
Test.assertTrue((len([c for c in corpus_stemmed[0] if c!=stemmer.stem(c)]) < 0.1*len(corpus_stemmed[0])),
'It seems that stemming has not been applied properly')
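# One possible completion of the stemming step above (illustrative):
# stemmed_tokens = [stemmer.stem(token) for token in token_list]
# corpus_stemmed.append(stemmed_tokens)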
# You can comment this if the package is already available.
# Select option "d) Download", and identifier "wordnet"
# nltk.download()
wnl = WordNetLemmatizer()
# Select stemmer.
corpus_lemmat = []
for n, token_list in enumerate(corpus_filtered):
print "\rLemmatizing article {0} out of {1}".format(n + 1, n_art),
# scode: lemmat_tokens = <FILL IN>
# Add art to the stemmed corpus
# scode: <FILL IN>
print "\nLet's check the first tokens from document 0 after stemming:"
print corpus_lemmat[0][0:30]
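# One possible completion of the lemmatization step above (illustrative):
# lemmat_tokens = [wnl.lemmatize(token) for token in token_list]
# corpus_lemmat.append(lemmat_tokens)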
# You can comment this if the package is already available.
# Select option "d) Download", and identifier "stopwords"
# nltk.download()
corpus_clean = []
stopwords_en = stopwords.words('english')
n = 0
for token_list in corpus_stemmed:
n += 1
print "\rRemoving stopwords from article {0} out of {1}".format(n, n_art),
# Remove all tokens in the stopwords list and append the result to corpus_clean
# scode: clean_tokens = <FILL IN>
# scode: <FILL IN>
print "\n Let's check tokens after cleaning:"
print corpus_clean[0][0:30]
Test.assertTrue(len(corpus_clean) == n_art, 'List corpus_clean does not contain the expected number of articles')
Test.assertTrue(len([c for c in corpus_clean[0] if c in stopwords_en])==0, 'Stopwords have not been removed')
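# One possible completion of the stopword-removal step above (illustrative):
# clean_tokens = [token for token in token_list if token not in stopwords_en]
# corpus_clean.append(clean_tokens)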
# Create dictionary of tokens
D = gensim.corpora.Dictionary(corpus_clean)
n_tokens = len(D)
print "The dictionary contains {0} tokens".format(n_tokens)
print "First tokens in the dictionary: "
for n in range(10):
print str(n) + ": " + D[n]
# Transform token lists into sparse vectors on the D-space
# scode: corpus_bow = <FILL IN>
Test.assertTrue(len(corpus_bow)==n_art, 'corpus_bow has not the appropriate size')
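# One possible completion (illustrative), using the doc2bow method described
# in the accompanying text:
# corpus_bow = [D.doc2bow(doc) for doc in corpus_clean]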
print "Original article (after cleaning): "
print corpus_clean[0][0:30]
print "Sparse vector representation (first 30 components):"
print corpus_bow[0][0:30]
print "The first component, {0} from document 0, states that token 0 ({1}) appears {2} times".format(
corpus_bow[0][0], D[0], corpus_bow[0][0][1])
print "{0} tokens".format(len(D))
print "{0} Wikipedia articles".format(len(corpus_bow))
# SORTED TOKEN FREQUENCIES (I):
# Create a "flat" corpus with all tuples in a single list
corpus_bow_flat = [item for sublist in corpus_bow for item in sublist]
# Initialize a numpy array that we will use to count tokens.
# token_count[n] should store the number of occurrences of the n-th token, D[n]
token_count = np.zeros(n_tokens)
# Count the number of occurrences of each token.
for x in corpus_bow_flat:
# Update the proper element in token_count
# scode: <FILL IN>
# Sort by decreasing number of occurences
ids_sorted = np.argsort(- token_count)
tf_sorted = token_count[ids_sorted]
print D[ids_sorted[0]]
print "{0} times in the whole corpus".format(tf_sorted[0])
# SORTED TOKEN FREQUENCIES (II):
plt.rcdefaults()
# Example data
n_bins = 25
hot_tokens = [D[i] for i in ids_sorted[n_bins-1::-1]]
y_pos = np.arange(len(hot_tokens))
z = tf_sorted[n_bins-1::-1]/n_art
plt.barh(y_pos, z, align='center', alpha=0.4)
plt.yticks(y_pos, hot_tokens)
plt.xlabel('Average number of occurrences per article')
plt.title('Token distribution')
plt.show()
# SORTED TOKEN FREQUENCIES:
# Example data
plt.semilogy(tf_sorted)
plt.xlabel('Average number of occurrences per article')
plt.title('Token distribution')
plt.show()
# scode: cold_tokens = <FILL IN>
print "There are {0} cold tokens, which represent {1}% of the total number of tokens in the dictionary".format(
len(cold_tokens), float(len(cold_tokens))/n_tokens*100)
# scode: <WRITE YOUR CODE HERE>
# scode: <WRITE YOUR CODE HERE>
# scode: <WRITE YOUR CODE HERE>
# scode: <WRITE YOUR CODE HERE>
# scode: <WRITE YOUR CODE HERE>
# Check the code below to see how ngrams works, and adapt it to solve the exercise.
# from nltk.util import ngrams
# sentence = 'this is a foo bar sentences and i want to ngramize it'
# sixgrams = ngrams(sentence.split(), 2)
# for grams in sixgrams:
# print grams
import pickle
data = {}
data['D'] = D
data['corpus_bow'] = corpus_bow
pickle.dump(data, open("wikiresults.p", "wb"))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Corpus acquisition.
Step2: You can try with any other categories. Take into account that the behavior of topic modelling algorithms may depend on the number of documents available for the analysis. Select a category with at least 100 articles. You can browse the Wikipedia category tree here, https
Step3: Now, we have stored the whole text collection in two lists
Step4: 2. Corpus Processing
Step5: Task
Step6: 2.2. Homogeneization
Step7: 2.2.2. Stemming vs Lemmatization
Step8: Alternatively, we can apply lemmatization. For English texts, we can use the lemmatizer from NLTK, which is based on WordNet. If you have not used WordNet before, you will likely need to download it from nltk
Step9: Task
Step10: One of the advantages of the lemmatizer method is that the result of lemmatization is still a true word, which is more advisable for the presentation of text processing results.
Step11: Task
Step12: 2.4. Vectorization
Step13: In the second step, let us create a numerical version of our corpus using the doc2bow method. In general, D.doc2bow(token_list) transforms any list of tokens into a list of tuples (token_id, n), one for each token in token_list, where token_id is the token identifier (according to dictionary D) and n is the number of occurrences of such token in token_list.
Step14: At this point, it is good to make sure to understand what has happened. In corpus_clean we had a list of token lists. With it, we have constructed a Dictionary, D, which assign an integer identifier to each token in the corpus.
Step15: Note that we can interpret each element of corpus_bow as a sparse_vector. For example, a list of tuples
Step16: and a bow representation of a corpus with
Step17: Before starting with the semantic analyisis, it is interesting to observe the token distribution for the given corpus.
Step18: ids_sorted is a list of all token ids, sorted by decreasing number of occurrences in the whole corpus. For instance, the most frequent term is
Step19: which appears
Step20: In the following we plot the most frequent terms in the corpus.
Step21: Exercise
Step22: Exercise
Step23: Exercise
Step24: Exercise (All in one)
Step25: Exercise (Visualizing categories)
Step26: Exercise (bigrams)
Step27: 2.4. Saving results
|
8,714
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccma', 'sandbox-1', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
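# Example (illustrative only; boolean properties take an unquoted
# True or False):
#     DOC.set_value(False)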
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Reprenstation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
8,715
|
<ASSISTANT_TASK:>
Python Code:
# Shell command; the leading "!" is needed to run it from a notebook cell
!cat ./poi_names.txt

# Imports assumed by the rest of this notebook: a sketch inferred from the
# APIs used below (Python 2 idioms and the pre-0.18 scikit-learn
# cross_validation / grid_search modules)
import pickle
import numpy as np
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.decomposition import PCA
from sklearn import preprocessing
from sklearn.cross_validation import StratifiedShuffleSplit
from sklearn.grid_search import GridSearchCV
import tester
from tester import dump_classifier_and_data

# Load the project dataset; open in binary mode for pickle
enron_data = pickle.load(open("./final_project_dataset.pkl", "rb"))
enron_data.iteritems().next()  # peek at one (name, feature-dict) record (Python 2 iterator idiom)
# Replace the string placeholder "NaN" with a real numpy NaN
for columns in enron_data.itervalues():
    for k, v in columns.iteritems():
        if type(v) is str and v.lower() == "nan":
            columns[k] = np.nan
enron_df = pd.DataFrame.from_dict(enron_data, orient="index")
enron_df
# Omit the TOTAL index
enron_df.drop('TOTAL', inplace=True)
enron_df.loc[:, ['salary',
'deferral_payments',
'total_payments',
'loan_advances',
'bonus',
'restricted_stock_deferred',
'deferred_income',]].describe()
enron_df.loc[:, ['total_stock_value',
'expenses',
'exercised_stock_options',
'other',
'long_term_incentive',
'restricted_stock',
'director_fees']].describe()
enron_df.loc[:, ['to_messages',
'email_address',
'from_poi_to_this_person',
'from_messages',
'from_this_person_to_poi',
'shared_receipt_with_poi']].describe()
enron_poi = enron_df[enron_df['poi']==True]
print("Number of POI's: " + str(len(enron_poi)))
enron_poi
enron_df.isnull().sum()
sum(enron_df.isnull().sum())
enron_df.fillna(0, inplace=True)
# Drop email_address column
enron_df.drop('email_address', axis=1, inplace=True)
enron_df.drop("THE TRAVEL AGENCY IN THE PARK", inplace=True)
enron_df.drop("LOCKHART EUGENE E", inplace=True)
# Engineered ratio features. Note that x/0 yields inf (not NaN) in pandas,
# which fillna alone would miss, so map inf to NaN before filling.
enron_df['from_poi_ratio'] = enron_df['from_poi_to_this_person'] / enron_df['from_messages']
enron_df['to_poi_ratio'] = enron_df['from_this_person_to_poi'] / enron_df['to_messages']
enron_df['bonus_ratio'] = enron_df['bonus'] / enron_df['salary']
enron_df.replace([np.inf, -np.inf], np.nan, inplace=True)
enron_df[['poi', 'bonus_ratio']]
enron_df.fillna(0, inplace=True)
# Separate labels and features
enron_df_labels = enron_df['poi']
enron_df_features = enron_df[enron_df.columns.difference(['poi'])]
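# Baseline pipeline: univariate feature selection (SelectKBest) feeding a
# Gaussian Naive Bayes classifier; k is tuned by grid search over 100
# stratified shuffle splits, scored on F1.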
pipeline = Pipeline([
('kbest', SelectKBest()),
('gnb', GaussianNB())])
folds = 100
cv = StratifiedShuffleSplit(enron_df_labels, n_iter= folds, random_state = 42, test_size=0.20)
parameters = {"kbest__k": [1, 2, 3, 5, 8, 13, 19], "kbest__score_func": [f_classif]}
clf = GridSearchCV(pipeline, param_grid=parameters, cv=cv, scoring='f1')
clf.fit(enron_df_features, enron_df_labels)
kbest = clf.best_estimator_.steps[0][1]
kbest.get_support()
features = sorted(zip(enron_df_features.columns, kbest.scores_, kbest.get_support()), key=lambda x: x[1])
my_list = [x[0] for x in features if x[2] == True]
my_list = ['poi'] + my_list
my_list
data = enron_df[my_list].transpose().to_dict()
dump_classifier_and_data(GaussianNB(), data, my_list)
tester.main()
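# Variant: standardise, then reduce dimensionality with PCA instead of
# SelectKBest, tuning the number of retained components for Naive Bayes.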
pipeline = Pipeline([
("scale", preprocessing.StandardScaler()),
('pca', PCA()),
('gnb', GaussianNB())])
folds = 100
cv = StratifiedShuffleSplit(enron_df_labels, n_iter= folds, random_state = 42, test_size=0.20)
parameters = {
"pca__n_components": [2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14, 15, 16, 17, 18, 19],
}
clf = GridSearchCV(pipeline, param_grid=parameters, cv=cv, scoring='f1')
clf.fit(enron_df_features, enron_df_labels)
pca = clf.best_estimator_.steps[1][1]
pca.n_components
pca_nb = Pipeline([
("scale", preprocessing.StandardScaler()),
('pca', PCA(n_components=pca.n_components)),
('gnb', GaussianNB())])
features_list = list(enron_df.columns)
features_list.remove('poi')
features_list = ['poi'] + features_list
dump_classifier_and_data(pca_nb, enron_df.transpose().to_dict(), features_list)
tester.main()
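# Variant: SelectKBest feeding a decision tree, jointly tuning k, the
# split criterion, and max_features.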
pipeline = Pipeline([
('kbest', SelectKBest()),
('dt', DecisionTreeClassifier())])
folds = 100
cv = StratifiedShuffleSplit(enron_df_labels, n_iter= folds, random_state = 42, test_size=0.20)
parameters = {"kbest__k": [1, 2, 3, 5, 8, 13, 19], 'dt__max_features': [None, 'auto', 'log2'],
'dt__criterion': ['gini', 'entropy']}
clf = GridSearchCV(pipeline, param_grid=parameters, cv=cv, scoring='f1')
clf.fit(enron_df_features, enron_df_labels)
kbest = clf.best_estimator_.steps[0][1]
kbest.get_support()
features = sorted(zip(enron_df_features.columns, kbest.scores_, kbest.get_support()), key=lambda x: x[1])
my_list = [x[0] for x in features if x[2] == True]
my_list = ['poi'] + my_list
my_list
clf.best_estimator_.steps[1][1]
data = enron_df[my_list].transpose().to_dict()
dump_classifier_and_data(clf.best_estimator_.steps[1][1], data, my_list)
tester.main()
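# Variant: a decision tree over all features, letting max_features act
# as the implicit feature selector.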
pipeline = Pipeline([
('dt', DecisionTreeClassifier())])
folds = 100
cv = StratifiedShuffleSplit(enron_df_labels, n_iter= folds, random_state = 42, test_size=0.20)
parameters = {'dt__max_features': [1, 2, 3, 5, 8, 13, 19],
'dt__criterion': ['gini', 'entropy']}
clf = GridSearchCV(pipeline, param_grid=parameters, cv=cv, scoring='f1')
clf.fit(enron_df_features, enron_df_labels)
data = enron_df.transpose().to_dict()
dump_classifier_and_data(clf.best_estimator_.steps[0][1], data, features_list)
tester.main()
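# Variant: standardisation plus PCA in front of a decision tree.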
pipeline = Pipeline([
("scale", preprocessing.StandardScaler()),
('pca', PCA()),
('dt', DecisionTreeClassifier())])
folds = 100
cv = StratifiedShuffleSplit(enron_df_labels, n_iter= folds, random_state = 42, test_size=0.20)
parameters = {"pca__n_components": [2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14, 15, 16, 17, 18, 19],
'dt__max_features': [None, 'auto', 'log2'],
'dt__criterion': ['gini', 'entropy']}
clf = GridSearchCV(pipeline, param_grid=parameters, cv=cv, scoring='f1')
clf.fit(enron_df_features, enron_df_labels)
pca = clf.best_estimator_.steps[1][1]
pca.n_components
pca_dt = Pipeline([
("scale", preprocessing.StandardScaler()),
('pca', PCA(n_components=pca.n_components)),
('dt', clf.best_estimator_.steps[2][1])])
dump_classifier_and_data(pca_dt, enron_df.transpose().to_dict(), features_list)
tester.main()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This file contains a list of 35 people who were persons of interest in the Enron scandal. A POI is defined as someone who was indicted, settled without admitting guilt, or testified in exchange for prosecution immunity.
Step2: Features in the dataset
Step3: There is a row with the name, "TOTAL". This row should be removed.
Step4: Financial Features
Step5: Email Features
Step6: Persons of Interest
Step7: Number of NaN's
Step8: Data Exploration Findings
Step9: Outlier Investigation
Step10: Indexes Removed
Step11: Do POI's write more emails to other POI's compared to non POI's?
Step12: Do POI's have a bigger bonus to salary ratio?
Step13: For NaN values, the labels are more POI than not so these values will be filled with 0, since there seems to be a weak correlation between POI's and a large bonus_ratio.
Step14: Feature Engineering Conclusion
Step15: Baseline Classifier
Step16: Using PCA instead of selectKBest
Step17: PCA in our case performs poorly when compared to selectKBest. This indicates that variance is needed in the dataset.
Step18: Without SelectKBest Feature Selection
Step19: With PCA
|
8,716
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'niwa', 'sandbox-1', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
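# Example (hypothetical name and address, for illustration only):
#     DOC.set_author("Jane Doe", "jane.doe@example.org")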
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
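# Example (hypothetical name and address, for illustration only):
#     DOC.set_contributor("John Roe", "john.roe@example.org")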
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
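# Example (illustrative only; repeat DOC.set_value once per exchanged
# quantity, as with other multi-valued properties):
#     DOC.set_value("water")
#     DOC.set_value("energy")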
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
8,717
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from preamble import *
plt.rcParams['savefig.dpi'] = 100 # This controls the size of your figures
# Comment out and restart notebook if you only want the last output of each cell.
InteractiveShell.ast_node_interactivity = "all"
# This is a temporary read-only OpenML key. Replace with your own key later.
oml.config.apikey = '11e82c8d91c5abece86f424369c71590'
X, y = make_blobs(centers=2, n_samples=1000, random_state=0)
robot_data = oml.datasets.get_dataset(1497) # Download Robot data
# Get the predictors X and the labels y
X, y = robot_data.get_data(target=robot_data.default_target_attribute);
ram_prices = pd.read_csv('data/ram_price.csv')
plt.semilogy(ram_prices.date, ram_prices.price)
plt.xlabel("Year")
plt.ylabel("Price in $/Mbyte");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Kernel selection (4 points (1+2+1))
Step2: Robots and SVMs (4 points (2+1+1))
Step3: A benchmark study (3 points (2+1))
|
8,718
|
<ASSISTANT_TASK:>
Python Code:
import auto_martini as am
import numpy as np
from rdkit import Chem
from rdkit.Chem.Draw import IPythonConsole
from IPython.display import Image
import rdkit
from rdkit.Chem import Draw
from rdkit.Chem import AllChem
from rdkit.Chem import rdDepictor
from rdkit.Chem.Draw import rdMolDraw2D
print(rdkit.__version__)
smiles = "N=Cc1ccccc1"
mol = Chem.MolFromSmiles(smiles)
Chem.AddHs(mol)  # note: AddHs returns a *new* molecule; this call only displays it, mol itself is unchanged
AllChem.EmbedMolecule(mol)
mol
# Load the molecule in auto_martini and coarse-grain it
mol_am, _ = am.topology.gen_molecule_smi(smiles)
cg = am.solver.Cg_molecule(mol_am, "PHM")
print("list_heavyatom_names:",cg.list_heavyatom_names)
print("cg_bead_names: ",cg.cg_bead_names)
print("atom_partitioning: ",cg.atom_partitioning)
colors = [(0.121, 0.466, 0.705), (1.0, 0.498, 0.0549), (0.172, 0.627, 0.172)]
hit_ats = list(cg.atom_partitioning.keys())
atom_cols = {}
for i, at in enumerate(hit_ats):
    atom_cols[at] = colors[cg.atom_partitioning[i] % len(colors)]  # modulo by len(colors) so the index can never overrun the palette
hit_bonds = []
bond_cols = {}
for i, at in enumerate(hit_ats):
for j, att in enumerate(hit_ats):
if i > j and atom_cols[i] == atom_cols[j] and mol.GetBondBetweenAtoms(i,j):
b_idx = mol.GetBondBetweenAtoms(i,j).GetIdx()
hit_bonds.append(b_idx)
            bond_cols[b_idx] = colors[cg.atom_partitioning[i] % len(colors)]
d = rdMolDraw2D.MolDraw2DCairo(400,400)
rdMolDraw2D.PrepareAndDrawMolecule(d, mol,
highlightAtoms=hit_ats,
highlightAtomColors=atom_cols,
highlightBonds=hit_bonds,
highlightBondColors=bond_cols)
d.DrawMolecule(mol)
d.FinishDrawing()
d.WriteDrawingText('phm_highlight.png')
Image('phm_highlight.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Parametrization (recap of Tutorial 1)
Step2: Highlighting atoms and CG beads
Step3: We'll want to color atoms according to beads, so let's define a color palette
Step4: rdkit offers convenient rendering functions to highlight particular atoms and bonds. We will color heavy atoms involved in a particular CG bead. First let's focus on atoms
Step5: Same for bonds. rdkit here comes to the rescue, because it keeps track of which heavy atoms are connected by bonds.
Step6: We now have all the necessary ingredients to render a molecule with the appropriate atom and bond highlighting. Here we use the rdMolDraw2D from rdkit. We'll save this to a file, and render the file in the notebook
|
8,719
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import glob
import skbio
import re
mer = 6
path_glob = '/Users/luke/singlecell/jellyfish/*_%smer.fa' % mer
df = pd.DataFrame(index=[x.split('/')[-1] for x in glob.glob(path_glob)])
for path in glob.glob(path_glob):
fasta = skbio.io.read(path, format='fasta')
for x in fasta:
kmer = str(x)
count = x.metadata['id']
df.loc[path.split('/')[-1], kmer] = count
df.fillna(0, inplace=True)
df.index = [re.sub('_%smer.fa' % mer, '', x) for x in df.index]
df_genome_metadata = pd.read_csv('/Users/luke/singlecell/notebooks/genome_metadata.tsv', sep='\t', index_col=0)
code_pro = list(df_genome_metadata['jellyfish'][(df_genome_metadata['genus'] == 'Prochlorococcus') & df_genome_metadata['jellyfish'].notnull()])
df_pro = df.loc[code_pro]
df_pro.to_csv('/Users/luke/singlecell/notebooks/jellyfish_proch_%smer.csv' % mer)
code_pel = list(df_genome_metadata['jellyfish'][(df_genome_metadata['genus'] == 'Pelagibacter') & df_genome_metadata['jellyfish'].notnull()])
df_pel = df.loc[code_pel]
df_pel.to_csv('/Users/luke/singlecell/notebooks/jellyfish_pelag_%smer.csv' % mer)
code_combined = list(df_genome_metadata['jellyfish'][((df_genome_metadata['genus'] == 'Pelagibacter') | (df_genome_metadata['genus'] == 'Prochlorococcus')) & df_genome_metadata['jellyfish'].notnull()])
df_combined = df.loc[code_combined]
df_combined.to_csv('/Users/luke/singlecell/notebooks/jellyfish_combined_%smer.csv' % mer)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dataframe of merged jellyfish results
Step2: Genome metadata (want to know if it's Prochlorococcus or Pelagibacter)
Step3: Write combined and individual files for Prochlorococcus and Pelagibacter
|
8,720
|
<ASSISTANT_TASK:>
Python Code:
# I sometimes need to choose PyTorch...
import inspect
import sys
#sys.path.insert(0, '/home/tv/pytorch/pytorch/build/lib.linux-x86_64-3.8//')
import torch
import torch.utils.dlpack
# import TVM
import sys
import os
tvm_root = '/home/tv/rocm/tvm/tvm/'
tvm_paths = [os.path.join(tvm_root, p) for p in ['python', 'topi/python', 'nnvm/python']]
os.environ['PYTHONPATH'] = ':'.join([os.environ.get('PYTHONPATH', '')] + tvm_paths)
for p in tvm_paths:
sys.path.insert(0, p)
import tvm
import tvm.relay
torch.cuda.get_device_name()
import transformers
from transformers import BertModel, BertTokenizer, BertConfig
import numpy
import torch
enc = BertTokenizer.from_pretrained("bert-base-uncased")
# Tokenizing input text
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = enc.tokenize(text)
# Masking one of the input tokens
masked_index = 8
tokenized_text[masked_index] = '[MASK]'
indexed_tokens = enc.convert_tokens_to_ids(tokenized_text)
segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]
# Creating a dummy input
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
dummy_input = [tokens_tensor, segments_tensors]
# If you are instantiating the model with `from_pretrained` you can also easily set the TorchScript flag
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True)
model.eval()
for p in model.parameters():
p.requires_grad_(False)
transformers.__version__
# Creating the trace
traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
traced_model.eval()
for p in traced_model.parameters():
p.requires_grad_(False)
model.cuda()
tt_c = tokens_tensor.cuda()
st_c = segments_tensors.cuda()
res_pt = model(tt_c, st_c)
torch.cuda.synchronize()
def y():
for i in range(100):
model(tt_c, st_c)
torch.cuda.synchronize()
y()
%timeit y()
shape_list = [(i.debugName().split('.')[0], i.type().sizes()) for i in list(traced_model.graph.inputs())[1:]]
shape_list
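# (For our 14-token example input this should look something like
# [('input_ids', [1, 14]), ('attention_mask', [1, 14])].)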
mod_bert, params_bert = tvm.relay.frontend.pytorch.from_pytorch(traced_model,
shape_list, default_dtype="float32")
target = tvm.target.rocm(model='gfx906')
ctx = tvm.context(target.id.name)
target_host = 'llvm'
tt_a = tvm.nd.array(tokens_tensor.numpy(), ctx)
st_a = tvm.nd.array(segments_tensors.numpy(), ctx)
tvm.relay.backend.compile_engine.get().clear() # just to be sure, see https://github.com/apache/incubator-tvm/pull/5724
with tvm.transform.PassContext(opt_level=3):
graph, lib, params = tvm.relay.build(mod_bert,
target=target,
target_host=target_host,
params=params_bert)
module = tvm.contrib.graph_runtime.create(graph, lib, ctx)
module.set_input("input_ids", tt_a)
module.set_input("attention_mask", st_a)
module.set_input(**params)
module.run()
o0 = module.get_output(0)
o1 = module.get_output(1)
(numpy.abs((res_pt[0].cpu().numpy() - o0.asnumpy())).max(),
numpy.abs((res_pt[1].cpu().numpy() - o1.asnumpy())).max())
def x():
for i in range(100):
module.run()
ctx.sync()
x()
%timeit x()
tasks = tvm.autotvm.task.extract_from_program(mod_bert["main"], target=target, params=params)
tasks
log_filename = 'bert-tuning.stage1.log'
n_trial = 20 # for real tuning, make this 2000!
def do_tune(tasks, log_filename):
tmp_log_file = log_filename + ".tmp"
for i, tsk in enumerate(reversed(tasks)):
prefix = "[Task %2d/%2d] " %(i+1, len(tasks))
# we use threading and tornado here to work around TVM and Jupyter colliding over IOLoops
# In a regular python command line, you should be able to just call the tuner...
import threading
import tornado
# create tuner
tuner = tvm.autotvm.tuner.XGBTuner(tsk, loss_type='rank')
if os.path.isfile(tmp_log_file):
tuner.load_history(tvm.autotvm.record.load_from_file(tmp_log_file))
# do tuning
tsk_trial = min(n_trial, len(tsk.config_space))
def tune_task_fn():
iol = tornado.ioloop.IOLoop() # we need an event loop
tuner.tune(
n_trial=n_trial,
early_stopping=600,
measure_option=tvm.autotvm.measure_option(
builder=tvm.autotvm.LocalBuilder(timeout=10),
runner=tvm.autotvm.LocalRunner(number=20, repeat=3, timeout=4, min_repeat_ms=150)),
callbacks=[
tvm.autotvm.callback.progress_bar(tsk_trial, prefix=prefix),
tvm.autotvm.callback.log_to_file(tmp_log_file)
])
tuning_thread = threading.Thread(target=tune_task_fn) # create a thread start it and wait on it
tuning_thread.start()
tuning_thread.join()
# done tuning, on to the next task
# pick best records to a cache file
tvm.autotvm.record.pick_best(tmp_log_file, log_filename)
#do_tune(tasks, log_filename)
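# Tuning is left commented out here to save time; the cached results already in
# log_filename (from a previous tuning run) are applied below via
# tvm.autotvm.apply_history_best.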
tvm.relay.backend.compile_engine.get().clear()
with tvm.autotvm.apply_history_best(log_filename):
with tvm.transform.PassContext(opt_level=3):
graph, lib, params = tvm.relay.build(mod_bert,
target=target,
target_host=target_host,
params=params_bert)
module = tvm.contrib.graph_runtime.create(graph, lib, ctx)
module.set_input("input_ids", tt_a)
module.set_input("attention_mask", st_a)
module.set_input(**params)
module.run()
o0 = module.get_output(0)
o1 = module.get_output(1)
(numpy.abs((res_pt[0].cpu().numpy() - o0.asnumpy())).max(),
numpy.abs((res_pt[1].cpu().numpy() - o1.asnumpy())).max())
def x():
for i in range(100):
module.run()
ctx.sync()
x()
%timeit x()
model.cpu()
model.eval()
model.float()
for p in model.parameters():
p.requires_grad_(False)
class DebugWrap(torch.nn.Module):
def __init__(self, root, target_qn):
super().__init__()
self.root = (root,) # Hide from PyTorch
parent, = self.root
target_qn = target_qn.split('.')
self.target_basename = target_qn[-1]
for nc in target_qn[:-1]:
parent = getattr(parent, nc)
self.parent = (parent,)
target = getattr(parent, self.target_basename)
self.wrapped = target
setattr(parent, self.target_basename, self)
def remove(self):
parent, = self.parent
setattr(parent, self.target_basename, self.wrapped)
self.root = None
def forward(self, *inp, **kwinp):
assert self.root is not None
self.DEBUG_INP = inp
self.DEBUG_KWINP = kwinp
out = self.wrapped(*inp, **kwinp)
self.DEBUG_OUT = out
return out
try:
debug_wrap = DebugWrap(model, "encoder.layer.0.attention.self")
tt = tokens_tensor.cpu()
st = segments_tensors.cpu()
model(tt, st)
finally:
debug_wrap.remove()
inp = debug_wrap.DEBUG_INP[:2]
traced_module = torch.jit.trace(debug_wrap.wrapped, inp)
shape_list = [(i.debugName().split('.')[0], i.type().sizes()) for i in list(traced_module.graph.inputs())[1:]]
shape_list
mod, params = tvm.relay.frontend.pytorch.from_pytorch(traced_module, shape_list, default_dtype="float32")
import graphviz
def visualize(expr, collapse_small=True, node_attr_dict = {}):
def collect_ops(node):
ops = set()
def visitor(e):
if isinstance(e, tvm.ir.Op):
ops.add(e.name)
tvm.relay.analysis.post_order_visit(node, visitor)
return ops
# node_dict maps a Relay node to an index (node ID)
def _traverse_expr(node, node_dict):
if node in node_dict:
return
node_dict[node] = len(node_dict)
node_dict = {}
tvm.relay.analysis.post_order_visit(expr, lambda x: _traverse_expr(x, node_dict))
relayviz_nodes = []
dot = graphviz.Digraph(format='svg', )
dot.attr('node', shape = 'box')
    def to_str(node):
        if isinstance(node, tvm.relay.Constant):
            # note: str.lstrip strips a *character set*, not a prefix, so slice off the wrapper instead
            return repr(node)[len('Constant('):-1]
        else:
            raise NotImplementedError("to_str:" + repr(node))
def is_small_const(c):
if not (collapse_small and isinstance(c, tvm.relay.Constant)):
return False
if isinstance(c.data, tvm.runtime.ndarray.NDArray):
return numpy.prod(c.data.shape) < 10
return True
# Sort by node ID
for node, node_id in sorted(node_dict.items(), key=lambda x: x[1]):
if isinstance(node, tvm.relay.Function):
dot.node(str(node_id), 'Function', **node_attr_dict.get(node, {}))
dot.edge(str(node_dict[node.body]), str(node_id))
elif isinstance(node, tvm.relay.Var):
if node.type_annotation is not None:
if hasattr(node.type_annotation, 'shape'):
shape = tuple([int(x) for x in node.type_annotation.shape])
dtype = node.type_annotation.dtype
typstr = 'Tensor[{}, {}]'.format(shape, dtype)
else:
typstr = str(node.type_annotation)
else:
typstr = '?'
d = dict(shape = 'ellipse')
d.update(node_attr_dict.get(node, {}))
dot.node(str(node_id),
'{}: {}'.format(
node.name_hint, typstr
), **d)
elif isinstance(node, tvm.relay.Tuple):
dot.node(str(node_id), 'Tuple[...])', **node_attr_dict.get(node, {}))
for field in node.fields:
dot.edge(str(node_dict[field]), str(node_id))
elif isinstance(node, tvm.relay.Constant):
if not is_small_const(node): # small consts are shown in ops
dot.node(str(node_id), 'Constant({}, {})'.format(node.data.shape, node.data.dtype),
**node_attr_dict.get(node, {}))
elif isinstance(node, tvm.relay.Call):
args_with_edge = []
arg_str_list = []
for arg in node.args:
if is_small_const(arg):
arg_str_list.append(to_str(arg))
else:
arg_str_list.append('·')
args_with_edge.append(arg)
arg_str = ', '.join(arg_str_list)
if isinstance(node.op, tvm.ir.Op):
name = node.op.name
attrs = {k:getattr(node.attrs, k) for k in node.attrs.keys()} if hasattr(node.attrs, 'keys') else {}
#attrs = inspect.getmembers(node.attrs)
attr_str_list = [k+'='+(str(v) if len(str(v))<20 else "...") for k, v in attrs.items()]
if attr_str_list:
attr_str = '| '+ ', '.join(attr_str_list)
else:
attr_str = ''
else:
ops = collect_ops(node)
if ops:
name = '_'.join(ops)
else:
name = '...'
attr_str = ''
s = f'{name}({arg_str}{attr_str})'
dot.node(str(node_id), s, **node_attr_dict.get(node, {}))
for arg in args_with_edge:
dot.edge(str(node_dict[arg]), str(node_id))
elif isinstance(node, tvm.ir.Op):
# dot.node(str(node_id), 'Op {}'.format(node.name))
pass # covered in call
elif isinstance(node, tvm.relay.TupleGetItem):
dot.node(str(node_id), 'TupleGetItem(idx={})'.format(node.index), **node_attr_dict.get(node, {}))
dot.edge(str(node_dict[node.tuple_value]), str(node_id))
elif isinstance(node, tvm.relay.Let):
dot.node(str(node_id), 'Let(XX)', **node_attr_dict.get(node, {}))
dot.edge(str(node_dict[node.value]), str(node_id))
dot.edge(str(node_id), str(node_dict[node.var]))
else:
raise RuntimeError(
'Unknown node type. node_id: {}, node: {}'.format(node_id, type(node)))
return dot
visualize(mod['main'])
tvm.relay.backend.compile_engine.get().clear()
with tvm.autotvm.apply_history_best(log_filename):
with tvm.transform.PassContext(opt_level=3):
graph, lib, params = tvm.relay.build(mod,
target=target,
target_host=target_host,
params=params)
compiled_module = tvm.contrib.graph_runtime.create(graph, lib, ctx)
visualize(mod['main'])
inp_tvm = [tvm.nd.array(i.numpy(), ctx) for i in inp[:2]]
for (n, _), i in zip(shape_list, inp_tvm):
compiled_module.set_input(n, i)
compiled_module.set_input(**params)
compiled_module.run()
traced_module.cpu()
numpy.abs(compiled_module.get_output(0).asnumpy()-traced_module(*inp[:2])[0].numpy()).max()
traced_module.cuda()
inp_cuda = [i.cuda() for i in inp[:2]]
def x():
for i in range(100):
traced_module(*inp_cuda)
torch.cuda.synchronize()
x()
%timeit x()
def y():
for i in range(100):
compiled_module.run()
ctx.sync()
y()
%timeit y()
mod, params = tvm.relay.frontend.pytorch.from_pytorch(traced_module, shape_list, default_dtype="float32")
new_mod = tvm.relay.transform.EliminateCommonSubexpr()(mod)
visualize(new_mod['main'])
class ShapeConstDedupMutator(tvm.relay.ExprMutator):
def __init__(self):
super().__init__()
self.shape_consts = {}
def visit_call(self, call):
if (isinstance(call.op, tvm.ir.Op) and call.op.name == "reshape"
and (len(call.args) == 1 or isinstance(call.args[1], tvm.relay.Constant))):
if len(call.args) > 1:
assert list(call.attrs.newshape) == list(call.args[1].data.asnumpy())
new_fn = self.visit(call.op)
new_args = [self.visit(arg) for arg in call.args]
return tvm.relay.Call(new_fn, new_args[:1], call.attrs)
return super().visit_call(call)
@tvm.relay.transform.function_pass(opt_level=1)
def ShapeConstDedup(fn, mod, ctx):
return ShapeConstDedupMutator().visit(fn)
new_mod = ShapeConstDedup(new_mod)
new_mod = tvm.relay.transform.EliminateCommonSubexpr()(new_mod)
visualize(new_mod["main"])
BindPass = tvm.relay.transform.function_pass(lambda fn, new_mod, ctx:
tvm.relay.build_module.bind_params_by_name(fn, params),
opt_level=1)
new_mod = BindPass(new_mod)
visualize(new_mod["main"])
new_mod = tvm.relay.transform.FoldConstant()(new_mod)
visualize(new_mod["main"])
new_mod = tvm.relay.transform.CombineParallelBatchMatmul()(new_mod)
new_mod = tvm.relay.transform.FoldConstant()(new_mod)
visualize(new_mod["main"])
tvm.relay.backend.compile_engine.get().clear()
with tvm.autotvm.apply_history_best(log_filename):
with tvm.transform.PassContext(opt_level=3):
graph, lib, params = tvm.relay.build(new_mod,
target=target,
target_host=target_host,
params=params)
compiled_module = tvm.contrib.graph_runtime.create(graph, lib, ctx)
for (n, _), i in zip(shape_list, inp_tvm):
compiled_module.set_input(n, i)
compiled_module.set_input(**params)
compiled_module.run()
traced_module.cpu()
numpy.abs(compiled_module.get_output(0).asnumpy()-traced_module(*inp[:2])[0].numpy()).max()
def y():
for i in range(100):
compiled_module.run()
ctx.sync()
y()
%timeit y()
tasks = tvm.autotvm.task.extract_from_program(new_mod["main"], target=target, params=params)
tasks
log_filename = 'bert-tuning.stage2.log'
#do_tune(tasks, log_filename)
tvm.relay.backend.compile_engine.get().clear()
target = 'rocm -model=gfx906'
target_host = 'llvm'
ctx = tvm.context(target)
with tvm.autotvm.apply_history_best(log_filename):
with tvm.transform.PassContext(opt_level=3):
graph, lib, params = tvm.relay.build(new_mod,
target=target,
target_host=target_host,
params=params)
compiled_module = tvm.contrib.graph_runtime.create(graph, lib, ctx)
for (n, _), i in zip(shape_list, inp_tvm):
compiled_module.set_input(n, i)
compiled_module.set_input(**params)
compiled_module.run()
traced_module.cpu()
numpy.abs(compiled_module.get_output(0).asnumpy()-traced_module(*inp[:2])[0].numpy()).max()
def y():
for i in range(100):
compiled_module.run()
ctx.sync()
y()
%timeit y()
def run_passes(mod, params):
#new_mod = ShapeConstDedup(mod)
new_mod = mod
new_mod = tvm.relay.transform.EliminateCommonSubexpr()(new_mod)
BindPass = tvm.relay.transform.function_pass(lambda fn, new_mod, ctx:
tvm.relay.build_module.bind_params_by_name(fn, params),
opt_level=1)
new_mod = BindPass(new_mod)
new_mod = tvm.relay.transform.FoldConstant()(new_mod)
new_mod = tvm.relay.transform.CombineParallelBatchMatmul()(new_mod)
new_mod = tvm.relay.transform.FoldConstant()(new_mod)
new_mod = tvm.relay.transform.SimplifyInference()(new_mod) # remove dropout
return new_mod
shape_list = [(i.debugName().split('.')[0], i.type().sizes()) for i in list(traced_model.graph.inputs())[1:]]
shape_list
new_mod = run_passes(mod_bert, params_bert)
log_filename = './bert-tuning.full.log'
tasks = tvm.autotvm.task.extract_from_program(new_mod["main"], target=target, params=params)
print(tasks)
#do_tune(tasks, log_filename)
tvm.relay.backend.compile_engine.get().clear()
with tvm.autotvm.apply_history_best(log_filename):
with tvm.transform.PassContext(opt_level=3):
graph, lib, params = tvm.relay.build(new_mod,
target=target,
target_host=target_host,
params=params)
module = tvm.contrib.graph_runtime.create(graph, lib, ctx)
module.set_input("input_ids", tt_a)
module.set_input("attention_mask", st_a)
module.set_input(**params)
module.run()
o0 = module.get_output(0)
o1 = module.get_output(1)
(numpy.abs((res_pt[0].cpu().numpy() - o0.asnumpy())).max(),
numpy.abs((res_pt[1].cpu().numpy() - o1.asnumpy())).max())
def x():
for i in range(100):
module.run()
ctx.sync()
x()
%timeit x()
inp_double = [i.to(torch.double) for i in debug_wrap.DEBUG_INP[:2]]
debug_wrap.wrapped.to(device="cpu", dtype=torch.double)
traced_module = torch.jit.trace(debug_wrap.wrapped, inp_double).to(dtype=torch.double)
# debug_wrap.wrapped.to(device="cpu", dtype=torch.float) -- careful, this will also modify the traced module's parameterS?!
pt_out_double = traced_module(*inp_double)
shape_list = [(i.debugName().split('.')[0], i.type().sizes()) for i in list(traced_module.graph.inputs())[1:]]
mod, params = tvm.relay.frontend.pytorch.from_pytorch(traced_module, shape_list, default_dtype="float64")
tvm.relay.backend.compile_engine.get().clear()
with tvm.transform.PassContext(opt_level=3):
graph, lib, params = tvm.relay.build(mod,
target=target,
target_host=target_host,
params=params)
compiled_module = tvm.contrib.graph_runtime.create(graph, lib, ctx)
for (n, _), i in zip(shape_list, inp_double):
compiled_module.set_input(n, tvm.nd.array(i.numpy(), ctx=ctx))
compiled_module.set_input(**params)
compiled_module.run()
numpy.abs(compiled_module.get_output(0).asnumpy()-pt_out_double[0].numpy()).max()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Helpfully, transformers supports tracing its models with the PyTorch JIT. We use their tutorial on it; the following is copied straight from the tutorial
Step2: Now we can trace our model. As we want to do inference, we put the model in evaluation mode and disable gradients for the parameters.
Step3: Let us try running our traced model on the GPU
Step4: It worked, but is it fast? Let's run it 100 times and see.
Step5: Around 0.65-0.7 seconds for 100 runs means 6.5-7ms per run. That's not too bad.
Step6: That went well! (Be sure to use the TVM model from my git branch.) We can now build and run it. Building follows the standard TVM recipe.
Step7: Uh oh, TVM warns that this "may bring great performance regression". Let's see. We run the module
Step8: Looks good. Remember that we're computing in float32, so $10^{-6}$ish is a good result. Now that we know it gets the correct result, let us see what the speed is
Step9: Ouch, 65ms per run of the model. That's slow indeed. But the warning said that it was because it could not find (tuned) configurations. Let us then tune the tasks.
Step10: OK, so these are the tasks that we need to be able to perform fast.
Step11: After this, we can again build the model, this time with the new configuration. This time we should see no comments about missing configurations.
Step12: Let's see if the speed improved
Step13: Now it's in the region of 6.5-7ms per run. That's similar to PyTorch. This is what we get from this very elementary optimization of our operators. We can push it a little further, though.
Step14: Now we can define a little wrapper module that just saves inputs and outputs of the wrapped module.
Step15: Now we can apply it to our layer. Note that the indexing into the module list works via a string getattr.
Step16: It turns out this wasn't the module that had kwargs. But now you have something you can also wrap around the encoder. We need the first two positional parameters.
Step17: Just like before, we convert to TVM and run it.
Step18: To look at the TVM module, we define a little visualization helper (loosely based on TVM PR#4370).
Step19: Let's run that on our main function. For some reason (well, to be fully general, probably) the PyTorch converter will convert Linear layers to batch_matmul rather than just dense. We'll get back to this in a bit. As TVM's batch_matmul has the contraction axis last on both operands (unlike PyTorch), there are quite a few transpose operations, too.
Step20: In addition to our named inputs, we see a number of unnamed (numbered) variables. These are the neural network parameters.
Step21: One curious thing is that compiling the model will change it in-place. (Which is a bug we hope to fix.) As we see in the figure below, all the parameter variables have become constants. This is done in-place, but the subsequent optimization steps are done out-of-place, i.e. not reflected in our copy of the module.
Step22: Just like the full model, we can run and time our submodule. Let's first check accuracy.
Step23: And now the timing.
Step24: The back of the envelope calculation here is that with PyTorch we're spending about 0.2ms in this layer, so about 2.4ms on 12 layers - a sizeable part of the 6-7ms overall runtime. Let's compare to TVM.
Step25: So here we are also roughly on par with PyTorch.
Step26: The problem - not apparent from the picture because I merged the small shape tensors into the reshape - is that the three shape tensor inputs to reshape are actually distinct.
Step27: Ha, now the reshapes have been fused and the three matrix multiplications have a common argument. But the parameters are then still reshaped and transposed. Can we get rid of that, too?
Step28: With the FoldConstant pass, we can propagate the constants through the transposes and reshapes to move them closer to the matmuls.
Step29: And now comes an interesting trick. It is more efficient to merge the three batch matmuls with the same input into a single batch_matmul. We implemented a pass doing this in TVM PR 5791. So let's call it and also have another constant-folding pass.
Step30: Awesome. Let's run it and see whether we still get the same result.
Step31: Now it works, but it's slow again. Oh yeah, that's because we got the missing configuration warnings. So let's get back to tuning.
Step32: So we went from about 0.2ms to about 0.13-0.15ms, a nice speedup. By our handwavy calculation, this should cut 0.6-0.8ms from the total runtime, or somewhere between 5%-10%. Let's check.
Step33: So yay, we went from 6.5-7ms in PyTorch to ~6.2ms in TVM. This is a 5%-10% speedup. Note that we have only been looking at a particular, not very large shape. A more serious analysis would consider more problem shapes.
Step34: Running the module and comparing to PyTorch should now show a deviation of only around 1e-14.
|
8,721
|
<ASSISTANT_TASK:>
Python Code:
import csv

from scipy.spatial import distance
from sklearn.cluster import KMeans, DBSCAN

data = list(csv.DictReader(open('data/columbia_crime.csv', 'r').readlines()))
# This part just splits out the latitude and longitude coordinate fields for each incident, which we need for mapping.
coords = [(float(d['lat']), float(d['lng'])) for d in data if len(d['lat']) > 0]
print(coords[:10])
# And this creates a matching array of incident types, filtered the same way
# as coords so the two arrays stay aligned
types = [d['ExtNatureDisplayName'] for d in data if len(d['lat']) > 0]
print(types[:10])
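# NOTE: `clusters_to_csv` is used below but not defined in this excerpt.
# A minimal sketch of what it plausibly does, based on the
# cluster_id,incident_type,lat,lng output format described in the prompt:
def clusters_to_csv(labels, types, coords):
    for label, incident_type, (lat, lng) in zip(labels, types, coords):
        print('%s,%s,%s,%s' % (label, incident_type, lat, lng))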
number_of_clusters = 3
kmeans = KMeans(n_clusters=number_of_clusters)
kmeans.fit(coords)
clusters_to_csv(kmeans.labels_, types, coords)
number_of_clusters = 10
kmeans = KMeans(n_clusters=number_of_clusters)
kmeans.fit(coords)
clusters_to_csv(kmeans.labels_, types, coords)
# We're dealing in unprojected coordinates, so this basically refers to a fraction of a degree of lat/lng.
EPS = 0.02
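# (For scale: 0.02 degrees of latitude is roughly 2.2 km, since one degree of
# latitude is about 111 km. East-west spacing per degree shrinks with latitude,
# which is one drawback of measuring distance in raw lat/lng degrees.)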
distance_matrix = distance.squareform(distance.pdist(coords))
print(distance_matrix.shape)
print(distance_matrix)
# Fit DBSCAN in the same way we fit K-Means, using the EPS parameter and distance matrix established above
db = DBSCAN(eps=EPS, metric='precomputed')  # tell DBSCAN the matrix holds precomputed distances, so EPS is in degrees
db.fit(distance_matrix)
# Now print the results
clusters_to_csv(db.labels_, types, coords)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: K-means clustering
Step2: The data comes out in the format of cluster_id,incident_type,lat,lng. If we save it to a csv file, we can load it into Google's simple map viewer tool to see how it looks.
Step3: These clusters are arguably more useful, but it's also clear that k-means might not be the best tool for figuring out our density-based crime clusters. Let's try another approach.
Step4: DBSCAN requires a pre-processing step that K-Means doesn't
Step5: Each entry in the matrix shows how far each of our 1,995 points is from each of the other points in the dataset, in units of latitude/longitude degrees. The distances between points are key for DBSCAN to compute densities. But this also exposes one of its weaknesses
|
8,722
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from numpy import *
Eq = np.array([[1, 1, -1, 9],[0, 1, 3, 3],[-1, 0, -2, 2]])
A = Eq[:,0:3] # coefficient matrix A
b = Eq[:,3]   # right-hand sides: 9, 3, 2
# The solutions are: [0.666666666666667, 7.0, -1.3333333333333333]
U,s,V = linalg.svd(A) # SVD decomposition of A (note: numpy returns the third factor as V^T)
# inverse via pinv
pinv = linalg.pinv(A)
# inverse via the SVD decomposition
pinv_svd = dot(dot(V.T,linalg.inv(diag(s))),U.T)
xPinv = dot(pinv_svd, b) # solving Ax=b with x = A^-1 * b
xPinv.T
# Solving Ax=b using the SVD factors directly
c = dot(U.T,b)               # c = U^T * b
w = linalg.solve(diag(s),c)  # w = S^-1 * c
xSVD = dot(V.T,w)            # x = V * w
xSVD.T
Eq = np.array([[1, 1, 4],[0, 0, 0]])
A = Eq[:,0:2] # coefficient matrix A
b = Eq[:,2]   # right-hand sides: 4, 0
U,s,V = linalg.svd(A)
c = dot(U.T,b)               # c = U^T * b
w = linalg.solve(diag(s),c)  # w = S^-1 * c  (raises LinAlgError here: diag(s) is singular)
xSVD = dot(V.T,w)            # x = V * w
xSVD.T
Eq = np.array([[1, 1, 44],[0, 1e-32, 5]])
A = Eq[:,0:2] # coefficient matrix A
b = Eq[:,2]   # right-hand sides: 44, 5
U,s,V = linalg.svd(A)
pinv_svd = dot(dot(V.T,linalg.inv(diag(s))),U.T)
xPinv = dot(pinv_svd, b) # solving Ax=b with x = A^-1 * b
xPinv.T
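# Note: one singular value of A here is ~1e-32, so inverting diag(s) directly
# blows that component up and the "solution" above is numerically meaningless.
# numpy.linalg.pinv avoids this by truncating singular values below a relative
# cutoff (its rcond parameter) instead of inverting them.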
Eq = np.array([[1, 1, 0],[0, 0, 1]])
A = Eq[:,0:2] # coefficient matrix A
b = Eq[:,2]   # right-hand sides: 0, 1
U,s,V = linalg.svd(A)
pinv_svd = dot(dot(V.T,linalg.inv(diag(s))),U.T)
xPinv = dot(pinv_svd, b) # solving Ax=b with x = A^-1 * b
xPinv.T
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
sat = pd.read_csv("study_vs_sat.csv")
sat.insert(0, "col_unos", 1)  # "col_unos" = column of ones (the intercept term)
sat
X = sat[['col_unos', 'study_hours']]
y = sat[['sat_score']]
Th = np.dot(np.dot(linalg.inv(np.dot(X.T,X)),X.T),y)
Th
sat.insert(2, "hipotesis", np.dot(X,Th))  # "hipotesis" = hypothesis, the fitted values X @ Th
sat
import matplotlib.pyplot as plt
import matplotlib
from matplotlib import interactive
from sklearn.preprocessing import MinMaxScaler
interactive(True)
matplotlib.style.use('ggplot')
scaler = MinMaxScaler()
# scale all three columns to [0, 1] so they can be compared on a single plot
sat[['study_hours', 'sat_score', 'hipotesis']] = scaler.fit_transform(sat[['study_hours', 'sat_score', 'hipotesis']])
sat.plot()
sat
# m is the number of rows ("descensoGradiente" = gradient descent)
def descensoGradiente(X, y, theta, alpha, m, numIterations):
xTrans = X.transpose()
for i in range(0, numIterations):
hypothesis = np.dot(X, theta)
loss = hypothesis - y
# avg cost per example (the 2 in 2*m doesn't really matter here.
# But to be consistent with the gradient, I include it)
cost = np.sum(loss ** 2) / (2 * m)
#print("Iteration %d | Cost: %f" % (i, cost))
# avg gradient per example
gradient = np.dot(xTrans, loss) / m
# update
theta = theta - alpha * gradient
return theta
m, n = np.shape(X)
numIterations= 10000
alpha = 0.003
theta = np.ones(n)
theta = descensoGradiente(X, y['sat_score'], theta, alpha, m, numIterations)
print(theta)
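# Illustrative comparison (assumes X and y above still hold the original,
# unscaled data -- the pandas selections returned copies, so the MinMaxScaler
# cell did not modify them): with a suitable learning rate and enough
# iterations, gradient descent should approach the normal-equation solution Th.
print("normal equation: ", np.ravel(Th))
print("gradient descent:", theta)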
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Solving the case A=[[1,1],[0,0]]
Step2: The previous experiment raises an error because the matrix is singular; now we try A=[[1,1],[0,1e-32]]
Step3: Trying with b in the image of A
Step4: We see that this does not work either, since it raises the "singular matrix" error.
Step5: Let us define the <b>linear minimization</b> function as
Step6: Therefore <b>θ</b> is an optimal straight-line solution with intercept <b>α</b> = <b>353.16</b> and slope <b>β</b> = <b>25.326</b>, so that
Step7: Gradient descent
|
8,723
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
plt.xticks?
def plot_sin1(a,b):
x=np.linspace(0,4*np.pi,300)
plt.figure(figsize=(12,5))
plt.plot(x,np.sin((a*x)+b),'g-')
plt.xlim(0,4*np.pi)
plt.tick_params(direction='out')
plt.xticks([np.pi,2*np.pi,3*np.pi,4*np.pi])
plot_sin1(5., 3.4)
interact(plot_sin1,a=(0,5.0),b=(-5.0,5.0))
assert True # leave this for grading the plot_sine1 exercise
def plot_sine2(a,b,style):
x=np.linspace(0,4*np.pi,300)
plt.figure(figsize=(12,5))
plt.plot(x,np.sin((a*x)+b),style)
plt.xlim(0,4*np.pi)
plt.tick_params(direction='out')
plt.xticks([np.pi,2*np.pi,3*np.pi,4*np.pi])
plot_sine2(4.0, -1.0, 'r--')
interact(plot_sine2,a=(0,5.0),b=(-5.0,5.0),style=('b.','ko','r^'))
assert True # leave this for grading the plot_sine2 exercise
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plotting with parameters
Step2: Then use interact to create a user interface for exploring your function
Step3: In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument
Step4: Use interact to create a UI for plot_sine2.
|
8,724
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
def readucr(filename):
data = np.loadtxt(filename, delimiter="\t")
y = data[:, 0]
x = data[:, 1:]
return x, y.astype(int)
root_url = "https://raw.githubusercontent.com/hfawaz/cd-diagram/master/FordA/"
x_train, y_train = readucr(root_url + "FordA_TRAIN.tsv")
x_test, y_test = readucr(root_url + "FordA_TEST.tsv")
x_train = x_train.reshape((x_train.shape[0], x_train.shape[1], 1))
x_test = x_test.reshape((x_test.shape[0], x_test.shape[1], 1))
n_classes = len(np.unique(y_train))
idx = np.random.permutation(len(x_train))
x_train = x_train[idx]
y_train = y_train[idx]
y_train[y_train == -1] = 0
y_test[y_test == -1] = 0
from tensorflow import keras
from tensorflow.keras import layers
def transformer_encoder(inputs, head_size, num_heads, ff_dim, dropout=0):
# Normalization and Attention
x = layers.LayerNormalization(epsilon=1e-6)(inputs)
x = layers.MultiHeadAttention(
key_dim=head_size, num_heads=num_heads, dropout=dropout
)(x, x)
x = layers.Dropout(dropout)(x)
res = x + inputs
# Feed Forward Part
x = layers.LayerNormalization(epsilon=1e-6)(res)
x = layers.Conv1D(filters=ff_dim, kernel_size=1, activation="relu")(x)
x = layers.Dropout(dropout)(x)
x = layers.Conv1D(filters=inputs.shape[-1], kernel_size=1)(x)
return x + res
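# Quick illustrative check (not part of the original notebook): a single encoder
# block should preserve the (batch, timesteps, channels) shape of its input.
_probe = keras.Input(shape=(500, 1))
print(transformer_encoder(_probe, head_size=256, num_heads=4, ff_dim=4).shape)  # (None, 500, 1)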
def build_model(
input_shape,
head_size,
num_heads,
ff_dim,
num_transformer_blocks,
mlp_units,
dropout=0,
mlp_dropout=0,
):
inputs = keras.Input(shape=input_shape)
x = inputs
for _ in range(num_transformer_blocks):
x = transformer_encoder(x, head_size, num_heads, ff_dim, dropout)
x = layers.GlobalAveragePooling1D(data_format="channels_first")(x)
for dim in mlp_units:
x = layers.Dense(dim, activation="relu")(x)
x = layers.Dropout(mlp_dropout)(x)
outputs = layers.Dense(n_classes, activation="softmax")(x)
return keras.Model(inputs, outputs)
input_shape = x_train.shape[1:]
model = build_model(
input_shape,
head_size=256,
num_heads=4,
ff_dim=4,
num_transformer_blocks=4,
mlp_units=[128],
mlp_dropout=0.4,
dropout=0.25,
)
model.compile(
loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.Adam(learning_rate=1e-4),
metrics=["sparse_categorical_accuracy"],
)
model.summary()
callbacks = [keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)]
model.fit(
x_train,
y_train,
validation_split=0.2,
epochs=200,
batch_size=64,
callbacks=callbacks,
)
model.evaluate(x_test, y_test, verbose=1)
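# Illustrative extra step (uses the model and x_test defined above): turn the softmax
# outputs into hard class labels for the first few test windows.
pred_labels = model.predict(x_test).argmax(axis=-1)
print(pred_labels[:10])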
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Build the model
Step2: We include residual connections, layer normalization, and dropout.
Step3: The main part of our model is now complete. We can stack multiple of those
Step4: Train and evaluate
|
8,725
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
import io
# downloading R may take a few minutes (80 MB)
try:
import urllib.request as urllib2 # Python 3
except:
import urllib2 # Python 2
# specify R binary and (md5, sha1) hash
# R-3.4.3:
r_url = "https://cran.r-project.org/bin/windows/base/R-3.4.3-win.exe"
hashes=("0ff087acbae677d7255af19b0a9df27f","aabf0b671ae1dca741c3df9dee976a7d4b584f80")
# specify target location
r_installer = os.environ["WINPYDIR"]+"\\..\\tools\\"+os.path.basename(r_url)
os.environ["r_installer"] = r_installer
# Download
g = urllib2.urlopen(r_url)
with io.open(r_installer, 'wb') as f:
f.write(g.read())
g.close()
g = None
#checking it's there
!dir %r_installer%
# checking it's the official R
import hashlib
def give_hash(of_file, with_this):
with io.open(of_file, 'rb') as f:
return with_this(f.read()).hexdigest()
print (" "*12+"MD5"+" "*(32-12-3)+" "+" "*15+"SHA-1"+" "*(40-15-5)+"\n"+"-"*32+" "+"-"*40)
print ("%s %s %s" % (give_hash(r_installer, hashlib.md5) , give_hash(r_installer, hashlib.sha1),r_installer))
if give_hash(r_installer, hashlib.md5) == hashes[0] and give_hash(r_installer, hashlib.sha1) == hashes[1]:
print("looks good!")
else:
print("problem ! please check")
assert give_hash(r_installer, hashlib.md5) == hashes[0]
assert give_hash(r_installer, hashlib.sha1) == hashes[1]
# preparing Dos variables
os.environ["R_HOME"] = os.environ["WINPYDIR"]+ "\\..\\tools\\R\\"
os.environ["R_HOMEbin"]=os.environ["R_HOME"] + "bin"
# for installation we need this
os.environ["tmp_Rbase"]=os.path.join(os.path.split(os.environ["WINPYDIR"])[0] , 'tools','R' )
if 'amd64' in sys.version.lower():
r_comp ='/COMPONENTS="main,x64,translations'
else:
r_comp ='/COMPONENTS="main,i386,translations'
os.environ["tmp_R_comp"]=r_comp
# let's install it, if hashes do match
assert give_hash(r_installer, hashlib.md5) == hashes[0]
assert give_hash(r_installer, hashlib.sha1) == hashes[1]
# If you are "USB life style", or multi-winpython
# ==> CLICK the OPTION "Don't create a StartMenuFolder' <== (when it will show up)
!start cmd /C %r_installer% /DIR=%tmp_Rbase% %tmp_R_comp%
import os
import sys
import io
# let's create a R launcher
r_launcher = r"""
@echo off
call %~dp0env.bat
rscript %*
"""
r_launcher_bat = os.environ["WINPYDIR"]+"\\..\\scripts\\R_launcher.bat"
# let's create a R init script
# in manual command line, you can use repos = c('http://irkernel.github.io/', getOption('repos'))
r_initialization = r"""
install.packages(c('repr', 'IRdisplay', 'stringr', 'crayon', 'pbdZMQ', 'devtools'), repos = c('http://cran.rstudio.com/', 'http://cran.rstudio.com/'))
devtools::install_github('IRkernel/IRkernel')
library('pbdZMQ')
library('repr')
library('IRkernel')
library('IRdisplay')
library('crayon')
library('stringr')
IRkernel::installspec()
"""
r_initialization_r = os.path.normpath(os.environ["WINPYDIR"]+"\\..\\scripts\\R_initialization.r")
for i in [(r_launcher,r_launcher_bat), (r_initialization, r_initialization_r)]:
with io.open(i[1], 'w', encoding = sys.getdefaultencoding() ) as f:
for line in i[0].splitlines():
f.write('%s\n' % line )
#check what we are going to do
print ("!start cmd /C %WINPYDIR%\\..\\scripts\\R_launcher.bat --no-restore --no-save " + r_initialization_r)
# Launch Rkernel setup
os.environ["r_initialization_r"] = r_initialization_r
!start cmd /C %WINPYDIR%\\..\\scripts\\R_launcher.bat --no-restore --no-save %r_initialization_r%
# make RKernel a movable installation with the rest of WinPython
from winpython import utils
base_winpython = os.path.dirname(os.path.normpath(os.environ["WINPYDIR"]))
rkernel_json=(base_winpython+"\\settings\\kernels\\ir\\kernel.json")
# so we get "argv": ["{prefix}/../tools/R/bin/x64/R"
utils.patch_sourcefile(rkernel_json, base_winpython.replace("\\","/"), r'{prefix}/..', silent_mode=False)
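# Optional sanity check (a sketch, assuming the standard Jupyter kernel-spec layout
# of kernel.json): confirm the patched spec now reaches R via the relative {prefix} path.
import json
with open(rkernel_json, "r") as f:
    print(json.load(f)["argv"][0])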
%load_ext rpy2.ipython
#vitals: 'dplyr', 'R.utils', 'nycflights13'
# installation takes 2 minutes
%R install.packages(c('dplyr','R.utils', 'nycflights13'), repos='http://cran.rstudio.com/')
%load_ext rpy2.ipython
%%R
library('dplyr')
library('nycflights13')
write.csv(flights, "flights.csv")
%R head(flights)
%R airports %>% mutate(dest = faa) %>% semi_join(flights) %>% head
# essentials: 'tidyr', 'shiny', 'ggplot2', 'caret' , 'nnet'
# remaining of Hadley Wickahm "stack" (https://github.com/rstudio)
%R install.packages(c('tidyr', 'ggplot2', 'shiny','caret' , 'nnet'), repos='https://cran.rstudio.com/')
%R install.packages(c('knitr', 'purrr', 'readr', 'readxl'), repos='https://cran.rstudio.com/')
%R install.packages(c('rvest', 'lubridate', 'ggvis', 'readr','base64enc'), repos='https://cran.rstudio.com/')
# TRAINING = online training book http://r4ds.had.co.nz/ (or https://github.com/hadley/r4ds)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2 - checking and Installing R binary in the right place
Step4: During Installation (if you want to move the R installation afterwards)
Step5: 4- Install an R package via an IPython Kernel
Step6: 5- Small demo via R magic
Step7: 6 - Installing the very best of R packages (optional, you will start to get a really big directory)
|
8,726
|
<ASSISTANT_TASK:>
Python Code:
n = int(input())
for i in range(1,n + 1, 3):
print(i)
n = int(input())
for i in range(n, 0, -1):
print(i)
n = int(input())
for i in range(0, n + 1, 1):
print(pow(2,i))
n = int(input())
for i in range(0, n + 1, 1):
if i % 2 == 0:
print(pow(2,i))
n = int(input())
result = 1
while result <= n:
print(result)
result = result * 2 + 1
n = int(input('Еnter a number in the range [1...100]: '))
while 1 > n or n > 100:
print("Invalid number!")
n = int(input('Еnter a number in the range [1...100]: '))
print("The number is: "+ str(n))
a = int(input())
b = int(input())
while b != 0:
save = a % b
a = b
b = save
print(a)
n = int(input())
result = 1
for i in range(1, n + 1):
result *= i
print(result)
n = input()
sum = 0
for i in n:
sum += int(i)
print(sum)
import math
n = int(input())
is_prime = True
if n < 2:
print('Not Prime')
else:
end = int(math.sqrt(n))
for i in range(2, end + 1):
if n % i == 0:
is_prime = False
if is_prime:
print('Prime')
else:
print('Not Prime')
n = int(input('Enter even number: '))
while n % 2 != 0:
print('Invalid number!')
n = int(input('Enter even number: '))
print('Even number entered: ' + str(n))
n = int(input())
a = 1
b = 2
for i in range(1, n):
new_b = a + b
a = b
b = new_b
print(a)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h2>02. Numbers N...1</h2>
Step2: <h2>03. Powers of Two</h2>
Step3: <h2>04. Even Powers of 2</h2>
Step4: <h2>05. Sequence 2k+1</h2>
Step5: <h2>06.Number in Range [1...100]</h2>
Step6: <h2>07.Greatest Common Divisor (CGD)</h2>
Step7: <h2>08.Factorial</h2>
Step8: <h2>09. Sum Digits</h2>
Step9: <h2>10.Check Prime</h2>
Step10: <h2>11.Enter Even Number</h2>
Step11: <h2>12.Fibonacci</h2>
|
8,727
|
<ASSISTANT_TASK:>
Python Code:
import networkx as nx
g = nx.Graph()
g.add_node(1)
g.add_node(2)
g.add_node(3)
g.add_node(4)
#oder schneller
g.add_nodes_from([1,2,3,4])
#Hinzufügen von Kanten
g.add_edge(1,2)
g.add_edge(1,3)
g.add_edge(1,4)
#oder schneller
g.add_edges_from([(1,2),(1,3),(1,4)])
nr_nodes = len(g.nodes())
print("Graph hat " + str(nr_nodes) + " Knoten.")
nr_edges = len(g.edges())
print("Graph hat " + str(nr_edges) + " Kanten.")
import matplotlib.pyplot as plt
nx.draw(g)
plt.show()
nx.draw_networkx(g)
plt.show()
g = nx.Graph()
g.add_nodes_from(["Goethe", "Schiller", "Humboldt", "Zelter"])
g.add_edges_from([("Goethe","Schiller"),("Goethe","Humboldt"),("Goethe","Zelter")])
nx.draw_networkx(g)
plt.show()
g = nx.Graph()
g.add_nodes_from(["Goethe", "Schiller", "Humboldt", "Zelter"])
g.add_edges_from([("Goethe","Schiller", dict(weight=12)),("Goethe","Humboldt", dict(weight=1)),("Goethe","Zelter", dict(weight=3))])
#auch hier gibt es eine convenience function, die die Eingabe vereinfacht:
g.add_weighted_edges_from([("Goethe","Schiller",12),("Goethe","Humboldt", 1),("Goethe","Zelter", 3)])
nx.draw_networkx(g)
plt.show()
g.remove_edge("Goethe","Zelter")
g.remove_node("Zelter")
nx.draw_networkx(g)
plt.show()
import matplotlib.pyplot as plt
dg=nx.DiGraph()
dg.add_edge(1,2)
dg.add_edge(2,3)
dg.add_edge(3,1)
nx.draw_networkx(dg)
plt.show()
mg = nx.MultiGraph()
#ab hier werden die gleichen Methoden verwendet:
mg.add_edge(1,2)
mg.add_edge(1,2)
mg.add_edge(2,3)
mg.add_edge(3,1)
nx.draw_networkx(mg)
plt.show()
import networkx as nx
g = nx.Graph()
g.add_edges_from([("A","B"),("A","C"),("A","D"),("D","E")])
g.neighbors("A")
g["A"]
nx.is_connected(g)
import matplotlib.pyplot as plt
g.remove_node("A")
nx.draw_networkx(g)
plt.show()
nx.is_connected(g)
#einen Graphen erstellen
import networkx as nx
g = nx.Graph()
g.add_edges_from([("A","B"),("A","C"),("A","D"),("C","D"),("D","E")])
#suchen des kürzesten Pfads von B nach E:
nx.shortest_path(g, "B","E")
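# Related sketch: the geodesic distance itself is just the edge count of that path.
print(nx.shortest_path_length(g, "B", "E"))  # 3 edges: B - A - D - E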
#hier verwenden wir die Funktion von networkx
nx.degree_centrality(g)
#Funktion in networkx:
nx.closeness_centrality(g)
nx.betweenness_centrality(g)
import networkx as nx
g = nx.Graph()
g.add_edges_from([("A","B"),("A","C"),("A","D"),("C","D"),("D","E"),("E","F")])
nx.draw_networkx(g)
plt.show()
print(g.nodes())
g.adjacency_list()
nx.write_adjlist(g, "file.txt")
x = nx.read_adjlist("file.txt")
print(x.edges())
nx.write_edgelist(g, "file2.txt")
with open("file2.txt", encoding="utf8") as fin:
for l in fin:
print(l, end="")
g = nx.Graph()
g.add_weighted_edges_from([("Goethe","Schiller",12),("Goethe","Humboldt", 1),("Goethe","Zelter", 3)])
nx.write_gexf(g, "file3.txt")
with open("file3.txt", encoding="utf8") as fin:
print(fin.read(-1))
import matplotlib.pyplot as plt
import networkx as nx
g = nx.krackhardt_kite_graph()
#plotting starts here
nx.draw_networkx(g)
plt.show()
#spectral layout
pos = nx.spectral_layout(g)
nx.draw_networkx(g, pos)
plt.show()
#shell layout
pos = nx.shell_layout(g)
nx.draw_networkx(g, pos)
plt.show()
#pos enthält ein dictionary mit den Positionen der Knoten und damit implizit auch der Kanten.
pos
nx.draw_networkx_nodes(g, pos, [9], node_color="b")
plt.show()
#jetzt müssen wir auch hier die Positionen übergeben, sonst werden diese noch einmal berechnet
pos = nx.spring_layout(g)
nx.draw_networkx(g, pos)
#Sie müssen die Reihenfolge beachten. Die Zeichenbefehle werden nacheinander ausgeführt.
nx.draw_networkx_nodes(g, pos, [9], node_color="b")
plt.show()
nx.draw_networkx(g, pos)
nx.draw_networkx_edges(g, pos, [(7,8),(8,9)], width=3, edge_color="g", style="dashed")
plt.show()
nx.draw_networkx(g, pos, node_color="yellow")
nx.draw_networkx_nodes(g, pos, [9], node_color="orange")
nx.draw_networkx_nodes(g, pos, [8], node_color="b")
nx.draw_networkx_edges(g, pos, [(8,9)], width=3, edge_color="g", style="dashed")
nx.draw_networkx_edges(g, pos, [(7,8)], width=3, edge_color="b")
plt.show()
#wenn Sie alle Labels setzen wollen, können Sie das auch direkt in der Funktion draw_networkx machen
nx.draw_networkx(g, pos, node_color="yellow", with_labels=False)
labels = {0:0,1:1,2:2,3:"Center",4:4,5:5,6:6,7:7,8:8,9:"Outsider",}
nx.draw_networkx_labels(g, pos, labels=labels, font_size=12, font_color='b')
plt.show()
nx.draw_networkx(g, pos, node_color="yellow")
nx.draw_networkx_nodes(g, pos, [9], node_color="blue")
nx.draw_networkx_edges(g, pos, [(8,9)], width=3, edge_color="b", style="dashed")x.draw_networkx_edge_labels(g, pos, edge_labels={(8,9):"outsider"})
plt.show()
class Author ():
def __init__(self, name, yob, yod, works):
self.name = name
self.yob = yob
self.yod = yod
self.works = works
a = Author("Goethe", 1749, 1832, ["Werther", "Faust"])
b = Author("Schiller", 1759, 1805, ["Die Räuber", "Wallenstein"])
c = Author("Lessing", 1729, 1781, ["Minna von Barnhelm", "Nathan der Weise"])
g = nx.Graph()
g.add_edges_from([(a,b),(a,c),(b,c)])
nx.draw_networkx(g)
plt.show()
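# Hedged extension: label the Author nodes by their name attribute instead of the
# default object repr that appears in the plot above.
name_labels = {node: node.name for node in g.nodes()}
nx.draw_networkx(g, labels=name_labels)
plt.show()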
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can quickly print out the information about a graph
Step2: Often it is even better to visualize the graph
Step3: If we want to see a bit more information than just the basic structure of the graph, we use the method draw_networkx. Now, for example, the labels of the nodes are displayed.
Step4: Thanks to the labels we can use this - in principle completely abstract - data structure for arbitrary kinds of information, e.g. cities and their train connections, names in texts and their co-occurrence, correspondents in letter networks, etc.
Step5: We can also use the edges to store information about the network. This is done by giving the edges a weight. If the nodes represent, say, names in a correspondence network, we can record the number of letters on the edges. The visualization shows a larger number of letters as greater proximity
Step6: You can also modify graphs by removing nodes and edges again
Step7: Exercise
Step8: In multigraphs, two nodes can be connected by several edges
Step9: Exercise
Step10: or simply (including any weights)
Step11: <p>A <b>walk</b> is the connection of a sequence of nodes by a series of edges. (If the start node is also the end node, we speak of a <b>closed walk</b>. If the edges of a walk are distinct, i.e. no edge is used more than once, we speak of a <b>trail</b>. If, in addition, the nodes of the walk are distinct, we speak of a <b>path</b>. A closed path, i.e. a path whose start point is identical with its end point, is called a <b>cycle</b>.)</p>
Step12: <p>If two nodes are connected, then there is at least one path that represents the shortest route between the nodes; this path is called the <b>distance</b>, sometimes also the <b>geodesic distance</b> or simply the <b>shortest path</b>. The following graphic shows the shortest path from B to E (via A and D)
Step13: Incidentally, the search for the shortest path is an interesting problem. Have a look at the well-known <a href="http
Step14: By this measure, A and D are the most important nodes. If a node has a degree centrality value of 1.0, we know that it must be the center of a star-shaped network.
Step15: <b>betweenness centrality</b> Intuition
Step16: Exercise
Step17: With read_adjlist(file) you can read the list back in. The syntax is the same in all of the following commands; simply swap 'read' and 'write' to switch between reading and writing the data.
Step18: There is a whole range of different formats, e.g. read_edgelist (+ read_weighted_edgelist),
Step19: The GEXF format (Graph Exchange XML Format), which is also supported by Gephi, for example, is well suited even for complex data and structures.
Step20: Visualizing graphs
Step21: There is a whole range of different layout algorithms for graphs. The default is the spring layout. But you can also use others
Step22: We can manipulate every single aspect of a graph, i.e. every node and every edge, individually and set labels, color, size, etc. In doing so, we have to keep one thing in mind
Step23: Ok. But the rest of the graph is still missing here. We can simply draw it with the generic command
Step24: Now we work on the edges. Here is the documentation of the command
Step25: These commands can be combined in any way
Step26: In addition, there is a function for setting the labels of the nodes
Step27: And finally, the function that allows you to set the labels on the edges
Step28: Graphs as a data structure
|
8,728
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import bruges as bg
w, top, base, ref = bg.models.wedge()
plt.imshow(w, interpolation='none')
plt.axvline(ref, color='k', ls='--')
plt.plot(top, 'r-', lw=4)
plt.plot(base, 'r-', lw=4)
plt.show()
import numpy as np
vps = np.array([2320, 2350, 2350])
rhos = np.array([2650, 2600, 2620])
vp = vps[w]
rho = rhos[w]
vp.shape
vp[:5, :5]
rc = bg.reflection.acoustic_reflectivity(vp, rho)
ricker, _ = bg.filters.ricker(duration=0.064, dt=0.001, f=40)
syn = bg.filters.convolve(rc, ricker)
syn.shape
fig, axs = plt.subplots(figsize=(17, 4), ncols=5,
gridspec_kw={'width_ratios': (4, 4, 4, 1, 4)})
axs[0].imshow(w)
axs[0].set_title('Wedge model')
axs[1].imshow(vp * rho)
axs[1].set_title('Impedance')
axs[2].imshow(rc)
axs[2].set_title('Reflectivity')
axs[3].plot(ricker, np.arange(ricker.size))
axs[3].axis('off')
axs[3].set_title('Wavelet')
axs[4].imshow(syn)
axs[4].set_title('Synthetic')
axs[4].plot(top, 'w', alpha=0.5)
axs[4].plot(base, 'w', alpha=0.5)
plt.show()
vps = np.array([2320, 2350, 2350])
rhos = np.array([2650, 2600, 2620])
impedances = vps * rhos
w, top, base, ref = bg.models.wedge(strat=impedances)
plt.imshow(w, interpolation='none')
plt.axvline(ref, color='k', ls='--')
plt.plot(top, 'r-', lw=4)
plt.plot(base, 'r-', lw=4)
plt.colorbar()
plt.show()
vps = np.array([2320, 2350, 2350])
vss = np.array([1150, 1250, 1200])
rhos = np.array([2650, 2600, 2620])
w, top, base, ref = bg.models.wedge()
vp = vps[w]
vs = vss[w]
rho = rhos[w]
rc = bg.reflection.reflectivity(vp, vs, rho, theta=range(46))
rc.shape
plt.imshow(rc.real[:, :, 50].T)
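# Hedged AVA sketch (indexing assumption: rc axes are (angle, depth, trace), which is
# consistent with the rc.real[:, :, 50] slice above): reflectivity versus incidence
# angle at the top-of-wedge sample on trace 50.
plt.figure()
plt.plot(range(46), rc.real[:, int(top[50]), 50])
plt.xlabel('angle (degrees)')
plt.ylabel('reflectivity')
plt.show()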
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can then use this integer model to index into an array of rock properties
Step2: We can use these to make vp and rho earth models. We can use NumPy’s fancy indexing by passing our array of indicies to access the rock properties (in this case acoustic impedance) for every element at once.
Step3: Each of these new arrays is the shape of the model, but is filled with a rock property
Step4: Now we can create the reflectivity profile
Step5: Then make a wavelet and convolve it with the reflectivities
Step6: The easiest way to check everything worked is probably to plot it.
Step7: Alternative workflow
Step8: And look at the result
Step9: Now the wedge contains rock properties, not integer labels.
Step10: We need the model with integers like 0, 1, 2 again
Step11: Index to get the property models
Step12: Compute the reflectivity for angles up to 45 degrees
Step13: The result is three-dimensional
|
8,729
|
<ASSISTANT_TASK:>
Python Code:
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
import sqlite3
import random
import time
import datetime
# Creating a connection
conn = sqlite3.connect('dsa.db')
# Creating a cursor
c = conn.cursor()
# Function to create a table
def create_table():
c.execute('CREATE TABLE IF NOT EXISTS produtos(id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, date TEXT, '\
'prod_name TEXT, valor REAL)')
# Function to insert a single hard-coded row (naming the columns lets SQLite fill the autoincrement id; the connection stays open for later inserts)
def data_insert():
c.execute("INSERT INTO produtos (date, prod_name, valor) VALUES ('2020-05-02 12:34:45', 'Teclado', 130.00)")
conn.commit()
# Using variables to insert data
def data_insert_var():
new_date = datetime.datetime.now()
new_prod_name = 'Monitor'
new_valor = random.randrange(50,100)
c.execute("INSERT INTO produtos (date, prod_name, valor) VALUES (?, ?, ?)", (new_date, new_prod_name, new_valor))
conn.commit()
# Generating values and inserting them into the table
for i in range(10):
data_insert_var()
time.sleep(1)
# Closing the connection
c.close()
conn.close()
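# Illustrative sketch (assumes dsa.db and the rows inserted above): reopen the
# database and list what was stored; column names follow the CREATE TABLE above.
conn = sqlite3.connect('dsa.db')
for row in conn.execute('SELECT id, date, prod_name, valor FROM produtos'):
    print(row)
conn.close()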
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Inserting Data with Variables
|
8,730
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
pd.set_option('display.max_colwidth', -1)
df = pd.read_csv('../../data/processed/complaints-3-29-scrape.csv')
df.count()[0]
df[df['public']=='offline'].count()[0]
df[df['public']=='online'].count()[0]
df[df['public']=='offline'].count()[0]/df.count()[0]*100
df[(df['outcome']=='Exposed to Potential Harm') | (df['outcome']=='No Negative Outcome')].count()[0]
df[(df['outcome']=='Exposed to Potential Harm') |
(df['outcome']=='No Negative Outcome')].count()[0]/df[df['public']=='offline'].count()[0]*100
totals = df.groupby(['omg_outcome','public']).count()['abuse_number'].unstack().reset_index()
totals.fillna(0, inplace = True)
totals['total'] = totals['online']+totals['offline']
totals['pct_offline'] = round(totals['offline']/totals['total']*100)
totals.sort_values('pct_offline',ascending=False)
df['outcome_notes'].fillna('', inplace = True)
df[(df['outcome_notes'].str.contains('constitute neglect|constitutes neglect|constitute abuse|constitutes abuse|constitutes exploitation|constitutes financial exploitation')) & (df['public']=='offline')].count()[0]
df[(df['omg_outcome']=='Potential harm') & (df['fine']>0) & (df['public']=='offline')].count()[0]
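# Hedged follow-up (assumes the 'fine' column used above holds the fine amounts):
# sum the fines attached to those offline potential-harm cases.
print(df[(df['omg_outcome']=='Potential harm') & (df['fine']>0) & (df['public']=='offline')]['fine'].sum())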
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h3>How many total complaints are there?</h3>
Step2: <h3>How many complaints do not appear in the state's public database?</h3>
Step3: <h3>How many complaints do appear in the state's public database?</h3>
Step4: <h3>What percent of complaints are missing?</h3>
Step5: <h3>How many complaints were labelled 'Exposed to potential harm' or 'No negative outcome?'</h3>
Step6: <h3>Of all missing complaints, what percent are in the above two categories?</h3>
Step7: <h3>What's the online/offline breakdown by outcome?</h3>
Step8: <h3>How many offline complaints in the database were found to have "abuse," "neglect" or "exploitation?"</h3>
Step9: "The state fined the facilities in hundreds of those cases."
|
8,731
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%reload_ext XTIPython
vImaris.GetVersion()
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
%imaris_screenshot
nx = vDataSet.GetSizeX()
ny = vDataSet.GetSizeY()
nz = vDataSet.GetSizeZ()
dtype = BridgeLib.GetType(vDataSet)
print nz,ny,nx,dtype
import time
t = time.time()
arr = BridgeLib.GetDataVolume(vDataSet,0,0)
print "Operation took %.1fs"%(time.time()-t),arr.shape,arr.dtype
vmin,vmax = vDataSet.GetChannelRangeMin(0),vDataSet.GetChannelRangeMax(0)
p = plt.imshow(np.max(arr,axis=0),origin='lower',vmin=vmin,vmax=vmax)
arr = arr.astype(np.float32)
vDataSet.SetSizeC(2)
t = time.time()
BridgeLib.SetDataVolume(vDataSet,arr,1,0)
print "Operation took %.1fs"%(time.time()-t)
%imaris_sync
nx = vDataSet.GetSizeX()
ny = vDataSet.GetSizeY()
nz = vDataSet.GetSizeZ()
dtype = BridgeLib.GetType(vDataSet)
print nz,ny,nx,dtype
import time
t = time.time()
arr = BridgeLib.GetDataVolume(vDataSet,0,0)
print "Operation took %.1fs"%(time.time()-t),arr.shape,arr.dtype
vmin,vmax = vDataSet.GetChannelRangeMin(0),vDataSet.GetChannelRangeMax(0)
p = plt.imshow(np.max(arr,axis=0),origin='lower',vmin=vmin,vmax=vmax)
arr = arr.astype(np.float32)
vDataSet.SetSizeC(2)
t = time.time()
BridgeLib.SetDataVolume(vDataSet,arr,1,0)
print "Operation took %.1fs"%(time.time()-t)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For info, this is what the dataset looks like (fly embryo).
Step2: 8 bit transfer
Step3: Let's fetch the data volume and check how long it takes
Step4: This next step tests type conversion in BridgeLib.SetDataVolume()
Step5: Testing data transfer with a 16 bit dataset
Step6: Let's fetch the data volume and check how long it takes
Step7: This next step tests type conversion in BridgeLib.SetDataVolume()
|
8,732
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# Ivezic, Figure 8.1
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
from astroML.plotting.mcmc import convert_to_stdev
#------------------------------------------------------------
# Set up the data and errors
np.random.seed(13)
a = 1
b = 0
#x = np.array([-1, 0.44, -0.16])
x = np.array([-1, 0.44, -0.16, 1.0])
y = a * x + b
#dy = np.array([0.25, 0.22, 0.2])
#dy = np.array([0.01, 0.01, 0.01])
dy = np.array([0.01, 0.01, 0.01, 0.01])
y = np.random.normal(y, dy)
# add a fifth point, x5, which is treated as a one-sided (limit) measurement
x5 = 1.0
y5 = a * x5 + b + 0.0
#------------------------------------------------------------
# Compute the likelihoods for each point
a_range = np.linspace(0, 2, 80)
b_range = np.linspace(-1, 1, 80)
logL = -((a_range[:, None, None] * x + b_range[None, :, None] - y) / dy) ** 2
sigma = [convert_to_stdev(logL[:, :, i]) for i in range(4)]
# compute best-fit from the four data points
logL_together = logL.sum(-1)
i, j = np.where(logL_together == np.max(logL_together))
amax = a_range[i[0]]
bmax = b_range[j[0]]
#------------------------------------------------------------
# Plot the first figure: the points and errorbars
fig1 = plt.figure(figsize=(6, 4))
ax1 = fig1.add_subplot(111)
# Draw the true and best-fit lines
xfit = np.array([-1.5, 1.5])
ax1.plot(xfit, a * xfit + b, ':k', label='True fit')
ax1.plot(xfit, amax * xfit + bmax, '--k', label='fit to $\{x_1, \dots, x_4\}$')
ax1.legend(loc=2)
ax1.errorbar(x, y, dy, fmt='ok')
ax1.errorbar([x5], [y5], [[0.5], [0]], fmt='_k', uplims=True)
for i in range(4):
ax1.text(x[i] + 0.05, y[i] - 0.3, "$x_{%i}$" % (i + 1))
ax1.text(x5 + 0.05, y5 - 0.5, "$x_5$")
ax1.set_xlabel('$x$')
ax1.set_ylabel('$y$')
ax1.set_xlim(-1.5, 1.5)
ax1.set_ylim(-2, 2)
#------------------------------------------------------------
# Plot the second figure: likelihoods for each point
fig2 = plt.figure(figsize=(6, 6))
fig2.subplots_adjust(hspace=0.05, wspace=0.05)
# plot likelihood contours
for i in range(5):
ax = fig2.add_subplot(321 + i)
for j in range(min(i + 1, 4)):
ax.contourf(a_range, b_range, sigma[j].T,
levels=(0, 0.683, 0.955, 0.997),
cmap=plt.cm.binary, alpha=0.5)
# plot the excluded area from the fourth point
axpb = a_range[:, None] * x5 + b_range[None, :]
mask = y5 < axpb
fig2.axes[4].fill_between(a_range, y5 - x5 * a_range, 2, color='k', alpha=0.5)
# Label and adjust axes
for i in range(5):
ax = fig2.axes[i]
ax.text(1.98, -0.98, "$x_{%i}$" % (i + 1), ha='right', va='bottom')
ax.plot([0, 2], [0, 0], ':k', lw=1)
ax.plot([1, 1], [-1, 1], ':k', lw=1)
ax.set_xlim(0.001, 2)
ax.set_ylim(-0.999, 1)
if i in (1, 3):
ax.yaxis.set_major_formatter(plt.NullFormatter())
if i in (0, 1):
ax.xaxis.set_major_formatter(plt.NullFormatter())
if i in (0, 2):
ax.set_ylabel(r'$\theta_0$')
if i in (2, 3):
ax.set_xlabel(r'$\theta_1$')
plt.show()
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from scipy.stats import lognorm
from astroML.cosmology import Cosmology
from astroML.datasets import generate_mu_z
from astroML.linear_model import LinearRegression, PolynomialRegression, BasisFunctionRegression, NadarayaWatson
#------------------------------------------------------------
# Generate data: redshift, distance modulus and error on the distance modulus
z_sample, mu_sample, dmu = generate_mu_z(100, random_state=0)
cosmo = Cosmology()
z = np.linspace(0.01, 2, 1000) # "x" values
mu_true = np.asarray(list(map(cosmo.mu, z)))  # Ground truth y values
n_constraints = 2
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(6, 6))
fig.subplots_adjust(left=0.1, right=0.95, bottom=0.1, top=0.95, hspace=0.05, wspace=0.05)
#fit data using the design matrix formalism
C = np.identity(len(z_sample))*(dmu*dmu)
M = np.column_stack((np.ones(len(z_sample)),z_sample))
A = np.dot(np.dot(M.transpose(),np.linalg.pinv(C)),M)
B = np.dot(np.dot(M.transpose(),np.linalg.pinv(C)),mu_sample)
theta = np.dot(np.linalg.pinv(A),B)
mu_out = theta[0] + theta[1]*z
#fit data using standard package
LRmodel = LinearRegression()
LRmodel.fit(z_sample[:, None], mu_sample, dmu)
mu_fit = LRmodel.predict(z[:, None])
mu_sample_fit = LRmodel.predict(z_sample[:, None])
chi2_dof = (np.sum(((mu_sample_fit - mu_sample)/dmu)**2)/(len(mu_sample) - n_constraints))
#plot the data
ax = fig.add_subplot(111)
ax.plot(z, mu_fit, '-k')
ax.plot(z, mu_true, '--', c='gray')
ax.errorbar(z_sample, mu_sample, dmu, fmt='.k', ecolor='gray', lw=1)
ax.text(0.5, 0.05, r"$\chi^2_{\rm dof} = %.2f$" % chi2_dof,
ha='center', va='bottom', transform=ax.transAxes, fontsize=14)
ax.set_xlim(0.01, 1.8)
ax.set_ylim(36.01, 48)
ax.text(0.05, 0.95, 'Linear regression', ha='left', va='top',
transform=ax.transAxes)
ax.set_ylabel(r'$\mu$')
ax.set_xlabel(r'$z$')
ax.plot(z, mu_out, '-k', color='red')
plt.show()
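# Hedged comparison (uses theta from the hand-rolled design-matrix fit above): its
# chi-squared per degree of freedom should match the LinearRegression result.
mu_sample_out = theta[0] + theta[1] * z_sample
print(np.sum(((mu_sample_out - mu_sample) / dmu) ** 2) / (len(mu_sample) - n_constraints))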
import numpy as np
from astroML.linear_model import LinearRegression
X = np.random.random((100,2)) # 100 points in 2D
dy = np.random.random(100) # heteroscedastic errors
y = np.random.normal(X[:,0] + X[:,1],dy)
model = LinearRegression()
model.fit(X,y,dy)
y_pred = model.predict(X)
#Typical call
import numpy as np
from astroML.linear_model import PolynomialRegression
X = np.random.random((100,2))
y = X[:,0]**2 + X[:,1]**3
order = 4
model = PolynomialRegression(order) # fit 3rd order polynomial
model.fit(X,y)
y_pred = model.predict(X)
n_constraints = order+1
#fit data using standard package
order = 4  # illustrative choice; the exercise leaves the polynomial order open
n_constraints = order + 1
poly = PolynomialRegression(order)
poly.fit(z_sample[:, None], mu_sample, dmu)
mu_fit = poly.predict(z[:, None])
mu_sample_fit = poly.predict(z_sample[:, None])
chi2_dof = (np.sum(((mu_sample_fit - mu_sample)/dmu)**2)/(len(mu_sample) - n_constraints))
fig = plt.figure(figsize=(6, 6))
#plot the data
ax = fig.add_subplot(111)
ax.plot(z, mu_fit, '-k')
ax.plot(z, mu_true, '--', c='gray')
ax.errorbar(z_sample, mu_sample, dmu, fmt='.k', ecolor='gray', lw=1)
ax.text(0.5, 0.05, r"$\chi^2_{\rm dof} = %.2f$" % chi2_dof,
ha='center', va='bottom', transform=ax.transAxes, fontsize=14)
ax.set_xlim(0.01, 1.8)
ax.set_ylim(36.01, 48)
ax.text(0.05, 0.95, 'Polynomial regression', ha='left', va='top',
transform=ax.transAxes)
ax.set_ylabel(r'$\mu$')
ax.set_xlabel(r'$z$')
ax.plot(z, mu_out, '-k', color='red')
plt.show()
# Can you make the same code do linear regression?
#Basis function regression looks like this
import numpy as np
from astroML.linear_model import BasisFunctionRegression
X = np.random.random((100,1))
y = np.random.normal(X[:,0],dy)
mu = np.linspace(0,1,10)[:, None]
sigma = 0.1
model = BasisFunctionRegression('gaussian', mu=mu, sigma=sigma)
model.fit(X,y,dy)
y_pred = model.predict(X)
#------------------------------------------------------------
# Define our Gaussians
nGaussians = 10
basis_mu = np.linspace(0,2,nGaussians)[:, None]
basis_sigma = 1.0 * (basis_mu[1] - basis_mu[0])
n_constraints = nGaussians+1
#fit data using gaussian-based basis function regression
bfr = BasisFunctionRegression('gaussian', mu=basis_mu, sigma=basis_sigma)
bfr.fit(z_sample[:, None], mu_sample, dmu)
mu_fit = bfr.predict(z[:, None])
mu_sample_fit = bfr.predict(z_sample[:, None])
chi2_dof = (np.sum(((mu_sample_fit - mu_sample) / dmu) ** 2) / (len(mu_sample) - n_constraints))
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot(111)
ax.plot(z, mu_fit, '-k')
ax.plot(z, mu_true, '--', c='gray')
ax.errorbar(z_sample, mu_sample, dmu, fmt='.k', ecolor='gray', lw=1)
ax.text(0.5, 0.05, r"$\chi^2_{\rm dof} = %.2f$" % chi2_dof,
ha='center', va='bottom', transform=ax.transAxes, fontsize=14)
ax.set_xlim(0.01, 1.8)
ax.set_ylim(36.01, 48)
ax.text(0.05, 0.95, 'Basis Function regression', ha='left', va='top',
transform=ax.transAxes)
ax.set_ylabel(r'$\mu$')
ax.set_xlabel(r'$z$')
#ax.plot(z, mu_out, '-k', color='red')
plt.show()
# Do it by hand so that we can overplot the Gaussians
def gaussian_basis(x, mu, sigma):
return np.exp(-0.5 * ((x - mu) / sigma) ** 2)
#------------------------------------------------------------
M = np.zeros(shape=[nGaussians, z_sample.shape[0]])
for i in range(nGaussians):
M[i] = gaussian_basis(z_sample, basis_mu[i], basis_sigma)
M = np.matrix(M).T
C = np.matrix(np.diagflat(dmu**2))
Y = np.matrix(mu_sample).T
coeff = (M.T * C.I * M).I * (M.T * C.I * Y)
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(6, 6))
fig.subplots_adjust(left=0.1, right=0.95, bottom=0.1, top=0.95, hspace=0.05, wspace=0.05)
ax = fig.add_subplot(111)
# Plot the gaussians and their sum
i=0
mu_fit = np.zeros(len(z))
for i in range(nGaussians):
mu_fit += coeff[i,0]*gaussian_basis(z, basis_mu[i], basis_sigma)
if (coeff[i,0] > 0.):
ax.plot(z,coeff[i,0]*gaussian_basis(z, basis_mu[i], basis_sigma),color='blue')
else:
ax.plot(z,-coeff[i,0]*gaussian_basis(z, basis_mu[i], basis_sigma),color='blue',ls='--')
#plot the data
ax.plot(z, mu_fit, '-k')
ax.plot(z, mu_true, '--', c='gray')
ax.errorbar(z_sample, mu_sample, dmu, fmt='.k', ecolor='gray', lw=1)
ax.text(0.5, 0.05, r"$\chi^2_{\rm dof} = %.2f$" % chi2_dof,
ha='center', va='bottom', transform=ax.transAxes, fontsize=14)
ax.set_xlim(0.01, 1.8)
ax.set_ylim(0.01, 48)
ax.text(0.05, 0.95, 'Basis Function regression', ha='left', va='top',
transform=ax.transAxes)
ax.set_ylabel(r'$\mu$')
ax.set_xlabel(r'$z$')
#ax.plot(z, mu_out, '-k', color='red')
plt.show()
# GTR: Hacked the above to make the basis a polynomial just to show
# that polynomial regression is a special case of basis function regression
#------------------------------------------------------------
# Define our "Gaussians" (order or polynomial in this case)
nGaussians = 4
basis_mu = np.linspace(0,2,nGaussians)[:, None]
basis_sigma = 1.0 * (basis_mu[1] - basis_mu[0])
n_constraints = nGaussians+1
def gaussian_basis(n, x, mu, sigma):
return x**n
#------------------------------------------------------------
M = np.zeros(shape=[nGaussians, z_sample.shape[0]])
for i in range(nGaussians):
M[i] = gaussian_basis(i, z_sample, basis_mu[i], basis_sigma)
M = np.matrix(M).T
C = np.matrix(np.diagflat(dmu**2))
Y = np.matrix(mu_sample).T
coeff = (M.T * C.I * M).I * (M.T * C.I * Y)
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(6, 6))
fig.subplots_adjust(left=0.1, right=0.95, bottom=0.1, top=0.95, hspace=0.05, wspace=0.05)
ax = fig.add_subplot(111)
# Plot the gaussians and their sum
i=0
mu_fit = np.zeros(len(z))
for i in range(nGaussians):
mu_fit += coeff[i,0]*gaussian_basis(i,z, basis_mu[i], basis_sigma)
if (coeff[i,0] > 0.):
ax.plot(z,coeff[i,0]*gaussian_basis(i,z, basis_mu[i], basis_sigma),color='blue')
else:
ax.plot(z,-coeff[i,0]*gaussian_basis(i,z, basis_mu[i], basis_sigma),color='blue',ls='--')
#plot the data
ax.plot(z, mu_fit, '-k')
ax.plot(z, mu_true, '--', c='gray')
ax.errorbar(z_sample, mu_sample, dmu, fmt='.k', ecolor='gray', lw=1)
ax.text(0.5, 0.05, r"$\chi^2_{\rm dof} = %.2f$" % chi2_dof,
ha='center', va='bottom', transform=ax.transAxes, fontsize=14)
ax.set_xlim(0.01, 1.8)
ax.set_ylim(0.01, 48)
ax.text(0.05, 0.95, 'Basis Function regression', ha='left', va='top',
transform=ax.transAxes)
ax.set_ylabel(r'$\mu$')
ax.set_xlabel(r'$z$')
#ax.plot(z, mu_out, '-k', color='red')
plt.show()
import numpy as np
from astroML.linear_model import NadarayaWatson
X = np.random.random((100,2))
y = X[:,0] + X[:,1]
model = NadarayaWatson('gaussian', 0.05)
model.fit(X,y)
y_pred = model.predict(X)
# Using Nadaraya-Watson on our supernova data looks like this:
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from scipy.stats import lognorm
from astroML.cosmology import Cosmology
from astroML.datasets import generate_mu_z
from astroML.linear_model import NadarayaWatson
#------------------------------------------------------------
# Generate data
z_sample, mu_sample, dmu = generate_mu_z(100, random_state=0)
cosmo = Cosmology()
z = np.linspace(0.01, 2, 1000)
mu_true = np.asarray(list(map(cosmo.mu, z)))
n_constraints = 1
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot(111)
#fit data using standard package
nwreg = NadarayaWatson('gaussian', 0.05)
nwreg.fit(z_sample[:, None], mu_sample)
mu_sample_fit = nwreg.predict(z_sample[:, None])
mu_fit = nwreg.predict(z[:, None])
chi2_dof = (np.sum(((mu_sample_fit - mu_sample) / dmu) ** 2)/(len(mu_sample) - n_constraints))
#plot the data
ax.plot(z, mu_fit, '-k')
ax.plot(z, mu_true, '--', c='gray')
ax.errorbar(z_sample, mu_sample, dmu, fmt='.k', ecolor='gray', lw=1)
ax.text(0.5, 0.05, r"$\chi^2_{\rm dof} = %.2f$" % chi2_dof,
ha='center', va='bottom', transform=ax.transAxes, fontsize=14)
ax.set_xlim(0.01, 1.8)
ax.set_ylim(36.01, 48)
ax.text(0.05, 0.95, 'Nadaraya-Watson', ha='left', va='top',
transform=ax.transAxes)
ax.set_ylabel(r'$\mu$')
ax.set_xlabel(r'$z$')
#ax.plot(z, mu_out, '-k', color='red')
plt.show()
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from astroML.linear_model import PolynomialRegression
def f(x):
"""function to approximate by polynomial interpolation"""
return np.sin(x)
# generate points used to plot
x_plot = np.linspace(0, 8, 100)
# generate points and keep a subset of them
x = np.linspace(0, 8, 10)
#rng = np.random.RandomState(0)
#rng.shuffle(x)
#x = np.sort(x[:10])
y = f(x)+0.25*(np.random.random(len(x))-0.5)
# create matrix versions of these arrays
X = x[:, None]
X_plot = x_plot[:, None]
colors = ['teal', 'yellowgreen', 'gold']
lw = 2
plt.figure(figsize=(8,8))
plt.plot(x_plot, f(x_plot), color='cornflowerblue', linewidth=lw, label="ground truth")
plt.scatter(x, y, color='navy', s=30, marker='o', label="training points")
for count, degree in enumerate([3,4,5]):
poly = PolynomialRegression(degree)
poly.fit(X,y)
y_plot = poly.predict(X_plot)
plt.plot(x_plot, y_plot, color=colors[count], linewidth=lw, label="degree %d" % degree)
plt.legend(loc='lower left')
plt.show()
import numpy as np
from sklearn.linear_model import Ridge
X = np.random.random((100,10))
y = np.dot(X, np.random.random(10))
model = Ridge(alpha=0.05)
model.fit(X,y)
y_pred = model.predict(X)
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
from scipy.stats import lognorm
from sklearn.linear_model import LinearRegression, Ridge
from astroML.cosmology import Cosmology
from astroML.datasets import generate_mu_z
#----------------------------------------------------------------------
# generate data
np.random.seed(0)
z_sample, mu_sample, dmu = generate_mu_z(100, random_state=0)
cosmo = Cosmology()
z = np.linspace(0.01, 2, 1000)
mu = np.asarray(list(map(cosmo.mu, z)))
#------------------------------------------------------------
# Manually convert data to a gaussian basis
# note that we're ignoring errors here, for the sake of example.
def gaussian_basis(x, mu, sigma):
return np.exp(-0.5 * ((x - mu) / sigma) ** 2)
centers = np.linspace(0, 1.8, 100)
widths = 0.2
X = gaussian_basis(z_sample[:, None], centers, widths)
#------------------------------------------------------------
# Set up the figure to plot the results
fig = plt.figure(figsize=(12, 8))
classifier = [LinearRegression, Ridge]
kwargs = [dict(), dict(alpha=0.005)]
labels = ['Gaussian Basis Regression', 'Ridge Regression']
for i in range(2):
clf = classifier[i](fit_intercept=True, **kwargs[i])
clf.fit(X, mu_sample)
w = clf.coef_
fit = clf.predict(gaussian_basis(z[:, None], centers, widths))
# plot fit
ax = fig.add_subplot(221 + i)
ax.xaxis.set_major_formatter(plt.NullFormatter())
# plot curves for regularized fits
if i == 0:
ax.set_ylabel('$\mu$')
else:
ax.yaxis.set_major_formatter(plt.NullFormatter())
curves = 37 + w * gaussian_basis(z[:, np.newaxis], centers, widths)
curves = curves[:, abs(w) > 0.01]
ax.plot(z, curves,
c='gray', lw=1, alpha=0.5)
ax.plot(z, fit, '-k')
ax.plot(z, mu, '--', c='gray')
ax.errorbar(z_sample, mu_sample, dmu, fmt='.k', ecolor='gray', lw=1, ms=4)
ax.set_xlim(0.001, 1.8)
ax.set_ylim(36, 52)
ax.text(0.05, 0.93, labels[i],
ha='left', va='top',
bbox=dict(boxstyle='round', ec='k', fc='w'),
transform=ax.transAxes)
# plot weights
ax = plt.subplot(223 + i)
ax.xaxis.set_major_locator(plt.MultipleLocator(0.5))
ax.set_xlabel('$z$')
if i == 0:
ax.set_ylabel(r'$\theta$')
w *= 1E-12
ax.text(0, 1.01, r'$\rm \times 10^{12}$',
transform=ax.transAxes)
ax.scatter(centers, w, s=9, lw=0, c='k')
ax.set_xlim(-0.05, 1.8)
if i == 1:
ax.set_ylim(-2, 4)
elif i == 2:
ax.set_ylim(-0.5, 2)
ax.text(0.05, 0.93, labels[i],
ha='left', va='top',
bbox=dict(boxstyle='round', ec='k', fc='w'),
transform=ax.transAxes)
plt.show()
import numpy as np
from sklearn.linear_model import Lasso
XX = np.random.random((100,10))
yy = np.dot(XX, np.random.random(10))
model = Lasso(alpha = 0.05)
model.fit(XX,yy)
y_pred = model.predict(XX)
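# Hedged extension: the Gaussian-basis supernova design matrix X built in the Ridge
# example above can be fed to Lasso as well, which should zero out most coefficients.
lasso = Lasso(alpha=0.005)
lasso.fit(X, mu_sample)
print("non-zero coefficients:", np.sum(np.abs(lasso.coef_) > 1e-6))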
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bayesian Regression
Step2: Print $C$, $M$, $A$, $B$, and $\theta$ and make sure that you understand how these are constructed.
Step3: Polynomial Regression
Step4: Recreate the supernovae figure from above now using the polynomial regression algorithm. (Hint
Step5: Basis function regression
Step6: We'll now repeat the supernova data example using basis function regression.
Step7: Kernel Regression
Step9: Regularization
Step10: This is fit with order = 3, 4, and 5. What happens if you make the order $\sim N_{\rm points}$?
Step11: The following examples compares Gaussian Basis Regression with and without the constraints from Ridge Regression. It uses 100 evenly spaced Gauassian, which we can see strongly overfits the problem and has very large coefficient values, until a constraint is imposed.
Step12: Least absolute shrinkage and selection (LASSO) regularization
|
8,733
|
<ASSISTANT_TASK:>
Python Code:
import SimpleITK as sitk
import numpy as np
# If the environment variable SIMPLE_ITK_MEMORY_CONSTRAINED_ENVIRONMENT is set, this will override the ReadImage
# function so that it also resamples the image to a smaller size (testing environment is memory constrained).
%run setup_for_testing
import registration_utilities as ru
import registration_callbacks as rc
%matplotlib inline
import matplotlib.pyplot as plt
from ipywidgets import interact, fixed
# utility method that either downloads data from the Girder repository or
# if already downloaded returns the file name for reading from disk (cached data)
%run update_path_to_download_script
from downloaddata import fetch_data as fdata
%run popi_utilities_setup.py
images = []
masks = []
points = []
for i in range(0, 10):
image_file_name = f"POPI/meta/{i}0-P.mhd"
mask_file_name = f"POPI/masks/{i}0-air-body-lungs.mhd"
points_file_name = f"POPI/landmarks/{i}0-Landmarks.pts"
images.append(
sitk.ReadImage(fdata(image_file_name), sitk.sitkFloat32)
) # read and cast to format required for registration
masks.append(sitk.ReadImage(fdata(mask_file_name)))
points.append(read_POPI_points(fdata(points_file_name)))
interact(
display_coronal_with_overlay,
temporal_slice=(0, len(images) - 1),
coronal_slice=(0, images[0].GetSize()[1] - 1),
images=fixed(images),
masks=fixed(masks),
label=fixed(lung_label),
window_min=fixed(-1024),
window_max=fixed(976),
);
def demons_registration(
fixed_image, moving_image, fixed_points=None, moving_points=None
):
registration_method = sitk.ImageRegistrationMethod()
# Create initial identity transformation.
transform_to_displacment_field_filter = sitk.TransformToDisplacementFieldFilter()
transform_to_displacment_field_filter.SetReferenceImage(fixed_image)
# The image returned from the initial_transform_filter is transferred to the transform and cleared out.
initial_transform = sitk.DisplacementFieldTransform(
transform_to_displacment_field_filter.Execute(sitk.Transform())
)
# Regularization (update field - viscous, total field - elastic).
initial_transform.SetSmoothingGaussianOnUpdate(
varianceForUpdateField=0.0, varianceForTotalField=2.0
)
registration_method.SetInitialTransform(initial_transform)
registration_method.SetMetricAsDemons(
10
) # intensities are equal if the difference is less than 10HU
# Multi-resolution framework.
registration_method.SetShrinkFactorsPerLevel(shrinkFactors=[4, 2, 1])
registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas=[8, 4, 0])
registration_method.SetInterpolator(sitk.sitkLinear)
# If you have time, run this code as is, otherwise switch to the gradient descent optimizer
# registration_method.SetOptimizerAsConjugateGradientLineSearch(learningRate=1.0, numberOfIterations=20, convergenceMinimumValue=1e-6, convergenceWindowSize=10)
registration_method.SetOptimizerAsGradientDescent(
learningRate=1.0,
numberOfIterations=20,
convergenceMinimumValue=1e-6,
convergenceWindowSize=10,
)
registration_method.SetOptimizerScalesFromPhysicalShift()
# If corresponding points in the fixed and moving image are given then we display the similarity metric
# and the TRE during the registration.
if fixed_points and moving_points:
registration_method.AddCommand(
sitk.sitkStartEvent, rc.metric_and_reference_start_plot
)
registration_method.AddCommand(
sitk.sitkEndEvent, rc.metric_and_reference_end_plot
)
registration_method.AddCommand(
sitk.sitkIterationEvent,
lambda: rc.metric_and_reference_plot_values(
registration_method, fixed_points, moving_points
),
)
return registration_method.Execute(fixed_image, moving_image)
#%%timeit -r1 -n1
# Uncomment the line above if you want to time the running of this cell.
# Select the fixed and moving images, valid entries are in [0,9]
fixed_image_index = 0
moving_image_index = 7
tx = demons_registration(
fixed_image=images[fixed_image_index],
moving_image=images[moving_image_index],
fixed_points=points[fixed_image_index],
moving_points=points[moving_image_index],
)
(
initial_errors_mean,
initial_errors_std,
_,
initial_errors_max,
initial_errors,
) = ru.registration_errors(
sitk.Euler3DTransform(), points[fixed_image_index], points[moving_image_index]
)
(
final_errors_mean,
final_errors_std,
_,
final_errors_max,
final_errors,
) = ru.registration_errors(tx, points[fixed_image_index], points[moving_image_index])
plt.hist(initial_errors, bins=20, alpha=0.5, label="before registration", color="blue")
plt.hist(final_errors, bins=20, alpha=0.5, label="after registration", color="green")
plt.legend()
plt.title("TRE histogram")
print(
f"Initial alignment errors in millimeters, mean(std): {initial_errors_mean:.2f}({initial_errors_std:.2f}), max: {initial_errors_max:.2f}"
)
print(
f"Final alignment errors in millimeters, mean(std): {final_errors_mean:.2f}({final_errors_std:.2f}), max: {final_errors_max:.2f}"
)
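# Optional sketch (the file name is illustrative): the resulting displacement-field
# transform can be written to disk for later reuse.
sitk.WriteTransform(tx, 'demons_registration.tfm')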
def smooth_and_resample(image, shrink_factors, smoothing_sigmas):
"""
Args:
image: The image we want to resample.
shrink_factor(s): Number(s) greater than one, such that the new image's size is original_size/shrink_factor.
smoothing_sigma(s): Sigma(s) for Gaussian smoothing, this is in physical units, not pixels.
Return:
Image which is a result of smoothing the input and then resampling it using the given sigma(s) and shrink factor(s).
"""
if np.isscalar(shrink_factors):
shrink_factors = [shrink_factors] * image.GetDimension()
if np.isscalar(smoothing_sigmas):
smoothing_sigmas = [smoothing_sigmas] * image.GetDimension()
smoothed_image = sitk.SmoothingRecursiveGaussian(image, smoothing_sigmas)
original_spacing = image.GetSpacing()
original_size = image.GetSize()
new_size = [
int(sz / float(sf) + 0.5) for sf, sz in zip(shrink_factors, original_size)
]
new_spacing = [
((original_sz - 1) * original_spc) / (new_sz - 1)
for original_sz, original_spc, new_sz in zip(
original_size, original_spacing, new_size
)
]
return sitk.Resample(
smoothed_image,
new_size,
sitk.Transform(),
sitk.sitkLinear,
image.GetOrigin(),
new_spacing,
image.GetDirection(),
0.0,
image.GetPixelID(),
)
def multiscale_demons(
registration_algorithm,
fixed_image,
moving_image,
initial_transform=None,
shrink_factors=None,
smoothing_sigmas=None,
):
"""
Run the given registration algorithm in a multiscale fashion. The original scale should not be given as input as the
original images are implicitly incorporated as the base of the pyramid.
Args:
registration_algorithm: Any registration algorithm that has an Execute(fixed_image, moving_image, displacement_field_image)
method.
fixed_image: Resulting transformation maps points from this image's spatial domain to the moving image spatial domain.
moving_image: Resulting transformation maps points from the fixed_image's spatial domain to this image's spatial domain.
initial_transform: Any SimpleITK transform, used to initialize the displacement field.
shrink_factors (list of lists or scalars): Shrink factors relative to the original image's size. When the list entry,
shrink_factors[i], is a scalar the same factor is applied to all axes.
When the list entry is a list, shrink_factors[i][j] is applied to axis j.
This allows us to specify different shrink factors per axis. This is useful
in the context of microscopy images where it is not uncommon to have
unbalanced sampling such as a 512x512x8 image. In this case we would only want to
sample in the x,y axes and leave the z axis as is: [[[8,8,1],[4,4,1],[2,2,1]].
smoothing_sigmas (list of lists or scalars): Amount of smoothing which is done prior to resmapling the image using the given shrink factor. These
are in physical (image spacing) units.
Returns:
SimpleITK.DisplacementFieldTransform
"""
# Create image pyramid in a memory efficient manner using a generator function.
# The whole pyramid never exists in memory, each level is created when iterating over
# the generator.
def image_pair_generator(
fixed_image, moving_image, shrink_factors, smoothing_sigmas
):
end_level = 0
start_level = 0
if shrink_factors is not None:
end_level = len(shrink_factors)
for level in range(start_level, end_level):
f_image = smooth_and_resample(
fixed_image, shrink_factors[level], smoothing_sigmas[level]
)
m_image = smooth_and_resample(
moving_image, shrink_factors[level], smoothing_sigmas[level]
)
yield (f_image, m_image)
yield (fixed_image, moving_image)
# Create initial displacement field at lowest resolution.
# Currently, the pixel type is required to be sitkVectorFloat64 because
# of a constraint imposed by the Demons filters.
if shrink_factors is not None:
original_size = fixed_image.GetSize()
original_spacing = fixed_image.GetSpacing()
s_factors = (
[shrink_factors[0]] * len(original_size)
if np.isscalar(shrink_factors[0])
else shrink_factors[0]
)
df_size = [
int(sz / float(sf) + 0.5) for sf, sz in zip(s_factors, original_size)
]
df_spacing = [
((original_sz - 1) * original_spc) / (new_sz - 1)
for original_sz, original_spc, new_sz in zip(
original_size, original_spacing, df_size
)
]
else:
df_size = fixed_image.GetSize()
df_spacing = fixed_image.GetSpacing()
if initial_transform:
initial_displacement_field = sitk.TransformToDisplacementField(
initial_transform,
sitk.sitkVectorFloat64,
df_size,
fixed_image.GetOrigin(),
df_spacing,
fixed_image.GetDirection(),
)
else:
initial_displacement_field = sitk.Image(
df_size, sitk.sitkVectorFloat64, fixed_image.GetDimension()
)
initial_displacement_field.SetSpacing(df_spacing)
initial_displacement_field.SetOrigin(fixed_image.GetOrigin())
# Run the registration.
# Start at the top of the pyramid and work our way down.
for f_image, m_image in image_pair_generator(
fixed_image, moving_image, shrink_factors, smoothing_sigmas
):
initial_displacement_field = sitk.Resample(initial_displacement_field, f_image)
initial_displacement_field = registration_algorithm.Execute(
f_image, m_image, initial_displacement_field
)
return sitk.DisplacementFieldTransform(initial_displacement_field)
# Define a simple callback which allows us to monitor the Demons filter's progress.
def iteration_callback(filter):
print(f"\r{filter.GetElapsedIterations()}: {filter.GetMetric():.2f}", end="")
fixed_image_index = 0
moving_image_index = 7
# Select a Demons filter and configure it.
demons_filter = sitk.FastSymmetricForcesDemonsRegistrationFilter()
demons_filter.SetNumberOfIterations(20)
# Regularization (update field - viscous, total field - elastic).
demons_filter.SetSmoothDisplacementField(True)
demons_filter.SetStandardDeviations(2.0)
# Add our simple callback to the registration filter.
demons_filter.AddCommand(
sitk.sitkIterationEvent, lambda: iteration_callback(demons_filter)
)
# Run the registration.
tx = multiscale_demons(
registration_algorithm=demons_filter,
fixed_image=images[fixed_image_index],
moving_image=images[moving_image_index],
shrink_factors=[4, 2],
smoothing_sigmas=[8, 4],
)
# Compare the initial and final TREs.
(
initial_errors_mean,
initial_errors_std,
_,
initial_errors_max,
initial_errors,
) = ru.registration_errors(
sitk.Euler3DTransform(), points[fixed_image_index], points[moving_image_index]
)
(
final_errors_mean,
final_errors_std,
_,
final_errors_max,
final_errors,
) = ru.registration_errors(tx, points[fixed_image_index], points[moving_image_index])
plt.hist(initial_errors, bins=20, alpha=0.5, label="before registration", color="blue")
plt.hist(final_errors, bins=20, alpha=0.5, label="after registration", color="green")
plt.legend()
plt.title("TRE histogram")
print(
f"\nInitial alignment errors in millimeters, mean(std): {initial_errors_mean:.2f}({initial_errors_std:.2f}), max: {initial_errors_max:.2f}"
)
print(
f"Final alignment errors in millimeters, mean(std): {final_errors_mean:.2f}({final_errors_std:.2f}), max: {final_errors_max:.2f}"
)
import glob
import pandas as pd
from gui import multi_image_display2D
# Fetch all of the data associated with this example.
data_directory = os.path.dirname(fdata("mr_slice_atlas/readme.txt"))
segmented_img = sitk.ReadImage(os.path.join(data_directory, "segmented_image.mha"))
new_img = sitk.ReadImage(os.path.join(data_directory, "new_image.mha"))
contours_list = []
for file_name in glob.glob(os.path.join(data_directory, "*.csv")):
df = pd.read_csv(file_name)
contours_list.append((list(df["X"]), list(df["Y"])))
# Display the images and overlay the contours onto the segmented image.
fig, axes = multi_image_display2D([segmented_img, new_img])
for contour in contours_list:
axes[0].plot(contour[0], contour[1], linewidth=5)
# Select a Demons filter and configure it.
demons_filter = sitk.DiffeomorphicDemonsRegistrationFilter()
demons_filter.SetNumberOfIterations(20)
# Regularization (update field - viscous, total field - elastic).
demons_filter.SetSmoothDisplacementField(True)
demons_filter.SetStandardDeviations(0.8)
# create initial transform
initial_tfm = sitk.CenteredTransformInitializer(
segmented_img,
new_img,
sitk.Euler2DTransform(),
sitk.CenteredTransformInitializerFilter.GEOMETRY,
)
# Run the registration.
final_tfm = multiscale_demons(
registration_algorithm=demons_filter,
fixed_image=segmented_img,
moving_image=new_img,
initial_transform=initial_tfm,
shrink_factors=[6, 4, 2],
smoothing_sigmas=[6, 4, 2],
)
# Display the transformed segmentation.
fig, axes = multi_image_display2D([segmented_img, new_img])
for contour in contours_list:
# Plot on segmented image.
axes[0].plot(contour[0], contour[1], linewidth=5)
# Transform the contour points from segmented image to new image (requires the use of points in physical space)
transformed_contour = [
new_img.TransformPhysicalPointToContinuousIndex(
final_tfm.TransformPoint(
segmented_img.TransformContinuousIndexToPhysicalPoint(p)
)
)
for p in zip(contour[0], contour[1])
]
x_coords, y_coords = zip(*transformed_contour)
axes[1].plot(x_coords, y_coords, linewidth=5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Utilities
Step2: Loading Data
Step3: Demons Registration
Step4: Running the Demons registration with the conjugate gradient optimizer on this data <font color="red">takes a long time</font> which is why the code above uses gradient descent. If you are more interested in accuracy and have the time then switch to the conjugate gradient optimizer.
Step7: SimpleITK also includes a set of Demons filters which are independent of the ImageRegistrationMethod. These include
Step8: Now we will use our newly minted multiscale framework to perform registration with the Demons filters. Some things you can easily try out by editing the code below
Step9: Transferring Segmentation
Step10: Register and transfer the segmentation.
|
8,734
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
from autoc import DataExploration, PreProcessor, NaImputer
from autoc.utils.getdata import get_dataset
import numpy as np
# scikit-learn
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import cross_val_score,train_test_split
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.metrics import roc_curve, accuracy_score, auc, classification_report
# matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
titanic = get_dataset("titanic")
titanic.who.dtype.kind == 'O'
titanic.head()
exploration_titanic = DataExploration(titanic)
exploration_titanic.print_infos() # there is duplicates here because no id, interesting !
exploration_titanic.nacolcount()
exploration_titanic.structure()
titanic.corr()
titanic.loc[titanic.age.isnull(),:].head(5)
preprocessor = PreProcessor(titanic)
preprocessor.infer_subtypes()
titanic = titanic.drop('alive', axis = 1)
features_full = pd.concat([titanic.loc[:, ['fare', 'age', 'pclass', 'sibsp', 'parch']],
pd.get_dummies(titanic['sex'], prefix='sex'),
pd.get_dummies(titanic['who'], prefix='who'),
pd.get_dummies(titanic['alone'], prefix='alone'),
pd.get_dummies(titanic['embarked'], prefix='embarked')],
axis=1)
features = pd.concat([titanic[['fare', 'age', 'pclass']],
pd.get_dummies(titanic['sex'], prefix='sex'),
pd.get_dummies(titanic['who'], prefix='who'),
pd.get_dummies(titanic['embarked'], prefix='embarked')],
axis=1)
target = titanic.survived
# Impute missing values
imp = NaImputer(features_full)
features_full = imp.basic_naimputation(['age']) # this is still a pandas Dataframe but imputed
target = titanic.survived
# Creating test train
features_train, features_test, target_train, target_test = train_test_split(
features_full.values, target.values, test_size=0.25, random_state=0)
logreg = LogisticRegression(C=1)
logreg.fit(features_train, target_train)
target_pred = logreg.predict(features_test)
feature_names = features_full.columns
print("Accuracy : {}".format(accuracy_score(target_test, target_pred)))
weights = logreg.coef_.flatten()
dict_weights = {k:v for k,v in zip(feature_names, weights)}
def plot_simple_imp(imp, feature_names, sort=True, absolute=False):
serie = pd.Series(index=feature_names, data=imp)
if absolute :
serie = np.abs(serie)
if sort :
serie.sort_values(inplace=True, ascending=False)
serie.plot(kind='barh')
plot_simple_imp(weights, feature_names)
# Looking at weights
feature_names = features_full.columns
def plot_abs_weights(coeff_arr, feature_names, title=None, legend_size=12, figsize=(15,7)):
# sort features and absolute coefficients together so bar labels stay aligned
order = np.argsort(np.abs(coeff_arr))
plt.figure(figsize=figsize)
plt.barh(range(len(feature_names)), np.abs(coeff_arr)[order])
plt.yticks(range(len(feature_names)), np.array(feature_names)[order], size=legend_size)
if title:
plt.title(title)
plot_abs_weights(logreg.coef_.ravel(), feature_names, title="Absolute Coefficient Logistic Regression")
rf_full = RandomForestClassifier(n_estimators=500)
rf_full.fit(features_train, target_train)
rf_full.score(features_test, target_test)
plot_simple_imp(rf_full.feature_importances_, feature_names)
def rf_cv(features, target,random_state=1, n_estimators=200,scoring='accuracy',n_jobs=4, verbose=True):
"""Print scores of a random forest cross validation."""
rf = RandomForestClassifier(n_estimators=n_estimators, random_state=random_state)
scores = cross_val_score(rf, features, target, cv=4,scoring=scoring,n_jobs=4)
if verbose :
print("Random Forest CV scores:min: {:.3f}, mean: {:.3f}, max: {:.3f}".format(
scores.min(), scores.mean(), scores.max()))
return scores
def logreg_cv(features, target, scoring='accuracy',n_jobs=4, verbose=True):
"""Print scores of a logistic regression cross validation."""
logreg = LogisticRegression(C=1)
scores = cross_val_score(logreg, features, target, cv=4,scoring=scoring,n_jobs=4)
if verbose :
print("Logistic Regression CV scores: min: {:.3f}, mean: {:.3f}, max: {:.3f}".format(
scores.min(), scores.mean(), scores.max()))
return scores
def plot_roc_curve(target_test, target_predicted_proba):
fpr, tpr, thresholds = roc_curve(target_test, target_predicted_proba[:, 1])
roc_auc = auc(fpr, tpr)
# Plot ROC curve
plt.plot(fpr, tpr, label='ROC curve (AUC = %0.3f)' % roc_auc)
plt.plot([0, 1], [0, 1], 'k--') # random predictions curve
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate or (1 - Specifity)')
plt.ylabel('True Positive Rate or (Sensitivity)')
plt.title('Receiver Operating Characteristic')
plt.legend(loc="lower right")
# selecting index and transforming into numpy array
index_missing_age = features.age.isnull()
features_rm_a, target_rm_a = features.loc[~index_missing_age, :].values, target[~index_missing_age].values
features_rm_a.shape
rf_cv(features_rm_a, target_rm_a, scoring='accuracy')
features_train_rm, features_test_rm, target_train_rm, target_test_rm = train_test_split(
features_rm_a, target_rm_a, test_size=0.25, random_state=0)
rf = RandomForestClassifier(n_estimators=200)
rf.fit(features_train_rm, target_train_rm)
target_predicted_proba = rf.predict_proba(features_test_rm)
plot_roc_curve(target_test_rm, target_predicted_proba)
rf_cv(features_rm_a, target_rm_a,random_state=0, scoring='roc_auc')
# selecting index and transforming into numpy array
features_cm_a, target_cm_a = features.drop('age', axis =1).values, target.values
features_cm_a.shape
rf_cv(features_cm_a, target_cm_a, scoring="accuracy")
features_train_cm, features_test_cm, target_train_cm, target_test_cm = train_test_split(
features_cm_a, target_cm_a, test_size=0.25, random_state=0)
rf = RandomForestClassifier(n_estimators=200)
rf.fit(features_train_cm, target_train_cm)
target_predicted_proba = rf.predict_proba(features_test_cm)
plot_roc_curve(target_test_cm, target_predicted_proba)
rf_cv(features_cm_a, target_cm_a, scoring="roc_auc")
# selecting index and transforming into numpy array
features_imp = features.copy()
features.shape
# features_imp.loc[:,'is_na_age'] =features_imp.age.isnull().astype(int)
# imp = NaImputer(features) # creating our imputer instance
# features_imp = imp.basic_naimputation(columns_to_process=['age'])
features_imp = features.fillna(-1)
features_imp_a, target_imp_a = features_imp.values, target.values
rf_cv(features_imp_a, target_imp_a, scoring='accuracy')
features_train_imp, features_test_imp, target_train_imp, target_test_imp = train_test_split(
features_imp_a, target_imp_a, test_size=0.25, random_state=0)
rf = RandomForestClassifier(n_estimators=200)
rf.fit(features_train_imp, target_train_imp)
target_predicted_proba = rf.predict_proba(features_test_imp)
plot_roc_curve(target_test_imp, target_predicted_proba)
rf_cv(features_imp_a, target_imp_a, scoring='roc_auc')
rf.feature_importances_
# constructing features
features_imp = pd.concat([titanic[['pclass']],
pd.get_dummies(titanic['sex'], prefix='sex'),
pd.get_dummies(titanic['who'], prefix='who')],axis=1)
features_imp.pclass.value_counts()
#scores_imp = logreg_cv(features_imp.drop('pclass',1), target)
def insert_na(features_full=features_imp,target=target,index=False,
col_to_simulate='pclass', pct_na_toinsert=0.2, verbose=False):
"""Return a copy of the dataset with a given pct of NA injected in one column."""
nb_na_toinsert = int(pct_na_toinsert * len(features_full))
index_na_toinsert = np.random.choice(range(len(features_full)),nb_na_toinsert, replace=False)
if verbose:
print("We are inserting {} missing values".format(len(index_na_toinsert)))
features_full_imp = features_full.copy()
if index :
return index_na_toinsert
else:
features_full_imp.loc[index_na_toinsert, col_to_simulate] = np.nan
return features_full_imp
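# Quick check (a sketch): verify insert_na injects the expected number of NaNs
na_demo = insert_na(features_imp, target, col_to_simulate='pclass', pct_na_toinsert=0.2)
print("NaNs injected: {} of {}".format(na_demo['pclass'].isnull().sum(), len(na_demo)))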
def score_rf_sim(features_full=features_imp,target=target,
col_to_simulate='pclass', pct_na_toinsert=0.2, n_repeat=10,verbose=False, *args, **kwargs):
"""Insert a percentage of missing values in one column, impute them, and return the mean cross-validated score (scored here with logistic regression via logreg_cv)."""
features_full_imp = insert_na(features_full,target=target,
col_to_simulate=col_to_simulate, pct_na_toinsert=pct_na_toinsert,verbose=verbose)
imp_f = NaImputer(features_full_imp)
features_full_imp.loc[:,col_to_simulate] = imp_f.fillna_serie(colname=col_to_simulate)
# repeated cross validation
# score_rcv = 0
# for i in range(n_repeat):
# score_rcv += logreg_cv(features_full_imp, target,*args, **kwargs).mean()
return logreg_cv(features_full_imp, target, verbose=False).mean()
score_rf_sim(col_to_simulate='pclass', verbose=True)
accuracy_mean_pct_na = np.array([score_rf_sim(
pct_na_toinsert=i,col_to_simulate='pclass',verbose=True) for i in np.linspace(0,0.98,10)])
def sim_nmc(nmc=60,n_interval=5, *args, **kwargs):
res = np.zeros(n_interval)
for i in range(nmc):
res += np.array([score_rf_sim(
pct_na_toinsert=i, *args, **kwargs) for i in np.linspace(0,0.98,n_interval)])
return res/nmc
test = sim_nmc(nmc=30, n_interval=5)
test
np.linspace(0,0.98,5)
plt.plot(np.linspace(0,0.98,5), test)
plt.title('Accuracy function of percentage of missing values inserted')
features_pred = features_full.copy().drop_duplicates()
features_pred = features_pred.drop('age', axis = 1)
index_na = insert_na(col_to_simulate='pclass', pct_na_toinsert=0.2, index=True)
index_na = features_pred.index.isin(index_na)
features_pred.head()
target = features_pred.pclass
features_pred = features_pred.drop('pclass', axis = 1)
features_pred_train, target_pred_target = features_pred.loc[~index_na,:], target[~index_na]
features_pred_test, target_pred_test = features_pred.loc[index_na,:], target[index_na]
features_pred_train.shape
features_pred_test.head()
rf = RandomForestClassifier(n_estimators=200)
rf.fit(features_pred_train, target_pred_target)
target_predicted = rf.predict(features_pred_test)
target_predicted_proba = rf.predict_proba(features_pred_test)
print(classification_report(target_pred_test, target_predicted))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Playing with titanic data
Step2: Preprocessing data
Step3: Transform everything to numeric variables for the scikit-learn model
Step4: Simple Model to understand variable importances
Step5: Logistic Regression
Step6: Random Forest
Step9: Studying simple imputation technique for accuracy performance
Step10: Row Deletion strategy
Step11: Col Deletion strategy
Step12: Median Imputation strategy
Step13: Simulating missing values in a column and see imputation performance
Step16: Comparing simulated model and raw model
Step17: Trying to predict missing values
|
8,735
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-3', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
|
8,736
|
<ASSISTANT_TASK:>
Python Code:
import NotebookImport
from Imports import path
import numpy as np
import pandas as pd
def transfer_fx(x, adult_age=20):
x = np.float(x)
x = (x + 1) / (1 + adult_age)
y = np.log(x) if x <= 1 else x - 1
return y
def anti_transfer_fx(x, adult_age=20):
if x < 0:
return (1+adult_age)*np.exp(x)-1
else:
return (1+adult_age)*x+adult_age
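# Round-trip check (a sketch): the inverse transform should undo the forward one
print([transfer_fx(anti_transfer_fx(a)) for a in [-0.5, 0.0, 1.5]])  # ~[-0.5, 0.0, 1.5]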
horvath_model = pd.read_table(path + 'data/Horvath_Model.csv', index_col=0,
skiprows=[0,1])
horvath_intercept = horvath_model.CoefficientTraining['(Intercept)']
horvath_model = horvath_model.iloc[1:]
def run_horvath_model(df):
'''
Uses global variables horvath_model and horvath_intercept. At some point I should
move this to a class.
Input data-frame should be normalized using Horvath's normalization script.
'''
df = df.T.fillna(horvath_model.medianByCpG).T
df = df.ix[horvath_model.CoefficientTraining.dropna().index]
pred_age = df.T.dot(horvath_model.CoefficientTraining.dropna()) + horvath_intercept
pred_age = pred_age.apply(anti_transfer_fx)
pred_age.name = 'predicted age (Horvath)'
return pred_age
hannum_model = pd.read_csv(path + 'data/Hannum_All.csv', index_col=0)
def run_hannum_model(df):
df = df.ix[hannum_model.Coefficient.index]
pred_age = df.T.dot(hannum_model.Coefficient)
pred_age.name = 'predicted age (Hannum)'
return pred_age
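# Usage sketch with synthetic input (an assumption -- a real input would be a
# normalized beta-value DataFrame indexed by CpG probe ID)
rng = np.random.RandomState(0)
fake_betas = pd.DataFrame(rng.uniform(0, 1, size=(len(hannum_model), 2)),
                          index=hannum_model.index, columns=['s1', 's2'])
print(run_hannum_model(fake_betas))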
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Horvath's Transfer Functions
Step2: Horvath Model
Step3: Hannum Model
|
8,737
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
from __future__ import print_function
import sys
import numpy as N
import libstempo as T
import libstempo.plot as LP, libstempo.toasim as LT
T.data = T.__path__[0] + '/data/' # example files
print("Python version :",sys.version.split()[0])
print("libstempo version:",T.__version__)
print("Tempo2 version :",T.libstempo.tempo2version())
psr = T.tempopulsar(parfile = T.data + 'B1953+29_NANOGrav_dfg+12.par',
timfile = T.data + 'B1953+29_NANOGrav_dfg+12.tim')
LP.plotres(psr)
LT.make_ideal(psr)
LP.plotres(psr)
#LT.add_line(psr,f=10**6.5,A=1e-5)
LT.add_efac(psr,efac=1.0,seed=1234)
LP.plotres(psr)
LT.add_rednoise(psr,1e-12,3)
LP.plotres(psr)
LT.add_gwb(psr,flow=1e-8,gwAmp=5e-12)
LP.plotres(psr)
help(LT.add_gwb)
LT.createGWB([psr],Amp=5e-15,gam=13./3.)
LP.plotres(psr)
psr.fit()
LP.plotres(psr)
psr.savepar('B1953+29-simulate.par')
psr.savetim('B1953+29-simulate.tim')
T.purgetim('B1953+29-simulate.tim')
psr2 = T.tempopulsar(parfile = 'B1953+29-simulate.par',
timfile = 'B1953+29-simulate.tim')
LP.plotres(psr2)
psr = LT.fakepulsar(parfile=T.data+'B1953+29_NANOGrav_dfg+12.par',
obstimes=N.arange(53000,54800,30)+N.random.randn(60), # observe every 30+-1 days
toaerr=0.1)
LT.add_efac(psr,efac=1.0,seed=1234)
LP.plotres(psr)
help(LT.fakepulsar)
# create a set of times (in MJD)
obstimes = N.arange(53000, 54800, 10, dtype=N.float128)
toaerr = 1e-3 # set the (probably arbitrary) errors in the times (us)
observatory = "ao" # the observatory
obsfreq = 1440.0 # the observation frequency (MHz)
psr = T.tempopulsar(
parfile="B1953+29-simulate.par",
toas=obstimes,
toaerrs=toaerr,
observatory=observatory,
obsfreq=obsfreq,
dofit=False,
)
# get the phases in cycles (mod 1) referenced to the initial observation time
phases = psr.phaseresiduals(removemean=False)
phaseref = psr.phaseresiduals(removemean="refphs", epoch=52973.0, site="@")
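# The observation times can also be given as astropy Time objects (a sketch of
# the Time-array input path described below)
from astropy.time import Time
obstimes_t = Time(N.asarray(obstimes, dtype=float), format="mjd")
psr_t = T.tempopulsar(
    parfile="B1953+29-simulate.par",
    toas=obstimes_t,
    toaerrs=toaerr,
    observatory=observatory,
    obsfreq=obsfreq,
    dofit=False,
)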
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We open up a NANOGrav par/tim file combination with libstempo, and plot the residuals.
Step2: We now remove the computed residuals from the TOAs, obtaining (in effect) a perfect realization of the deterministic timing model. The pulsar parameters will have changed somewhat, so make_ideal calls fit() on the pulsar object.
Step3: We now add a single line of noise at $10^{6.5}$ Hz, with an amplitude of 10 us. We also put back radiometer noise, with rms amplitude equal to 1x the nominal TOA errors.
Step4: We could also add EQUAD quadrature noise (with add_equad) or its coarse-grained version (with add_jitter), but instead we prefer some red noise of "GW-like" amplitude $10^{-12}$ and spectral slope $\gamma = -3$.
Step5: Or, we may add a GW background as simulated by the tempo2 GWbkgrd plugin (see the docstring below).
Step6: Refitting will remove some of the power.
Step7: All done! We can save the resulting par and tim file, and analyze them with a favorite pipeline.
Step8: Note that currently the tim file that is output by tempo2 has a spurious "MODE 1" line that tempo2 does not like upon reloading. To erase it, you can do
Step9: And if we reload the files we get back the same thing...
Step10: It's also possible to obtain a perfect realization of the timing model described in a par file without a tim file, by specifying a new set of observation times (in MJD) and errors (in us). The observation frequency, observatory, and flags can also be specified (see the docstring below).
Step11: Rather than generating fake TOAs you might want to calculate a pulsar's phase at a particular set of times. Using the tempopulsar object you can input an arbitrary set of observation times and use the residuals to get the pulsar's relative phase. For example
Step12: The observation times can be input as an array of astropy Time objects. The TOA error values, observatory values, and observation frequencies, can also be arrays of the same length as array of observation times.
|
8,738
|
<ASSISTANT_TASK:>
Python Code:
import os
os.chdir("../eppy/useful_scripts")
# changes directory, so we are where the scripts are located
# you would normaly install eppy by doing
# python setup.py install
# or
# pip install eppy
# or
# easy_install eppy
# if you have not done so, the following three lines are needed
import sys
# pathnameto_eppy = 'c:/eppy'
pathnameto_eppy = '../../'
sys.path.append(pathnameto_eppy)
%%bash
# ignore the line above. It simply lets me run a command line from ipython notebook
python eppy_version.py --help
%%bash
# ignore the line above. It simply lets me run a command line from ipython notebook
python eppy_version.py
%%bash
# ignore the line above. It simply lets me run a command line from ipython notebook
python idfdiff.py -h
from eppy.useful_scripts import doc_images #no need to know this code, it just shows the image below
for_images = doc_images
for_images.display_png(for_images.filemerge) # display the image below
%%bash
# python idfdiff.py idd file1 file2
python idfdiff.py --html --idd ../resources/iddfiles/Energy+V7_2_0.idd ../resources/idffiles/V_7_2/constructions.idf ../resources/idffiles/V_7_2/constructions_diff.idf
from eppy.useful_scripts import doc_images #no need to know this code, it just shows the image below
from IPython.display import HTML
h = HTML(open(doc_images.idfdiff_path, 'r').read())
h
%%bash
# python idfdiff.py idd file1 file2
python idfdiff.py --csv --idd ../resources/iddfiles/Energy+V7_2_0.idd ../resources/idffiles/V_7_2/constr.idf ../resources/idffiles/V_7_2/constr_diff.idf
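%%bash
# ignore the line above. It simply lets me run a command line from ipython notebook
# A sketch: redirect the csv diff to a file, as suggested in the text below
python idfdiff.py --csv --idd ../resources/iddfiles/Energy+V7_2_0.idd ../resources/idffiles/V_7_2/constr.idf ../resources/idffiles/V_7_2/constr_diff.idf > constr_diffs.csv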
%%bash
# ignore the line above. It simply lets me run a command line from ipython notebook
python loopdiagram.py --help
%%bash
# ignore the line above. It simply lets me run a command line from ipython notebook
python loopdiagram.py ../resources/iddfiles/Energy+V7_2_0.idd ../resources/idffiles/V_7_2/plantloop.idf
from eppy.useful_scripts import doc_images #no need to know this code, it just shows the image below
for_images = doc_images
for_images.display_png(for_images.plantloop) # display the image below
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you look in the folder "./eppy/useful_scripts", you fill find the following scripts
Step2: That was useful !
Step3: Redirecting output to a file
Step4: Now let us try this with two "idf" files that are slightly different. If we open them in a file comparing software, it would look like this
Step5: There are 4 differences between the files. Let us see what idfdiff.py does with the two files. We will use the --html option to print out the diff in html format.
Step6: It does look like html
Step7: Pretty straight forward. Scroll up and look at the origin text files, and see how idfdiff.py understands the difference
Step8: We see the same output, but now in csv format. You can redirect it to a ".csv" file and open it up as a spreadsheet
Step9: Pretty straightforward. Simply open png file and you will see the loop diagram. (ignore the dot file for now. it will be documented later)
Step10: The script prints out its progress. On larger files, this might take a few seconds. If we open this file, it will look like the diagram below
|
8,739
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
image_train_url = 'https://d396qusza40orc.cloudfront.net/phoenixassets/image_train_data.csv'
image_test_url = 'https://d396qusza40orc.cloudfront.net/phoenixassets/image_test_data.csv'
image_train_data = graphlab.SFrame(image_train_url)
image_train_data.head()
image_test_data = graphlab.SFrame(image_test_url)
image_test_data.head()
raw_pixel_model = graphlab.logistic_classifier.create(image_train_data, target='label',
features=['image_array'])
# actual image labels (correct answers)
image_test_data[0:5]['label']
# model output
raw_pixel_model.predict(image_test_data[0:5])
raw_pixel_model.evaluate(image_test_data)
# deep_learning_model = graphlab.load_model('http://s3.amazonaws.com/GraphLab-Datasets/deeplearning/imagenet_model_iter45')
# image_train_data['deep_features'] = deep_learning_model.extract_features(image_train_data)
deep_features_model = graphlab.logistic_classifier.create(image_train_data,
features=['deep_features'],
target='label')
# actual image labels (correct answers)
image_test_data[0:5]['label']
# model output
deep_features_model.predict(image_test_data[0:5])
deep_features_model.evaluate(image_test_data)
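# Side-by-side summary (a sketch; the dict returned by evaluate() includes an
# 'accuracy' entry)
print("raw pixel accuracy: {}".format(raw_pixel_model.evaluate(image_test_data)['accuracy']))
print("deep feature accuracy: {}".format(deep_features_model.evaluate(image_test_data)['accuracy']))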
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load CIFAR-10 dataset
Step2: Train classifier using raw image pixels, no deep features yet
Step3: Predict five images with this raw pixel model
Step4: Raw pixel model only got one out of five predictions correct. That's an F.
Step5: The accuracy of this model is only 47.6%.
Step6: Train a classifier using the deep features
Step7: Try predicting the first five images again
Step8: It got them all correct! A+.
|
8,740
|
<ASSISTANT_TASK:>
Python Code:
import pynams
from pynams import fO2
log10fO2_NNO = fO2(celsius=1000, buffer_curve='NNO')
print(log10fO2_NNO)
from pynams import V_from_log10fO2
V_from_log10fO2(celsius=1000, log10fO2=log10fO2_NNO)
from pynams import log10fO2_from_V
logfO2 = log10fO2_from_V(celsius=1000, volts=-0.8)
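# Round-trip sanity check (a sketch): converting the buffer fO2 to a sensor
# voltage and back should recover the original log10 fO2
V_NNO = V_from_log10fO2(celsius=1000, log10fO2=log10fO2_NNO)
print(log10fO2_from_V(celsius=1000, volts=V_NNO))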
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What is the log base 10 of the fO2 in bars for a given temperature and buffer?
Step2: What does that fO2 correspond to in mV reported by an O2 sensor?
Step3: My fO2 meter is reading x mV. What fO2 does that correspond to in bars?
|
8,741
|
<ASSISTANT_TASK:>
Python Code:
# Load Biospytial modules and etc.
%matplotlib inline
import sys
sys.path.append('/apps')
sys.path.append('..')
#sys.path.append('../../spystats')
import django
django.setup()
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
## Use the ggplot style
plt.style.use('ggplot')
from external_plugins.spystats.spystats import tools as sptools
import scipy
#c_delta = lambda d : np.hstack(((4 + d),-1,np.zeros(128 - 3),-1))
#c_delta = lambda d : np.hstack(((0),-1,np.zeros(128 - 3),-1))
#C = scipy.linalg.circulant(c_delta(0.1))
def createToroidalCircularBase(d=0.1,N=128):
"""Create a circular base similar to the one described in GMRF, Rue and Held (2005)."""
c00 = np.hstack(((4 + d),-1,np.zeros(N - 3),-1))
c01 = np.hstack((-1,np.zeros(N - 1)))
c0 = np.zeros((N - 2 ,N))
c1 = np.vstack((c00,c01))
c = np.vstack((c1,c0))
c[N -1, 0] = -1
return c
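# Sanity check (a small self-contained sketch): the 2D FFT of the base array
# gives the eigenvalues of the corresponding block-circulant matrix with
# circulant blocks (BCCB), which is what the simulation below relies on
def bccb_from_base(base):
    # Dense BCCB matrix: Q[i1*n + j1, i2*n + j2] = base[(i1-i2) % n, (j1-j2) % n]
    n = base.shape[0]
    Q = np.zeros((n * n, n * n))
    for i1 in range(n):
        for j1 in range(n):
            for i2 in range(n):
                for j2 in range(n):
                    Q[i1 * n + j1, i2 * n + j2] = base[(i1 - i2) % n, (j1 - j2) % n]
    return Q
base_chk = createToroidalCircularBase(d=0.1, N=8)
eig_dense = np.sort(np.linalg.eigvalsh(bccb_from_base(base_chk)))
eig_fft = np.sort(np.fft.fft2(base_chk).real.flatten())
print(np.allclose(eig_dense, eig_fft))  # expect True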
%%time
## Create circular base
d = 0.00001
N = 100
c = createToroidalCircularBase(d=d,N=N)
## Simulate random noise (Normal distributed)
from scipy.fftpack import ifft2, fft2
zr = scipy.stats.norm.rvs(size=(c.size,2),loc=0,scale=1,random_state=1234)
zr.dtype=np.complex_
#plt.hist(zr.real)
#Lm = scipy.sqrt(C.shape[0]*C.shape[0]) * fft2(C)
Lm = fft2(c)
v = 1.0 / len(c) * fft2((Lm ** -0.5) * zr.reshape(Lm.shape))
x = v.real
plt.imshow(x,interpolation='None')
## Calculate inverse of c
C_inv = ifft2 ((fft2(c) ** -1))
plt.plot(C_inv[:,0])
%%time
vm = sptools.ExponentialVariogram(sill=0.3,range_a=0.4)
xx,yy,z = sptools.simulatedGaussianFieldAsPcolorMesh(vm,grid_sizex=100,grid_sizey=100,random_seed=1234)
plt.imshow(z)
346 / 0.151
plt.figure(figsize=(10, 5))
plt.subplot(1,2,1)
plt.imshow(z)
plt.subplot(1,2,2)
plt.imshow(x,interpolation='None')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Algorithm to simulate GMRF with block-circulant Matrix.
Step3: For benchmarking we will perform a GF simulation.
Step4: comparison
|
8,742
|
<ASSISTANT_TASK:>
Python Code:
import sys, os
from adaptivemd import Project
# Use this to completely remove the tutorial-multi project from the database.
Project.delete('tutorial-multi')
project = Project('tutorial-multi')
from adaptivemd import LocalCluster, AllegroCluster
resource = LocalCluster()
project.initialize(resource)
from adaptivemd.engine.openmm import OpenMMEngine
from adaptivemd import File, Directory
pdb_file = File('file://../files/alanine/alanine.pdb').named('initial_pdb').load()
engine = OpenMMEngine(
pdb_file=pdb_file,
system_file=File('file://../files/alanine/system.xml').load(),
integrator_file=File('file://../files/alanine/integrator.xml').load(),
args='-r --report-interval 1 -p CPU'
).named('openmm')
engine.add_output_type('master', 'master.dcd', 10)
engine.add_output_type('protein', 'protein.dcd', 1)
engine.types
project.generators.add(engine)
s = engine._create_output_str()
print s
task = project.new_trajectory(pdb_file, 100, engine=engine).run()
project.queue(task) # shortcut for project.tasks.add(task)
print project.tasks
task.trajectory
task.state
t = project.trajectories.one
t.types['protein']
print project.files
print project.trajectories
project.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Alright, let's load the package and pick the Project since we want to start a project
Step2: Let's open a project with a UNIQUE name. This will be the name used in the DB so make sure it is new and not too short. Opening a project will always create a non-existing project and reopen an exising one. You cannot chose between opening types as you would with a file. This is a precaution to not accidentally delete your project.
Step3: Now we have a handle for our project. First thing is to set it up to work on a resource.
Step4: first pick your resource -- where you want to run your simulation. Local or on Allegro
Step5: 2. Add TaskGenerators
Step6: The engine
Step7: 3. Create one intial trajectory
Step8: That is all we can do from here. To execute the tasks you need to run a worker using
Step9: Once this is done, come back here and check your results. If you want you can execute the next cell which will block until the task has been completed.
Step10: and close the project.
|
8,743
|
<ASSISTANT_TASK:>
Python Code:
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by it's index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
show_digit(54)
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
net = tflearn.input_data([None, 784])
net = tflearn.fully_connected(net, 400, activation='relu')
net = tflearn.fully_connected(net, 200, activation='relu')
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
# This model assumes that your network is named "net"
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=512, n_epoch=25)
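# Quick sanity check (a sketch): predict one test digit and compare to its label
sample_pred = np.array(model.predict(testX[:1])).argmax(axis=1)[0]
print("Predicted:", sample_pred, "Actual:", testY[0].argmax())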
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Retrieving training and test data
Step2: Visualize the training data
Step3: Building the network
Step4: Training the network
Step5: Testing
|
8,744
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
import numpy as np
from numpy import pi, sqrt, cos
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 25, 'legend.handlelength' : 1.25})
%matplotlib inline
import seaborn as sns
#sns.set(style="darkgrid")
sns.set_context("paper", font_scale=5, rc={"lines.linewidth": 1.5})
def D_matrices(N):
''' create an N x N difference matrices
D1 :: first-order, centered difference
D2 :: second-order, centered difference
'''
D2 = np.zeros((N,N))
D1 = np.zeros((N,N))
for i in range(N):
D2[i,i] = -2.
if i<N-1:
D2[i,i+1],D1[i,i+1] = 1.,-1
if i>0:
D2[i,i-1],D1[i,i-1] = 1.,1.
return D1,D2
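# Quick sanity check (a sketch): on interior points, applying D2/dz^2 to
# f(z) = z^2 should return the constant second derivative 2
Nchk = 10
dz_chk = 1.0 / Nchk
z_chk = np.arange(Nchk) * dz_chk
_, D2chk = D_matrices(Nchk)
print(np.allclose((D2chk.dot(z_chk**2) / dz_chk**2)[1:-1], 2.0))  # expect True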
data_path = 'linear_charney_num_kappa_8.npz'
charney = np.load(data_path)
kappa = charney['kappa']
phi_max = charney['e_num'][1:-1] # do no consider ghost points
N = charney['N']
# the critical level
zc = charney['c_num'].real - 1. # recall the domain has depth 1
# vertical coordinate
dz = 1./N # vertical resolution
z = np.arange(-dz/2,-1.-dz/2.,-dz) # level array
# horizontal coordinate
x = np.linspace(0,np.pi,100)
# grid
X,Z = np.meshgrid(x,z)
# wave structure in xz-plane
phi_max_abs = np.abs(phi_max)
phi_max_phase = np.arctan2(phi_max.imag,phi_max.real)
phase = np.repeat(phi_max_phase,x.size).reshape(z.size,x.size)
mag = np.repeat(phi_max_abs,x.size).reshape(z.size,x.size)
# wave structure
PSI = mag*np.cos( kappa*X + phase )
phi = charney['e_num'][:]
phi_abs = np.abs(phi)
phi_phase = np.arctan2(phi.imag,phi.real)
D1,D2 = D_matrices(N+2)
D1,D2 = np.matrix(D1),np.matrix(D2)
phi_abs_prime = np.array(D1*np.matrix(phi_abs).T)[1:-1]/(2*dz)
phi_abs_dprime = np.array(D2*np.matrix(phi_abs).T)[1:-1]/(dz**2)
phi_phase_prime = np.array(D1*np.matrix(phi_phase).T)[1:-1]/(2*dz)
phi_phase_dprime = np.array(D2*np.matrix(phi_phase).T)[1:-1]/(dz**2)
mag_prime = np.repeat(phi_abs_prime,x.size).reshape(z.size,x.size)
mag_dprime = np.repeat(phi_abs_dprime,x.size).reshape(z.size,x.size)
phase_prime = np.repeat(phi_phase_prime,x.size).reshape(z.size,x.size)
phase_dprime = np.repeat(phi_phase_dprime,x.size).reshape(z.size,x.size)
cost = np.cos( kappa*X + phase)
sint = np.sin( kappa*X + phase)
PV = (-(kappa**2)*mag + mag_dprime - mag*(phase_prime**2) )*cost \
- (2.*mag_prime*phase_prime + mag*phase_dprime)*sint
lw = 2.
aph = .5
# PV and psi wave structure
plt.figure(figsize=(12,9))
plt.contour(X,Z,1e2*PSI,np.linspace(-10,10,9),colors='k')
plt.contourf(X,Z,PV,np.linspace(-6.,6.,9),cmap='RdBu_r',extend='both')
#plt.plot(x,np.ones(x.size)*zc,'w--',linewidth=lw,alpha=1)
plt.text(-0.375,zc-.01,r' $z_c \rightarrow$',fontsize=35)
cb = plt.colorbar(extend='both',shrink=.9)
cb.ax.text(.0,1.075,'PV',rotation=0,fontsize=30)
plt.text(2.4, -.075, r"$\beta-$Eady Problem, $\kappa = 8$", size=25, rotation=0.,\
ha="center", va="center",\
bbox = dict(boxstyle="round",ec='k',fc='w'))
plt.xticks([0.,pi/4,pi/2,3*pi/4,pi],[r'$0$',r'$\pi/4$',r'$\pi/2$',\
r'$3\,\pi/4$',r'$\pi$'])
plt.ylabel('$z/H$')
plt.xlabel(r'$x/L_d$')
plt.savefig('figs/wave-structure_pv_psi_kappa_8_num.eps')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A function to compute difference matrices
Step2: Load data
Step3: set up domain
Step4: compute wave structure
|
8,745
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib
print(matplotlib.__version__)
print(matplotlib.get_backend())
matplotlib.use('nbagg')
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure()
plt.show()
# Twice as tall as it is wide:
fig = plt.figure(figsize=plt.figaspect(2.0))
plt.show()
fig = plt.figure()
ax = fig.add_subplot(111) # We'll explain the "111" later. Basically, 1 row and 1 column.
ax.set(xlim=[0.5, 4.5], ylim=[-2, 8], title='An Example Axes',
ylabel='Y-Axis', xlabel='X-Axis')
plt.show()
ax.set_xlim([0.5, 4.5])
ax.set_ylim([-2, 8])
ax.set_title('An Example Axes')
ax.set_ylabel('Y-Axis')
ax.set_xlabel('X-Axis')
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot([1, 2, 3, 4], [10, 20, 25, 30], color='lightblue', linewidth=3)
ax.scatter([0.3, 3.8, 1.2, 2.5], [11, 25, 9, 26], color='darkgreen', marker='^')
ax.set_xlim(0.5, 4.5)
plt.show()
plt.plot([1, 2, 3, 4], [10, 20, 25, 30], color='lightblue', linewidth=3)
plt.scatter([0.3, 3.8, 1.2, 2.5], [11, 25, 9, 26], color='darkgreen', marker='^')
plt.xlim(0.5, 4.5)
plt.show()
fig, axes = plt.subplots(nrows=2, ncols=2)
plt.show()
fig, axes = plt.subplots(nrows=2, ncols=2)
axes[0,0].set(title='Upper Left')
axes[0,1].set(title='Upper Right')
axes[1,0].set(title='Lower Left')
axes[1,1].set(title='Lower Right')
# To iterate over all items in a multidimensional numpy array, use the `flat` attribute
for ax in axes.flat:
# Remove all xticks and yticks...
ax.set(xticks=[], yticks=[])
plt.show()
%load exercises/1.1-subplots_and_basic_plotting.py
import numpy as np
import matplotlib.pyplot as plt
# Try to reproduce the figure shown in images/exercise_1-1.png
# Our data...
x = np.linspace(0, 10, 100)
y1, y2, y3 = np.cos(x), np.cos(x + 1), np.cos(x + 2)
names = ['Signal 1', 'Signal 2', 'Signal 3']
# Can you figure out what to do next to plot x vs y1, y2, and y3 on one figure?
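# One possible solution (a sketch -- the reference figure itself isn't shown here):
fig, ax = plt.subplots()
for y, name in zip([y1, y2, y3], names):
    ax.plot(x, y, label=name)
ax.legend()
plt.show()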
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Normally we wouldn't need to think about this too much, but IPython/Jupyter notebooks behave a touch differently than "normal" python.
Step2: On with the show!
Step3: Figures
Step4: Awww, nothing happened! This is because by default mpl will not show anything until told to do so, as we mentioned earlier in the "backend" discussion.
Step5: Great, a blank figure! Not terribly useful yet.
Step6: Axes
Step7: Notice the call to set. Matplotlib's objects typically have lots of "explicit setters" -- in other words, functions that start with set_<something> and control a particular option.
Step8: Clearly this can get repetitive quickly. Therefore, Matplotlib's set method can be very handy. It takes each kwarg you pass it and tries to call the corresponding "setter". For example, foo.set(bar='blah') would call foo.set_bar('blah').
Step9: Axes methods vs. pyplot
Step10: Much cleaner, and much clearer! So, why will most of my examples not follow the pyplot approach? Because PEP20 "The Zen of Python" says
Step11: plt.subplots(...) created a new figure and added 4 subplots to it. The axes object that was returned is a 2D numpy object array. Each item in the array is one of the subplots. They're laid out as you see them on the figure.
Step12: One really nice thing about plt.subplots() is that when it's called with no arguments, it creates a new figure with a single subplot.
|
8,746
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import utilsNetwork
networkShp = '/home/openquake/GEM/Lifelines/Building_NetworkNew/Input_files/EntireNetwork/mo_FINAL.shp'
(shpAdj,maxNumConn) = utilsNetwork.shp_adj(networkShp)
resultsFolder = '/media/sf_Shared_Folder/Paper_Scenarios/2016-09-Revision/'
maxDist = 20
maxDistBr = 20
shpAdjMaxDist = utilsNetwork.divide_edges(shpAdj,maxNumConn,maxDist)
#Option A
bridgesShp = '/home/openquake/GEM/Lifelines/Building_NetworkNew/Input_files/EntireNetwork/mo_br.shp'
brAdj = utilsNetwork.shp_adj_br(bridgesShp,maxDistBr,resultsFolder)
brRows = utilsNetwork.one_node_per_bridge(brAdj,shpAdjMaxDist,resultsFolder)
##### OR #####
#Option B
#brNodes = '/media/sf_Shared_Folder/testFilesExposure/br-simple.csv'
#brRows = utilsNetwork.find_br_rows_closest_coord(shpAdjMaxDist, brNodes)
#brRows = utilsNetwork.find_br_rows_exact_coord(shpAdjMaxDist,brNodes)
#Option A
brPerpNodes = '/home/openquake/GEM/Lifelines/Building_NetworkNew/Input_files/EntireNetwork/br_nodes+links_nodes_mo.csv'
brPerpRows = utilsNetwork.find_br_rows_exact_coord(shpAdjMaxDist,brPerpNodes)
brPerpRows = utilsNetwork.find_br_rows_closest_coord(shpAdjMaxDist,brPerpNodes)
#Option B
#brPerpRows = 0
limit_length1 = 40.
limit_length2 = 150.
adj = utilsNetwork.sim_adj(shpAdjMaxDist,brRows,brPerpRows,maxDist,limit_length1,limit_length2)
(nodes, edges, weights) = utilsNetwork.save_files_networkx(adj,maxNumConn,resultsFolder)
import networkx as nx
#Directed graph:
G = nx.DiGraph()
#Undirected graph:
#G = nx.Graph()
(plotN,G,pos,pngLimits) = utilsNetwork.draw_network(nodes, edges, weights,G)
#nodes = '/media/sf_Shared_Folder/testFilesExposure/network-nodes.txt'
#Option A: Select only the nodes that are bridges
utilsNetwork.exposure_only_br(nodes,adj,resultsFolder)
##### OR #####
#Option B: Select all the nodes (bridges and pavement)
#utilsNetwork.exposure_br_pav(nodes,adj,resultsFolder)
from time import gmtime, strftime
print "Task started at "+strftime("%Y-%m-%d %H:%M:%S", gmtime())
zoneA = './Input_files/ShpZones/ZonaA-buffer.shp'
zoneB = './Input_files/ShpZones/ZonaB-buffer.shp'
zoneC = './Input_files/ShpZones/ZonaC-buffer.shp'
zoneD = './Input_files/ShpZones/ZonaD-buffer.shp'
pathNodes = '/media/sf_Shared_Folder/testFilesExposure/network-ff.txt'
utilsNetwork.RSA_zones(zoneA,zoneB,zoneC,zoneD,pathNodes,resultsFolder)
print "Task ended at "+strftime("%Y-%m-%d %H:%M:%S", gmtime())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Specify the location of the vector GIS file containing the network
Step2: Specify the location of the folder where the results will be saved
Step3: Specify maxDist, the maximum distance between 2 points in the network (in km). If an edge is longer than maxDist, it will be split in two, creating a new point.
Step4: Choose between option A or B
Step5: Option A
Step6: In this case we assume four types of assets
Step7: Choose between a directed or undirected graph (if the direction of the edges is taken into account or not)
Step8: Option A
Step9: If the fragility functions depend on the zone where the asset is located, provide files with the corresponding zones below
|
8,747
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from numpy import fft
from numpy import linalg as LA
from scipy import ndimage
from scipy import signal
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import os
%matplotlib inline
def int2intvec(a):
    """
    Auxiliary function to recover a vector with the digits of a
    given integer (in inverse order).

    `a` : integer
    """
digit = a%10
vec = np.array([digit],dtype=int)
a = (a-digit)/10
while a!=0:
digit = a%10
vec = np.append(vec,int(digit))
a = (a-digit)/10
return vec
ALPHABET7 = "0123456"
ALPHABET10 = "0123456789"
def base_encode(num, alphabet):
    """
    Encode a number in base X.

    `num`: The number to encode
    """
if (str(num) == alphabet[0]):
return int(0)
arr = []
base = len(alphabet)
while num:
rem = num % base
num = num // base
arr.append(alphabet[rem])
arr.reverse()
return int(''.join(arr))
def base7to10(num):
    """
    Convert a number from base 7 to base 10.

    `num`: The number to convert
    """
arr = int2intvec(num)
num = 0
for i in range(len(arr)):
num += arr[i]*(7**(i))
return num
def base10to7(num):
    """
    Convert a number from base 10 to base 7.

    `num`: The number to convert
    """
return base_encode(num, ALPHABET7)
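# Quick round-trip check between the two bases (illustrative value):
assert base7to10(base10to7(11)) == 11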
def rgb2gray(rgb):
    """
    Convert an image from RGB to grayscale.

    `rgb`: The image to convert
    """
r, g, b = rgb[:,:,0], rgb[:,:,1], rgb[:,:,2]
gray = 0.2989 * r + 0.5870 * g + 0.1140 * b
return gray
def oversampling(image, factor = 7):
    """
    Oversample a grayscale image by a certain factor, dividing each
    pixel into factor*factor subpixels with the same intensity.

    `image`: The image to oversample
    `factor`: The oversampling factor
    """
old_shape = image.shape
new_shape = (factor*old_shape[0], factor*old_shape[1])
new_image = np.zeros(new_shape, dtype = image.dtype)
for i in range(old_shape[0]):
for j in range(old_shape[1]):
new_image[factor*i:factor*i+factor,factor*j:factor*j+factor] = image[i,j]*np.ones((factor,factor))
return new_image
# The centered hyperpel
hyperpel = np.array([\
[-1,4],[0,4],[1,4],[2,4],[3,4],\
[-2,3],[-1,3], [0,3], [1,3], [2,3], [3,3], [4,3],\
[-2,2],[-1,2], [0,2], [1,2], [2,2], [3,2], [4,2],\
[-3,1],[-2,1],[-1,1], [0,1], [1,1], [2,1], [3,1], [4,1],[5,1],\
[-3,0],[-2,0],[-1,0], [0,0], [1,0], [2,0], [3,0], [4,0],[5,0],\
[-2,-1],[-1,-1], [0,-1], [1,-1], [2,-1], [3,-1], [4,-1],\
[-2,-2],[-1,-2], [0,-2], [1,-2], [2,-2], [3,-2], [4,-2],\
[-1,-3], [0,-3], [1,-3], [2,-3], [3,-3]])
hyperpel_sa = hyperpel - np.array([1,1])
def sa2hex(spiral_address):
# Split the number in basic unit and call the auxiliary function
# Here we reverse the order, so that the index corresponds to the
# decimal position
digits = str(spiral_address)[::-1]
hex_address = np.array([0,0])
for i in range(len(digits)):
if int(digits[i])<0 or int(digits[i])>6:
print("Invalid spiral address!")
return
elif digits[i]!= '0':
hex_address += sa2hex_aux(int(digits[i]),i)
return hex_address
# This computes the row/column positions of the base cases,
# that is, in the form a*10^(zeros).
def sa2hex_aux(a, zeros):
# Base cases
if zeros == 0:
if a == 0:
return np.array([0,0])
elif a == 1:
return np.array([0,8])
elif a == 2:
return np.array([-7,4])
elif a == 3:
return np.array([-7,-4])
elif a == 4:
return np.array([0,-8])
elif a == 5:
return np.array([7,-4])
elif a == 6:
return np.array([7,4])
return sa2hex_aux(a,zeros-1)+ 2*sa2hex_aux(a%6 +1,zeros-1)
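# Base-case check: spiral address 1 maps to the hex offset [0, 8]
# (see the zeros == 0 branch of sa2hex_aux above).
print(sa2hex(1))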
def sa_value(oversampled_image,spiral_address):
    """
    Computes the value of the hyperpel corresponding to the given
    spiral coordinate.
    """
hp = hyperpel_sa + sa2hex(spiral_address)
val = 0.
for i in range(56):
val += oversampled_image[hp[i,0],hp[i,1]]
return val/56
def spiral_add(a,b,mod=0):
addition_table = [
[0,1,2,3,4,5,6],
[1,63,15,2,0,6,64],
[2,15,14,26,3,0,1],
[3,2,26,25,31,4,0],
[4,0,3,31,36,42,5],
[5,6,0,4,42,41,53],
[6,64,1,0,5,53,52]
]
dig_a = int2intvec(a)
dig_b = int2intvec(b)
    if (dig_a<0).any() or (dig_a>6).any() \
    or (dig_b<0).any() or (dig_b>6).any():
print("Invalid spiral address!")
return
if len(dig_a) == 1 and len(dig_b)==1:
return addition_table[a][b]
if len(dig_a) < len(dig_b):
dig_a.resize(len(dig_b))
elif len(dig_b) < len(dig_a):
dig_b.resize(len(dig_a))
res = 0
for i in range(len(dig_a)):
if i == len(dig_a)-1:
res += spiral_add(dig_a[i],dig_b[i])*(10**i)
else:
temp = spiral_add(dig_a[i],dig_b[i])
res += (temp%10)*(10**i)
carry_on = spiral_add(dig_a[i+1],(temp - temp%10)/10)
dig_a[i+1] = str(carry_on)
if mod!=0:
return res%mod
return res
def spiral_mult(a,b, mod=0):
multiplication_table = [
[0,0,0,0,0,0,0],
[0,1,2,3,4,5,6],
[0,2,3,4,5,6,1],
[0,3,4,5,6,1,2],
[0,4,5,6,1,2,3],
[0,5,6,1,2,3,4],
[0,6,1,2,3,4,5],
]
dig_a = int2intvec(a)
dig_b = int2intvec(b)
    if (dig_a<0).any() or (dig_a>6).any() \
    or (dig_b<0).any() or (dig_b>6).any():
print("Invalid spiral address!")
return
sa_mult = int(0)
for i in range(len(dig_b)):
for j in range(len(dig_a)):
temp = multiplication_table[dig_a[j]][dig_b[i]]*(10**(i+j))
sa_mult=spiral_add(sa_mult,temp)
if mod!=0:
return sa_mult%mod
return sa_mult
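# Minimal sanity check of the spiral arithmetic: for single-digit
# addresses the result is a direct lookup in the tables above.
assert spiral_add(3, 5) == 4    # addition_table[3][5]
assert spiral_mult(3, 5) == 1   # multiplication_table[3][5]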
def omegaf(fft_oversampled, sa):
    """
    Evaluates the vector omegaf corresponding to the given
    spiral address sa.

    `fft_oversampled`: the oversampled FFT of the image
    `sa`: the spiral address where to compute the vector
    """
omegaf = np.zeros(6, dtype=fft_oversampled.dtype)
for i in range(1,7):
omegaf[i-1] = sa_value(fft_oversampled,spiral_mult(sa,i))
return omegaf
def invariant(fft_oversampled, sa1,sa2,sa3):
    """
    Evaluates the generalized invariant of f on sa1, sa2 and sa3.

    `fft_oversampled`: the oversampled FFT of the image
    `sa1`, `sa2`, `sa3`: the spiral addresses where to compute the invariant
    """
omega1 = omegaf(fft_oversampled,sa1)
omega2 = omegaf(fft_oversampled,sa2)
omega3 = omegaf(fft_oversampled,sa3)
# Attention: np.vdot uses the scalar product with the complex
# conjugation at the first place!
return np.vdot(omega1*omega2,omega3)
def bispectral_inv(fft_oversampled_example, rotational = False):
    """
    Computes the (rotational) bispectral invariants for any sa1
    and any sa2 in the above picture.

    `fft_oversampled_example`: oversampled FFT of the image
    `rotational`: if True, we compute the rotational bispectrum
    """
if rotational == True:
bispectrum = np.zeros(9**2*6,dtype = fft_oversampled_example.dtype)
else:
bispectrum = np.zeros(9**2,dtype = fft_oversampled_example.dtype)
indexes = [0,1,10,11,12,13,14,15,16]
count = 0
for i in range(9):
sa1 = indexes[i]
sa1_base10 = base7to10(sa1)
for k in range(9):
sa2 = indexes[k]
if rotational == True:
for r in range(6):
sa2_rot = spiral_mult(sa2,r)
sa2_rot_base10 = base7to10(sa2_rot)
sa3 = base10to7(sa1_base10+sa2_rot_base10)
bispectrum[count]=invariant(fft_oversampled_example,sa1,sa2,sa3)
count += 1
else:
sa2_base10 = base7to10(sa2)
sa3 = base10to7(sa1_base10+sa2_base10)
bispectrum[count]=invariant(fft_oversampled_example,sa1,sa2,sa3)
count += 1
return bispectrum
example = 1 - rgb2gray(plt.imread('./test-images/butterfly.png'))
fft_example = np.fft.fftshift(np.fft.fft2(example))
fft_oversampled_example = oversampling(fft_example)
%%timeit
bispectral_inv(fft_oversampled_example)
%%timeit
bispectral_inv(fft_oversampled_example, rotational=True)
folder = './test-images'
def evaluate_invariants(image, rot = False):
    """
    Evaluates the invariants of the given image.

    `image`: the matrix representing the image (not oversampled)
    `rot`: if True we compute the rotational bispectrum
    """
# compute the normalized FFT
fft = np.fft.fftshift(np.fft.fft2(image))
    fft /= LA.norm(fft)
# oversample it
fft_oversampled = oversampling(fft)
return bispectral_inv(fft_oversampled, rotational = rot)
%%timeit
evaluate_invariants(example)
%%timeit
evaluate_invariants(example, rot = True)
def bispectral_folder(folder_name = folder, rot = False):
    """
    Evaluates all the invariants of the images in the selected folder,
    storing them in a dictionary with their names as keys.

    `folder_name`: path to the folder
    `rot`: if True we compute the rotational bispectrum
    """
# we store the results in a dictionary
results = {}
for filename in os.listdir(folder_name):
infilename = os.path.join(folder_name, filename)
if not os.path.isfile(infilename):
continue
base, extension = os.path.splitext(infilename)
if extension == '.png':
test_img = 1 - rgb2gray(plt.imread(infilename))
bispectrum = evaluate_invariants(test_img, rot = rot)
results[os.path.splitext(filename)[0]] = bispectrum
return results
def bispectral_comparison(bispectrums, comparison = 'triangle', plot = True, log_scale = True):
    """
    Returns the difference of the norms of the given invariants w.r.t. the
    comparison element.

    `bispectrums`: a dictionary with the names of the images as keys and
                   their invariants as values
    `comparison`: the element to use as comparison
    """
if comparison not in bispectrums:
print("The requested comparison is not in the folder")
return
bispectrum_diff = {}
for elem in bispectrums:
diff = LA.norm(bispectrums[elem]-bispectrums[comparison])
# we remove nan results
if not np.isnan(diff):
bispectrum_diff[elem] = diff
return bispectrum_diff
def bispectral_plot(bispectrums, comparison = 'triangle', log_scale = True):
    """
    Plots the difference of the norms of the given invariants w.r.t. the
    comparison element (by default in logarithmic scale).

    `bispectrums`: a dictionary with the names of the images as keys and
                   their invariants as values
    `comparison`: the element to use as comparison
    `log_scale`: whether the plot should be in log scale
    """
    bispectrum_diff = bispectral_comparison(bispectrums, comparison=comparison)
    diff_values = list(bispectrum_diff.values())
    diff_keys = list(bispectrum_diff.keys())
    plt.plot(diff_values, 'ro')
    if log_scale == True:
        plt.yscale('log')
    for i in range(len(diff_values)):
        # if we plot in log scale, we do not put labels on items that are
        # too small, otherwise they exit the plot area.
        if log_scale and diff_values[i] < 10**(-3):
            continue
        plt.text(i, diff_values[i], diff_keys[i][:3])
plt.title("Comparison with as reference '"+ comparison +"'")
comparisons_paper = ['triangle', 'rectangle', 'ellipse', 'etoile', 'diamond']
def extract_table_values(bispectrums, comparisons = comparisons_paper):
    """
    Extract the values for the table of the paper.

    `bispectrums`: a dictionary with the names of the images as keys and
                   their invariants as values
    `comparisons`: list of elements to use as comparison

    Returns a list of tuples. Each tuple contains the name of the comparison
    element, the maximal value of the difference of the norm of the invariants
    with its rotated versions, and the minimal value of the same difference
    with the other images.
    """
table_values = []
for elem in comparisons:
diff = bispectral_comparison(bispectrums, comparison= elem, plot=False)
l = len(elem)
match = [x for x in diff.keys() if x[:l]==elem]
not_match = [x for x in diff.keys() if x[:l]!=elem]
max_match = max([ diff[k] for k in match ])
min_not_match = min([ diff[k] for k in not_match ])
table_values.append((elem,'%.2E' % (max_match),'%.2E' % min_not_match))
return table_values
bispectrums = bispectral_folder()
bispectrums_rotational = bispectral_folder(rot=True)
extract_table_values(bispectrums)
extract_table_values(bispectrums_rotational)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step7: Auxiliary functions
Step8: Spiral architecture implementation
Step9: We now compute, in sa2hex, the address of the center of the hyperpel corresponding to a certain spiral address.
Step11: Then, we compute the value of the hyperpel corresponding to the spiral address, by averaging the values on the subpixels.
Step12: Spiral addition and multiplication
Step14: Computation of the bispectrum
Step16: Then, we can compute the "generalized invariant" corresponding to $\lambda_1$, $\lambda_2$ and $\lambda_3$, starting from the FFT of the image.
Step18: Finally, this function computes the bispectrum (or the rotational bispectrum) corresponding to the spiral addresses in the following picture.
Step19: Some timing tests.
Step21: Tests
Step25: Some timing tests.
Step27: Construction of the table for the paper
|
8,748
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
# Check that GPU is available: cf. https://colab.research.google.com/notebooks/gpu.ipynb
assert(tf.test.gpu_device_name())
tf.keras.backend.clear_session()
tf.config.optimizer.set_jit(False) # Start with XLA disabled.
def load_data():
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train = x_train.astype('float32') / 256
x_test = x_test.astype('float32') / 256
# Convert class vectors to binary class matrices.
y_train = tf.keras.utils.to_categorical(y_train, num_classes=10)
y_test = tf.keras.utils.to_categorical(y_test, num_classes=10)
return ((x_train, y_train), (x_test, y_test))
(x_train, y_train), (x_test, y_test) = load_data()
def generate_model():
return tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3, 3), padding='same', input_shape=x_train.shape[1:]),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Conv2D(32, (3, 3)),
tf.keras.layers.Activation('relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Dropout(0.25),
tf.keras.layers.Conv2D(64, (3, 3), padding='same'),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Conv2D(64, (3, 3)),
tf.keras.layers.Activation('relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Dropout(0.25),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(10),
tf.keras.layers.Activation('softmax')
])
model = generate_model()
def compile_model(model):
opt = tf.keras.optimizers.RMSprop(lr=0.0001, decay=1e-6)
model.compile(loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
return model
model = compile_model(model)
def train_model(model, x_train, y_train, x_test, y_test, epochs=25):
model.fit(x_train, y_train, batch_size=256, epochs=epochs, validation_data=(x_test, y_test), shuffle=True)
def warmup(model, x_train, y_train, x_test, y_test):
# Warm up the JIT, we do not wish to measure the compilation time.
initial_weights = model.get_weights()
train_model(model, x_train, y_train, x_test, y_test, epochs=1)
model.set_weights(initial_weights)
warmup(model, x_train, y_train, x_test, y_test)
%time train_model(model, x_train, y_train, x_test, y_test)
scores = model.evaluate(x_test, y_test, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
# We need to clear the session to enable JIT in the middle of the program.
tf.keras.backend.clear_session()
tf.config.optimizer.set_jit(True) # Enable XLA.
model = compile_model(generate_model())
(x_train, y_train), (x_test, y_test) = load_data()
warmup(model, x_train, y_train, x_test, y_test)
%time train_model(model, x_train, y_train, x_test, y_test)
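# Optionally evaluate the XLA-trained model as well, mirroring the earlier
# non-XLA evaluation cell:
scores = model.evaluate(x_test, y_test, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])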
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We define the model, adapted from the Keras CIFAR-10 example
Step2: We train the model using the standard (non-XLA) path and time it.
Step3: Now let's train the model again, using the XLA compiler.
|
8,749
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
def compute_mean(list_of_numbers):
return float(sum(list_of_numbers))/len(list_of_numbers)
def count_elements_greater_than(list_of_numbers, threshold):
bool_list = [number >= threshold for number in list_of_numbers]
return bool_list.count(True)
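# Quick check of the two helpers on toy data:
print(compute_mean([1, 2, 3]))                     # 2.0
print(count_elements_greater_than([1, 5, 10], 5))  # 2 (values >= 5)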
input_file_name = './input-precipitazioni.txt'
threshold = 100
with open(input_file_name, 'r') as input_file:
file_rows = input_file.readlines()
file_rows
years = file_rows.pop(0).rstrip().split()
years
file_rows
months = [row.rstrip().split()[0] for row in file_rows]
months
rains_per_month = [list(map(int, row.rstrip().split()[1:])) for row in file_rows]
rains_per_month
rains_per_month = np.array(rains_per_month)
rains_per_month
rains_per_year = rains_per_month.transpose()
rains_per_year
monthly_averages = [compute_mean(rain_list) for rain_list in rains_per_month]
monthly_averages
months = [month[:3].upper() for month in months]
monthly_output = list(zip(months, monthly_averages))
monthly_output
yearly_total = [sum(rain_list) for rain_list in rains_per_year]
yearly_total
yearly_output1 = list(zip(years, yearly_total))
yearly_output1
yearly_count = [count_elements_greater_than(rain_list, threshold) for rain_list in rains_per_year]
yearly_count
yearly_output2 = list(zip(years, yearly_count))
yearly_output2
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2) Definition of the compute_mean() function
Step2: 3) Definition of the count_elements_greater_than() function
Step3: 4) Definition of the input parameters
Step4: 5) Reading the dataset into the list of its rows
Step5: 6) Extraction of the list of years
Step6: NOTE
Step7: 7) Extraction of the list of months
Step8: 8) Construction of the matrix of (integer) rainfall values
Step9: b) Convert the list into a matrix.
Step10: NOTE
Step11: NOTE
Step12: NOTE
Step13: 11) Output of the total annual precipitation
Step14: b) Construction of the output list of tuples of size 2 (one per year), where each tuple contains the year as its first element and the total precipitation as its second element.
Step15: 12) Output of the annual number of months with at least threshold mm of rain
Step16: b) Construction of the output list of N=7 tuples of size 2, where each tuple contains the year as its first element and the number of months with at least threshold mm of rain as its second element.
|
8,750
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
import pandas as pd
import os
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from time import time
from mclearn.experiment import ActiveExperiment, load_results, save_results
from mclearn.tools import log
from sklearn.externals import joblib
from matplotlib.ticker import FuncFormatter
%matplotlib inline
sns.set_style('white')
RUN_EXPERIMENTS = False
uci_sets = ['glass', 'ionosphere', 'iris', 'magic', 'miniboone',
'pageblocks', 'pima', 'sonar', 'vehicle', 'wine', 'wpbc']
datasets = sorted(uci_sets + ['sdss'])
methods = ['passive', 'margin', 'w-margin', 'confidence',
'w-confidence', 'entropy', 'w-entropy',
           'qbb-margin', 'qbb-kl', 'thompson', 'ocucb', 'klucb',
'exp++', 'borda', 'geometric', 'schulze']
def run_expt(X, y, dataset, scale=True):
log(dataset, end='')
for method in methods:
log('.', end='')
expt = ActiveExperiment(X, y, dataset, method, scale)
expt.run_policies()
expt = ActiveExperiment(X, y, dataset, None, scale)
expt.run_asymptote()
log('')
if RUN_EXPERIMENTS:
for dataset in uci_sets:
data_path = os.path.join('data', dataset + '.csv')
data = pd.read_csv(data_path)
X, y = data.iloc[:, 1:], data['target']
run_expt(X, y, dataset)
data_path = os.path.join('data', 'sdss.h5')
data = pd.read_hdf(data_path, 'sdss')
class_idx = data.columns.get_loc('class')
X, y = data.iloc[:, (class_idx+1):], data['class']
run_expt(X, y, 'sdss', False)
if RUN_EXPERIMENTS:
for (i, dataset) in enumerate(datasets):
maximum = {}
measures = ['f1', 'accuracy', 'mpba']
for measure in measures:
asymptote_measure = 'asymptote_' + measure
max_measure = 'max_' + measure
results = {}
for method in methods:
                results[method] = load_results(dataset, method, measure, True)
results['asymptote'] = load_results(dataset, 'asymptote', asymptote_measure, True)
maximum[max_measure] = results['asymptote']
for method in methods:
maximum[max_measure] = max(maximum[max_measure], max(results[method]))
save_results(dataset, 'max', maximum)
def run_expt(X, y, dataset, scale=True):
log(dataset, end='')
for method in methods:
log('.', end='')
expt = ActiveExperiment(X, y, dataset, method, scale=scale, passive=False)
expt.run_policies()
methods = ['thompson', 'ocucb', 'klucb',
'exp++', 'borda', 'geometric', 'schulze']
if RUN_EXPERIMENTS:
for dataset in uci_sets:
data_path = os.path.join('data', dataset + '.csv')
data = pd.read_csv(data_path)
X, y = data.iloc[:, 1:], data['target']
run_expt(X, y, dataset)
data_path = os.path.join('data', 'sdss.h5')
data = pd.read_hdf(data_path, 'sdss')
class_idx = data.columns.get_loc('class')
X, y = data.iloc[:, (class_idx+1):], data['class']
run_expt(X, y, 'sdss', False)
def calculate_strength(asymptote, passive, policy):
deficiency = np.sum(asymptote - policy) / np.sum(asymptote - passive)
strength = 1 - deficiency
return strength
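# Toy illustration (hypothetical curves): strength is 1 when the policy
# matches the asymptote everywhere and 0 when it matches passive learning.
asy = np.array([0.9, 0.9, 0.9])
pas = np.array([0.5, 0.6, 0.7])
act = np.array([0.7, 0.8, 0.85])
print(calculate_strength(asy, pas, act))  # ~0.61, between passive and asymptote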
def plot_mpba_strength():
fig = plt.figure(figsize=(15, 20))
fig.subplots_adjust(hspace=.6)
for (i, dataset) in enumerate(datasets):
results = {}
for method in methods:
results[method] = load_results(dataset, method, 'mpba', True)
results['max'] = load_results(dataset, 'max', 'max_mpba')
strength_dict = {}
for method in methods:
s = calculate_strength(results['max'], results['passive'], results[method])
strength_dict[method] = [s]
strength_df = pd.DataFrame(strength_dict)
        strength_df = strength_df[strength_df.iloc[0].sort_values(ascending=False).index]
ax = fig.add_subplot(6, 2, i + 1)
# set bar colours
palette = {}
for method in methods:
            if strength_df.loc[0, method] > strength_df.loc[0, 'passive']:
palette[method] = sns.color_palette()[0]
else:
palette[method] = sns.color_palette()[2]
sns.barplot(data=strength_df, ax=ax, palette=palette)
ax.set_title(dataset)
ax.set_ylim(-0.3, 0.8)
ax.set_xticklabels(strength_df.columns, rotation=45, rotation_mode='anchor', ha='right')
# set bar width
new_width = 0.5
for bar in ax.patches:
x = bar.get_x()
width = bar.get_width()
centre = x + new_width / 2.
bar.set_x(centre - new_width / 2.)
bar.set_width(new_width)
#fig.savefig('strengths.pdf', bbox_inches='tight')
plt.show()
# methods = ['thompson', 'ocucb', 'klucb',
# 'exp++', 'borda', 'geometric', 'schulze']
# methods += [method + '-wop' for method in methods]
# methods += ['passive']
plot_mpba_strength()
def plot_learning_curves():
selected_methods = ['passive', 'confidence', 'borda', 'exp++']
format_as_percent_plot = lambda x, pos: "{:.0f}%".format(x * 100)
fig = plt.figure(figsize=(15, 20))
for (i, dataset) in enumerate(datasets):
learning_curves = {}
for method in selected_methods:
learning_curves[method] = load_results(dataset, method, 'mpba', True)
maximum = load_results(dataset, 'max', 'max_mpba')
sample_size = learning_curves['passive'].shape[0] + 49
ax = fig.add_subplot(4, 3, i + 1)
for method in selected_methods:
xticks = np.arange(50, 50 + len(learning_curves[method]))
ax.plot(xticks, learning_curves[method], label=method, linewidth=1)
ax.legend(loc='lower right', frameon=True)
ax.get_yaxis().set_major_formatter(FuncFormatter(format_as_percent_plot))
ax.set_title(dataset)
ax.tick_params(top='off')
ax.plot([50, sample_size], [maximum, maximum], ls='--', color='#377eb8')
ax.set_xlim(50, sample_size)
fig.savefig('learning_curves.pdf', bbox_inches='tight')
plt.show()
plot_learning_curves()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Experiment
Step2: No passive arm
Step3: Results
|
8,751
|
<ASSISTANT_TASK:>
Python Code:
# Imports needed by this analysis (T, ST, f, t and N_final come from earlier
# cells of the notebook that are not shown here):
import datetime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from numpy import pi
from numpy.fft import fft, fftshift
from scipy.signal import correlate
from scipy.optimize import least_squares

plt.plot(ST.index[1:], np.log10(ST.values)[1:])
#plt.plot(f[1:N_final], 20.0 / len(T) * np.log10(np.abs(ST[1:N_final])))
plt.xlabel('frequency in days')
plt.ylabel('Power')
plt.title('T spectra')
rx = (1. / N) * correlate(T, T, mode = 'same')
plt.plot(fftshift(rx)[0:N//2])
# THIS METHOD OF FINDING THE PEAKS IS COMPLETELY NON-ROBUST
a = np.argsort(-ST.values.flatten())
# Frequeny in units of days
f0_yr = ST.index[a[0]]
f0_d = ST.index[a[1]]
print(f0_yr, f0_d)
print(a[0:25])
f_yr = f[52]
f_d = f[19012]
f_hd = f[38024]
print(f_yr, f_d, f_hd)
unix_birth = datetime.datetime(1970, 1, 1)
time_in_days = lambda t: (t - unix_birth).total_seconds() / 86400 # 86400 = timedelta(days=1).total_seconds()
t_days = np.fromiter(map(time_in_days, t), np.float64) # Time indices in units of days
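# Sanity check of the conversion: one day after the Unix epoch -> 1.0
print(time_in_days(datetime.datetime(1970, 1, 2)))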
# Error functions for sinusoidal regression
def err_f0(theta): # No frequency optimization
a_yr, a_d, a_hd, phi_yr, phi_d, phi_hd = theta
syr = a_yr * np.sin(2*pi*f_yr*t_days + phi_yr)
sd = a_d * np.sin(2*pi*f_d*t_days + phi_d)
shd = a_hd * np.sin(2*pi*f_hd*t_days + phi_hd)
return T - syr - sd - shd
res = least_squares(err_f0, (1, 1, 1, 0, 0, 0), method='lm', loss='linear', verbose=1)
a_yr, a_d, a_hd, phi_yr, phi_d, phi_hd = res.x
print('theta0:', res.x)
print('Optimality:', res.optimality)
print('status:', res.status)
print('message:', res.message)
print('success:', res.success)
x_hat = a_yr * np.sin(2*pi*f_yr*t_days + phi_yr) + a_d * np.sin(2*pi*f_d*t_days + phi_d) +\
a_hd * np.sin(2*pi*f_hd*t_days + phi_hd)
fig, axes = plt.subplots(3, 1)
ax0, ax1, ax2 = axes
ax0.plot(t, x_hat); ax0.plot(t, T, alpha=0.5)
ax1.plot(t[0:100000], x_hat[0:100000]); ax1.plot(t[0:100000], T[0:100000], alpha=0.5)
ax2.plot(t[0:1000], x_hat[0:1000]); ax2.plot(t[0:1000], T[0:1000], alpha=0.5)
T1 = T - x_hat
plt.plot(t, T1)
plt.plot(t, T, alpha = 0.5)
N = len(T1)
ST1 = fft(T1)[:N // 2]
ST1 = pd.DataFrame(index=f, data=np.abs(ST1))
plt.plot(ST1.index, np.log10(ST1.values))
#plt.plot(f[1:N_final], 20.0 / len(T) * np.log10(np.abs(ST[1:N_final])))
plt.xlabel('frequency in days')
plt.ylabel('Power')
plt.title('$T_1$ spectra')
rx = (1. / N) * correlate(T1, T1, mode = 'same')
plt.plot(fftshift(rx)[0:N//2])
print(a[0:25])
f0_yr = f[52]
f0_d = f[19012]
print(f0_yr, f0_d)
# Error functions for sinusoidal regression
def err_f1(theta): # No frequency optimization
a_yr, a_d, phi_yr, phi_d = theta
syr = a_yr * np.sin(2*pi*f0_yr*t_days + phi_yr)
sd = a_d * np.sin(2*pi*f0_d*t_days + phi_d)
return T1 - syr - sd
res0 = least_squares(err_f1, (1, 1, 0, 0), method='lm', loss='linear', verbose=1)
a0_yr, a0_d, phi0_yr, phi0_d = res0.x
print('theta0:', res0.x)
print('Optimality:', res0.optimality)
print('status:', res0.status)
print('message:', res0.message)
print('success:', res0.success)
x_hat = a0_yr * np.sin(2*pi*f0_yr*t_days + phi0_yr) + a0_d * np.sin(2*pi*f0_d*t_days + phi0_d)
fig, axes = plt.subplots(3, 1)
ax0, ax1, ax2 = axes
ax0.plot(t, x_hat); ax0.plot(t, T1, alpha=0.5)
ax1.plot(t[0:100000], x_hat[0:100000]); ax1.plot(t[0:100000], T1[0:100000], alpha=0.5)
ax2.plot(t[0:1000], x_hat[0:1000]); ax2.plot(t[0:1000], T1[0:1000], alpha=0.5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Repeating the process is not useful
|
8,752
|
<ASSISTANT_TASK:>
Python Code:
# Import the libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Load the data
cols = ['buying','maint','doors','persons','lug_book','safety','class']
carset = pd.read_csv('carData.csv',names=cols)
carset.head()
carset.info()
# Implement the naive Bayes classifier
class naive_classifier(object):
    def __init__(self):
self.priors = {}
self.lh_probs = {}
self.train = None
def calc_prior_probs(self):
train = self.train
classes = train['class'].unique()
priors = {}
for class_ in classes:
priors[class_] = len(train[train['class'] == class_])/len(train)
self.priors = priors
def calc_likelihood_probs(self):
train = self.train
columns = carset.drop(['class'],axis=1).columns
classes = carset['class'].unique()
lh_probs = {}
for column in columns:
for class_ in classes:
for cat in carset[column].unique():
cat_prior_prob = sum(train[column]==cat)/len(train)
conditional_prob = sum((train['class']==class_) & (train[column]==cat))/sum(train['class']==class_)
if not conditional_prob: conditional_prob = 0.001
lh_probs[column,cat,class_] = conditional_prob/cat_prior_prob
self.lh_probs = lh_probs
def fit(self,train):
self.train = train
self.calc_prior_probs()
self.calc_likelihood_probs()
def predict(self,xtest):
columns = self.train.drop(['class'],axis=1).columns
classes = self.train['class'].unique()
predictions = []
for i in xtest.index:
posterior_prob = {}
x = xtest.loc[i]
for class_ in classes:
posterior_prob[class_] = 1
for column in columns:
#cat_prior = sum(self.train[column]==x[column])
posterior_prob[class_] *= self.lh_probs[column,x[column],class_]
posterior_prob[class_] *= self.priors[class_]
predictions.append(max(posterior_prob,key=posterior_prob.get))
return predictions
# Instantiate a classifier and make predictions on the test data
msk = np.random.rand(len(carset))<=0.8
xtrain = carset[msk]
xtest = carset[~msk]
ytrain = xtrain['class']
ytest = xtest['class']
naive = naive_classifier()
naive.fit(xtrain)
pred = naive.predict(xtest)
acc_my_nb = np.mean(pred==ytest)*100
from sklearn.metrics import classification_report
my_nv_report = classification_report(ytest,pred)
# Transform the categorical features into numeric ones
from sklearn.naive_bayes import GaussianNB
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
le.fit(xtrain['class'])
ytrain = le.transform(xtrain['class'])
ytest = le.transform(xtest['class'])
xtrain =pd.get_dummies(xtrain.drop(['class'],axis=1))
xtest = pd.get_dummies(xtest.drop(['class'],axis=1))
xtrain.head()
# Apply the Naive Bayes classifier (sklearn)
nb = GaussianNB()
nb.fit(xtrain,ytrain)
pred = nb.predict(xtest)
acc_sk_nb = np.mean(pred == ytest)*100
print('Classifier accuracy (from scratch):', acc_my_nb)
print('Classifier accuracy (sklearn version):', acc_sk_nb)
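# Optional extra diagnostic (a quick sketch, using the encoded labels above):
from sklearn.metrics import confusion_matrix
print(confusion_matrix(ytest, pred))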
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: No missing values. Amen!
Step2: Question 2
Step3: Question 3
|
8,753
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import string
from collections import defaultdict
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
matplotlib.style.use('ggplot')
df = pd.read_csv('data/train_data2.csv', encoding='latin-1')
print(len(df))
df.head()
df['Released'] = pd.to_datetime(df['Released'])
df['Year'] = pd.DatetimeIndex(df['Released']).year
df['Month'] = pd.DatetimeIndex(df['Released']).month
df.head()
df['Year'].describe().astype(int)
# dictionary - year counts
yr_dict = df['Year'].value_counts().to_dict()
import operator
yr_lst = sorted(yr_dict.items(), key=operator.itemgetter(0)) # sort by year
yr_lst = yr_lst[::-1]
#print(yr_lst)
plt.figure(figsize=(25,10))
ind = np.arange(len(yr_dict))
width = 0.35
bar_year = [year for year, count in yr_lst]
bar_count = [count for year, count in yr_lst]
plt.bar(ind, bar_count, width, color='r')
plt.ylabel('Count')
plt.xlabel('Year')
plt.title('Number of Torrents per Year')
plt.xticks(ind + width/2., (bar_year), rotation='vertical')
plt.yticks(np.arange(0, 91, 5))
plt.show()
# cut off at year
before = len(df)
yr_cut_bot = 1998
yr_cut_top = 2015
mask = (df['Year'] >= yr_cut_bot) & (df['Year'] < yr_cut_top)
df_yr = df.loc[mask]
df_yr.sort_values('Year').head()
after = len(df_yr)
print('{0} entries lost ({1}%) due to date cutoff between {2} and {3}'.format(before-after,
      round((before-after)/before * 100, 2), yr_cut_bot, yr_cut_top))
# look at current data set AFTER year cutoff
plt.rcParams['figure.figsize'] = (15, 15)
_ = pd.tools.plotting.scatter_matrix(df_yr)
# unique list of grouped genres as strings
unq_genres = df_yr['Genre'].unique()
unq_genres = unq_genres.tolist()
#print(len(unq_genres))
#print(unq_genres[:10])
# unique list of grouped genres as lists
lst_grp_genres = []
for lst in unq_genres:
temp = []
for genre in lst.split(','):
temp.append(genre)
lst_grp_genres.append(temp)
#print(len(lst_grp_genres))
#print(lst_grp_genres)
# unique list of individual genres
ind_genre = set()
for lst in unq_genres:
for genre in lst.split(','):
ind_genre.add(genre.strip())
ind_genre = sorted(ind_genre)
#print(len(ind_genre))
#print(ind_genre)
# dictionary - count of genre occurences
count = defaultdict(lambda:0)
for genre in ind_genre:
count[genre] = df_yr.Genre.str.contains(genre).sum()
import operator
srt = sorted(count.items(), key=operator.itemgetter(1))
srt = srt[::-1]
#print(srt)
def split_to_array(ser):
split_array = np.array(ser.strip().replace(',','').split(' '))
return pd.Series(split_array)
genres = df_yr.Genre.apply(split_to_array)
genres = pd.Series(genres.values.ravel()).dropna()
genres = genres.value_counts().sort_values(ascending=False)
def convert_frequency(ser, genres=genres):
split_array = np.array(ser.strip().replace(',','').split(' '))
genre = genres.loc[split_array].argmax()
return genre
df_yr['Genre_Single'] = df_yr.Genre.apply(convert_frequency)
# select only genres of significance
genre = ['Action', 'Adventure', 'Comedy', 'Drama']
df_sub = df_yr.loc[df_yr['Genre_Single'].isin(genre)]
# select only genres of significance
ratings = ['PG-13', 'PG', 'G', 'R']
df_sub = df_sub.loc[df_sub['Rated'].isin(ratings)]
#df_sub['Runtime'].value_counts()
#df_sub['Genre_Single'].value_counts()
#df_sub['Rated'].value_counts()
df_sub.describe()
# entire dataframe
plt.rcParams['figure.figsize'] = (15, 15)
_ = pd.tools.plotting.scatter_matrix(df_sub)
from patsy import dmatrices
patsy_formula = 'Total_Torrents ~ Prod_Budget + Year + Genre_Single'
y, x = dmatrices(patsy_formula, data=df_sub, return_type='dataframe')
import statsmodels.api as sm
model = sm.OLS(y, x)
results = model.fit()
results.summary()
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(x, y)
mod_lr_score = model.score(x, y)
mod_lr_coef = model.coef_
from sklearn import cross_validation as cv
from sklearn import metrics
x_train, x_test, y_train, y_test = cv.train_test_split(x,y,test_size=0.20,random_state=1234)
model = LinearRegression().fit(x_train, y_train)
# store results
mean_sq_err = metrics.mean_squared_error(y_train,model.predict(x_train))
cv_mod_score = model.score(x_train, y_train)
# reset x, y otherwise errors occur
y, x = dmatrices(patsy_formula, data=df_sub, return_type='dataframe')
from sklearn.cross_validation import KFold
kf = KFold(len(df_sub), n_folds=10, shuffle=True)
for train_index, test_index in kf:
x_train, x_test = x.iloc[train_index], x.iloc[test_index]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
clf2 = LinearRegression().fit(x.iloc[train_index], y.iloc[train_index])
# store results
mean_sq_errKf = metrics.mean_squared_error(y_train,model.predict(x_train))
cvKf_mod_score = clf2.score(x,y)
#NORMAL RESULTS
print('Model Linear Regression Score = {0}'.format(mod_lr_score))
print(' Mean Square Error = {0}'.format(mean_sq_err))
print(' Cross Validation Model Score = {0}'.format(cv_mod_score))
print(' Mean Squred Error K-Fold = {0}'.format(mean_sq_errKf))
print('Cross Val. K-Fold Model Score = {0}'.format(cvKf_mod_score))
_ = plt.plot(y, model.predict(x), 'ro')
# entire dataframe
plt.rcParams['figure.figsize'] = (15, 15)
_ = pd.tools.plotting.scatter_matrix(df_sub)
df.columns
df_sub['log_budg']=np.log(df_sub.Prod_Budget)
#df_sub['log_year']=np.log(df_sub.Year)
#df_sub['log_run']=np.log(df_sub.Runtime)
df_sub['log_tor']=np.log(df_sub.Total_Torrents)
trans = df_sub[['log_budg', 'Year', 'log_tor']]
plt.rcParams['figure.figsize'] = (15, 15)
_ = pd.tools.plotting.scatter_matrix(trans)
log_patsy_formula = 'log_tor ~ log_budg + Year + Genre_Single'
y, x = dmatrices(log_patsy_formula, data=df_sub, return_type='dataframe')
import statsmodels.formula.api as smf
results = smf.ols(formula=log_patsy_formula, data=df_sub,).fit()
results.summary()
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(x, y)
# store results
log_mod_lr_score = model.score(x,y)
from sklearn import cross_validation as cv
from sklearn import metrics
x_train, x_test, y_train, y_test = cv.train_test_split(x,y,test_size=0.20,random_state=1234)
model = LinearRegression().fit(x_train, y_train)
# store results
log_mean_sq_err = metrics.mean_squared_error(y_train,model.predict(x_train))
log_cv_mod_score = model.score(x_train, y_train)
# reset x, y otherwise errors occur
y, x = dmatrices(log_patsy_formula, data=df_sub, return_type='dataframe')
from sklearn.cross_validation import KFold
kf = KFold(len(df_sub), n_folds=10, shuffle=True)
for train_index, test_index in kf:
x_train, x_test = x.iloc[train_index], x.iloc[test_index]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
clf2 = LinearRegression().fit(x.iloc[train_index], y.iloc[train_index])
# store results
log_mean_sq_errKf = metrics.mean_squared_error(y_train,model.predict(x_train))
log_cvKf_mod_score = clf2.score(x,y)
#LOG RESULTS
print('Log Model Linear Regression Score = {0}'.format(log_mod_lr_score))
print(' Log Mean Square Error = {0}'.format(log_mean_sq_err))
print(' Log Cross Validation Model Score = {0}'.format(log_cv_mod_score))
print(' Log Mean Squred Error K-Fold = {0}'.format(log_mean_sq_errKf))
print('Log Cross Val. K-Fold Model Score = {0}'.format(log_cvKf_mod_score))
df_TEST = pd.read_csv('data/test_data2.csv', encoding='latin-1')
df_TEST['log_budg']=np.log(df_TEST.Prod_Budget)
df_TEST['log_run']=np.log(df_TEST.Runtime)
df_TEST['log_tor']=np.log(df_TEST.Total_Torrents)
def split_to_array(ser):
split_array = np.array(ser.strip().replace(',','').split(' '))
return pd.Series(split_array)
genres = df_yr.Genre.apply(split_to_array)
genres = pd.Series(genres.values.ravel()).dropna()
genres = genres.value_counts().sort_values(ascending=False)
def convert_frequency(ser, genres=genres):
split_array = np.array(ser.strip().replace(',','').split(' '))
genre = genres.loc[split_array].argmax()
return genre
df_TEST['Genre_Single'] = df_TEST.Genre.apply(convert_frequency)
log_patsy_formula_test = 'log_tor ~ log_budg + Year + Month + Genre_Single'
y, x = dmatrices(log_patsy_formula_test, data=df_TEST, return_type='dataframe')
print(clf2.score(x_test, y_test))
print(metrics.mean_squared_error(y_test,model.predict(x_test)))
#_ = plt.plot(y, model.predict(x), 'ro')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in TRAIN data set and select pertinent columns
Step2: Convert dates to datetime objects
Step3: Inspect years
Step4: df => df_yr
Step5: Select only significant values from dataframe
Step6: Log Transform
|
8,754
|
<ASSISTANT_TASK:>
Python Code:
from IPython.core.display import Image
Image(filename='images/bn.png')
#%matplotlib notebook
%matplotlib inline
from matplotlib.widgets import Button
import matplotlib.pyplot as plt
import numpy as np
import math
fig = plt.figure(figsize=(9, 3))
ax1 = fig.add_subplot(1,2,1)
ax1.set_title('no-BN')
ax2 = fig.add_subplot(1,2,2)
ax2.set_title('BN')
@np.vectorize
def sigmoid_func(x):
return 1/(1 + math.exp(-x))
# ------------------ parameeters of BN -----------------------------
W = np.mat([[-2.0, 0.4],[-0.24, -1]])
gamma = 1.2
beta = 0.2
# -------------------traditional ------------------------------------
mean = np.array([2,1])
Sigma = np.mat([[1.5,-0.5],[-0.5,1.2]])
N = 500
np.random.seed(356)
X = np.random.beta(5,2,[N,2])
#print X[1,:]
Z_hat = np.transpose(W * np.transpose(X))
Z_no_bn = Z_hat
a_no_bn = sigmoid_func(Z_no_bn)
# ------------------------ BN ----------------------------------
mu = np.array(np.mean(Z_hat,axis=0)).ravel()
varss = np.array(np.var(Z_hat,axis=0)).ravel()
sigmas = np.sqrt(varss)
#print Z_hat[1,:]
#print mu, sigmas
epsilon = 0.001
mean_subtracted = Z_hat - np.reshape(np.repeat(mu,Z_hat.shape[0]),[Z_hat.shape[0],-1],1)
Z_widehat = np.column_stack((mean_subtracted[:,0]/sigmas[0], mean_subtracted[:,1]/sigmas[1]))
Z_bn = Z_widehat * gamma + beta
a_bn = sigmoid_func(Z_bn)
plot_data = {'X': X, 'Z_hat': Z_hat, 'Z_no_bn': Z_no_bn, 'a_no_bn': a_no_bn, 'Z_widehat': Z_widehat, 'Z_bn': Z_bn, 'a_bn': a_bn }
class BN_plot:
def __init__(self,indices, plot_data):
self.indices = indices
self.cid = fig.canvas.mpl_connect('button_press_event', self)
self.plot_data = plot_data
def __call__(self, event):
ax = event.inaxes
print('click', ax)
if ax.get_title() == 'no-BN':
if self.indices[0] == 1:
X = self.plot_data['X']
h11 = ax.plot(X[:,0], X[:,1],'.', color='C1')
ax.legend(['$X$'])
if self.indices[0] == 2:
Z_no_bn = self.plot_data['Z_no_bn']
h12 = ax.plot(Z_no_bn[:,0], Z_no_bn[:,1],'.',color='C2')
ax.legend(['$X$','$Z = W * X$'])
if self.indices[0] == 3:
a_no_bn = self.plot_data['a_no_bn']
ax.plot(a_no_bn[:,0], a_no_bn[:,1],'.',color='C5')
ax.legend(['$X$','$Z = W * X$','$a = \sigma(Z)$'])
self.indices[0] = self.indices[0] + 1
if ax.get_title() == 'BN':
if self.indices[1] == 1:
X = self.plot_data['X']
ax.plot(X[:,0], X[:,1],'.',color='C1')
ax.legend(['$X$'])
if self.indices[1] == 2:
Z_hat = self.plot_data['Z_hat']
ax.plot(Z_hat[:,0], Z_hat[:,1],'.',color='C2')
ax.legend(['$X$','$\hat{Z} = W * X$','$a = \sigma(Z)$'])
# if self.indices[1] == 3:
# Z_widehat = self.plot_data['Z_widehat']
# ax.plot(Z_widehat[:,0], Z_widehat[:,1],'.',color='C3')
if self.indices[1] == 3:
Z_bn = self.plot_data['Z_bn']
ax.plot(Z_bn[:,0], Z_bn[:,1],'.',color='C4')
ax.legend(['$X$','$\hat{Z} = W * X$','$Z = BN(\hat{Z})$'])
if self.indices[1] == 4:
a_bn = self.plot_data['a_bn']
ax.plot(a_bn[:,0], a_bn[:,1],'.',color='C5')
ax.legend(['$X$','$\hat{Z} = W * X$','$Z = BN(\hat{Z})$', '$a = \sigma(Z)$'])
#ax.text(0,0, str(self.indices[1]) + ax.get_title(), va="bottom", ha="left")
self.indices[1] = self.indices[1] + 1
text1=ax1.text(0,0, "", va="bottom", ha="left")
text2=ax2.text(0,0, "", va="bottom", ha="left")
indices = [1,1]
linebuilder = BN_plot(indices,plot_data)
plt.show()
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
%matplotlib inline
mean = [1,2]
Sigma = np.mat([[2,1],[1,1]])
N = 500
data = np.random.multivariate_normal(mean,Sigma,N)
plt.plot(data[:,0], data[:,1],'.')
#print data
delta = 0.025
Xmesh, Ymesh = np.meshgrid(np.arange(mean[0]-3, mean[0]+3, delta), np.arange(mean[1]-3, mean[1]+3, delta) )
pos = np.empty(Xmesh.shape + (2,))
pos[:, :, 0] = Xmesh
pos[:, :, 1] = Ymesh
from scipy.stats import multivariate_normal
rv = multivariate_normal(mean, Sigma)
plt.contour(Xmesh, Ymesh, rv.pdf(pos))
plt.show
from scipy import linalg
s_mean = np.mean(data,axis=0)
s_Sigma = np.cov(np.transpose(data))
# mean subtracted array
mean_array = np.reshape(np.repeat(s_mean,data.shape[0]),[data.shape[0],-1],1)
inv_Sigma = np.mat(linalg.fractional_matrix_power(s_Sigma, -0.5))
print type(inv_Sigma)
data_prime = inv_Sigma * np.transpose(data - mean_array)
data_prime = np.transpose(data_prime)
# note that plt.plot treats horizontal and vertical data very differently
plt.plot(data_prime[:,0], data_prime[:,1],'.')
delta = 0.025
Xmesh, Ymesh = np.meshgrid(np.arange(-3.0, 3.0, delta), np.arange(-3.0, 3.0, delta) )
pos = np.empty(Xmesh.shape + (2,))
pos[:, :, 0] = Xmesh
pos[:, :, 1] = Ymesh
from scipy.stats import multivariate_normal
rv = multivariate_normal(np.array(np.mean(data_prime,axis=0)).ravel(), np.cov(np.transpose(data_prime)))
plt.contour(Xmesh, Ymesh, rv.pdf(pos))
plt.show
import numpy as np
import matplotlib.pyplot as plt
data_prime = np.array(data_prime)
@np.vectorize
def objective(w0, w1):
return sum((data_prime[:,0] * np.repeat(w0,N))**2 + (data_prime[:,1] * np.repeat(w1,N))**2)
delta = 0.025
W0mesh, W1mesh = np.meshgrid(np.arange(-3.0, 3.0, delta), np.arange(-2.0, 2.0, delta))
Z = objective(W0mesh, W1mesh)
plt.figure()
plt.contour(W0mesh, W1mesh, Z)
plt.title('symmetric quadratic error surface')
plt.axes().set_aspect('equal', 'datalim')
import numpy as np
import matplotlib.pyplot as plt
const = 3;
weight_matrix = np.mat([[const, 0],[0, const]])
@np.vectorize
def objective(w0, w1):
return np.mat([w0,w1]) * weight_matrix * np.mat([[w0], [w1]])
def derivative(w0, w1):
return 2 * np.array(np.mat([w0,w1]) * weight_matrix).ravel()
delta = 0.025
W0mesh, W1mesh = np.meshgrid(np.arange(-3.0, 3.0, delta), np.arange(-2.0, 2.0, delta))
Z = objective(W0mesh, W1mesh)
plt.figure()
plt.contour(W0mesh, W1mesh, Z)
MAX_ITER = 20
pt = [-3,1.5]
learning_rate = 0.1
ax = plt.axes()
for t in range(MAX_ITER):
pt_temp = pt - learning_rate * derivative(pt[0], pt[1])
ax.arrow(pt[0], pt[1], pt_temp[0] - pt[0], pt_temp[1] - pt[1], head_width=0.1, head_length=0.1)
pt = pt_temp
plt.title('symmetric quadratic error surface')
plt.axes().set_aspect('equal', 'datalim')
import numpy as np
import matplotlib.pyplot as plt
const = 3;
weight_matrix = np.mat([[const/2, -1],[-1, const*2]])
@np.vectorize
def objective(w0, w1):
return np.mat([w0,w1]) * weight_matrix * np.mat([[w0], [w1]])
def derivative(w0, w1):
return 2 * np.array(np.mat([w0,w1]) * weight_matrix).ravel()
delta = 0.025
W0mesh, W1mesh = np.meshgrid(np.arange(-3.0, 3.0, delta), np.arange(-2.0, 2.0, delta))
Z = objective(W0mesh, W1mesh)
plt.figure()
plt.contour(W0mesh, W1mesh, Z,11)
MAX_ITER = 20
pt = [-3,1.5]
learning_rate = 0.1
ax = plt.axes()
for t in range(MAX_ITER):
pt_temp = pt - learning_rate * derivative(pt[0], pt[1])
ax.arrow(pt[0], pt[1], pt_temp[0] - pt[0], pt_temp[1] - pt[1], head_width=0.1, head_length=0.1)
pt = pt_temp
plt.title('unsymmetric quadratic error surface')
plt.axes().set_aspect('equal', 'datalim')
import math
import numpy as np
import matplotlib.pyplot as plt
A = 2
B = 1.5
t = 0.5
k = 1
x_0 = A*math.cos(t);
y_0 = B*math.sin(t);
P = [x_0,y_0]
# [df/dx, df/dy]
d_func = [2*P[0]/A**2,2*P[1]/B**2]
# dR/dt = [dx/dt, dy/dt]
d_R = [-A*math.sin(t), B*math.cos(t)]
delta = 0.025
X, Y = np.meshgrid(np.arange(-3.0, 3.0, delta), np.arange(-2.0, 2.0, delta))
# this is to plot an implicit function
l_length = 0.8
plt.contour(X, Y, X**2/(A**2) + Y**2/(B**2), [k])
plt.text(P[0],P[1], "P")
# plot the gradient of the contour
plt.plot([P[0] - d_R[0] * l_length, P[0] + d_R[0] * l_length ],
[P[1] - d_R[1] * l_length, P[1] + d_R[1] * l_length ])
# plot the gradient of steepest descent
plt.plot([P[0] - d_func[0] * l_length, P[0] + d_func[0] * l_length ],
[P[1] - d_func[1] * l_length, P[1] + d_func[1] * l_length ])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: rewrite it by adding inputs from previous layer
Step2: Motivation
Step3: Data whitening means that you need to transform each dimension of data to be distributed as standard Gaussian
Step4: Batch normalization is not a "true data whitening"
Step5: We know that if we have an error function has square contour, the gradient descent is going to be much faster.
Step6: unsymmetric quadratic error surface 非对称二次函数
Step7: 4. Extra stuff
|
8,755
|
<ASSISTANT_TASK:>
Python Code:
list1 = [["a", "b", "c"], [1, 2, 3]]
# print tuple(["a", "b", "c"])
# print tuple([1, 2, 3])
map(tuple, list1)
for item in list1:
tuple(item)
table1 = [["a", "b", "c"], [1, 2, 3]]
table2 = [["a", "b", "c"], [1, 2, 3]]
table3 = [["a", "b", "c"], [1, 2, 4]]
table4 = [[1, 2, 3], ["a", "b", "c"]]
# list.sort()
# sorted(list)
def is_equal(t1, t2):
t1.sort()
t2.sort()
return t1 == t2
# Create a set
one_set = set()
# Constructor takes a single argument that is an iterable
another_set = set('aaabcc')
print(another_set)
# Turn this code into a function
def what_to_wear(temperature):
output = ""
if (temperature < 15):
output = "Wear a coat."
return output
what_to_wear(8)
def get_hourly_rate():
print ("get_hourly_rate")
def get_hours_worked():
print ("get_hours_worked")
def get_input():
print ("get_input")
get_hours_worked()
get_hourly_rate()
def main():
print("main")
get_input()
main()
import random
# The Coin class simulates a coin that can be flipped
class Coin:
# The __init__ methods is the constructor
def __init__(self):
self.sideup = 'Heads'
def toss(self):
flip = random.randint(0, 1)
if flip == 0:
self.sideup = "Heads"
else:
self.sideup = "Tails"
def get_sideup(self):
return self.sideup
def main():
my_coin = Coin()
print("This side is up: " + my_coin.get_sideup())
print("I am tossing the coin...")
my_coin.toss()
print("This side is up: " + my_coin.get_sideup())
main()
for _ in range(5):
print random.randint(0, 1)
with open("jabberwocky.txt", "r") as file_reader:
# insert file operation here
file_contents = file_reader.read()
print file_contents
file_name = "jabberwocky.txt"
try:
file_reader = open(file_name)
# file read operation goes here
except(OSError, IOError):
print("Error reading file: " + file_name)
else:
s = file_reader.read()
file_reader.close()
file_name = "jabberwocky.txt"
# All at once into a string
with open(file_name) as file_reader:
file_contents = file_reader.read()
# All at once into a list
with open(file_name) as file_reader:
file_contents = file_reader.readlines()
# All lines, one at a time
with open(file_name) as file_reader:
for line in file_reader:
print (line),
# One line at a time
with open(file_name) as file_reader:
line = file_reader.readline()
print(line)
line= file_reader.readline()
print(line)
line = file_reader.readline()
print(line)
# By groups of characters
with open(file_name) as file_reader:
input_group = file_reader.read(10) # Read 10 characters
print(input_group)
input_group = file_reader.read(10) # Read the next 10 characters
print(input_group)
# Open a file called filename and read line by line until EOF is reached
file_name = "missive.txt"
with open(file_name, 'w') as output_file:
output_file.write("Dear Uncle Marty,\n")
output_file.write("Thanks for the nifty present.\n")
output_file.write("Love,\nYour Niece\n")
output_file = open(file_name, 'w')
output_file.write("Dear Uncle Marty,\n")
output_file.write("Thanks for the nifty present.\n")
output_file.write("Love,\nYour Niece\n")
kb_hit = raw_input("waiting...")
output_file.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sets
Step2: Problem
Step3: Designing a Program to Use Functions
Steps in Top-Down Design
Step4: Example
Step5: File Input and Output
Step6: with is a Python keyword that automatically calls certain functions before and after the statement
Step7: Reading from a file
Step8: Why is the output double spaced?
Step9: When would you use this instead of a for loop?
Step10: Advantages and Disadvantages
Step11: Writing to a file
Step12: Until you close, Python may save up the text for later output.
|
8,756
|
<ASSISTANT_TASK:>
Python Code:
# Sign up for a free account at Genius.com to access the API
# http://genius.com/api-clients
client_access_token = 'CLIENT_ACCESS_TOKEN'
# Let's take a look at how we might search for an artist using the Genius API.
import requests
import urllib2
# Format a request URL for the Genius API
search_term = 'Andy Shauf'
_URL_API = "https://api.genius.com/"
_URL_SEARCH = "search?q="
querystring = _URL_API + _URL_SEARCH + urllib2.quote(search_term)
request = urllib2.Request(querystring)
request.add_header("Authorization", "Bearer " + client_access_token)
# request.add_header("User-Agent","curl/7.9.8 (i686-pc-linux-gnu) libcurl 7.9.8 (OpnSSL 0.9.6b) (ipv6 enabled)")
request.add_header("User-Agent", "")
# Now that we’ve formatted the URL, we can make a request to the database.
import json
response = urllib2.urlopen(request, timeout=3)
raw = response.read()
json_obj = json.loads(raw)
# The JSON object is just a normal python dictionary
json_obj.viewkeys()
# The 'hits` key stores info on each song in the search result.
# From here it's easy to grab the song title, album, etc.
# List each key contained within a single search hit
[key for key in json_obj['response']['hits'][0]['result']]
# View the song name for each search hit
[song['result']['title'] for song in json_obj['response']['hits']]
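# The same JSON structure also exposes the primary artist for each hit
# (the primary_artist key is used again below):
print([song['result']['primary_artist']['name'] for song in json_obj['response']['hits']])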
# URL to artist image
print(json_obj['response']['hits'][0]['result']['primary_artist']['image_url'])
# If you have an artist or song ID, you can access that entry
# directly by reformatting the request URL.
song_id = 82926
querystring = "https://api.genius.com/songs/" + str(song_id)
request = urllib2.Request(querystring)
request.add_header("Authorization", "Bearer " + client_access_token)
request.add_header("User-Agent", "")
response = urllib2.urlopen(request, timeout=3)
raw = response.read()
json_obj = json.loads(raw)
print((json_obj['response']['song']['title'],\
json_obj['response']['song']['primary_artist']['name']))
from bs4 import BeautifulSoup
import re
URL = 'https://genius.com/Andy-shauf-the-magician-lyrics'
page = requests.get(URL)
html = BeautifulSoup(page.text, "html.parser") # Extract the page's HTML as a string
# Scrape the song lyrics from the HTML
lyrics = html.find("div", class_="lyrics").get_text().encode('ascii','ignore')
# lyrics = re.sub('\[.*\]','',lyrics) # Remove [Verse] and [Bridge] stuff
# lyrics = re.sub('\n{2}','',lyrics) # Remove gaps between verses
# lyrics = str(lyrics).strip('\n')
print(lyrics[:150]+'...')
# Create an instance of the API interface
import genius
api = genius.Genius()
# Search for an artist
artist = api.search_artist('Andy Shauf', max_songs=5)
print(artist)
# Search for a specific song
song = api.search_song('Wendell Walker', artist.name)
artist.add_song(song)
print(artist)
print(artist.songs[0])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: (image)
Step2: Scrape song lyrics
Step3: Python wrapper
|
8,757
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.filter(qualifier='l3_mode')
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
print(b.filter(qualifier='l3*'))
print(b.filter(qualifier='l3*'))
print(b.get_parameter('l3'))
print(b.compute_l3s())
b.set_value('l3_mode', 'fraction')
print(b.filter(qualifier='l3*'))
print(b.get_parameter('l3_frac'))
print(b.compute_l3s())
b.run_compute(irrad_method='none', model='no_third_light')
b.set_value('l3_mode', 'flux')
b.set_value('l3', 5)
b.run_compute(irrad_method='none', model='with_third_light')
afig, mplfig = b['lc01'].plot(model='no_third_light')
afig, mplfig = b['lc01'].plot(model='with_third_light', legend=True, show=True)
b.add_dataset('mesh', times=[0], dataset='mesh01', columns=['intensities@lc01', 'abs_intensities@lc01'])
b.set_value('l3', 0.0)
b.run_compute(irrad_method='none', model='no_third_light', overwrite=True)
b.set_value('l3', 5)
b.run_compute(irrad_method='none', model='with_third_light', overwrite=True)
print("no_third_light abs_intensities: ", np.nanmean(b.get_value(qualifier='abs_intensities', component='primary', dataset='lc01', model='no_third_light')))
print("with_third_light abs_intensities: ", np.nanmean(b.get_value(qualifier='abs_intensities', component='primary', dataset='lc01', model='with_third_light')))
print("no_third_light intensities: ", np.nanmean(b.get_value(qualifier='intensities', component='primary', dataset='lc01', model='no_third_light')))
print("with_third_light intensities: ", np.nanmean(b.get_value(qualifier='intensities', component='primary', dataset='lc01', model='with_third_light')))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Relevant Parameters
Step3: So let's add a LC dataset
Step4: We now see that the LC dataset created an 'l3_mode' parameter, and since l3_mode is set to 'flux' the 'l3' parameter is also visible.
Step5: l3_mode = 'flux'
Step6: To compute the fractional third light from the provided value in flux units, call b.compute_l3s. This assumes that the flux of the system is the sum of the extrinsic passband luminosities (see the pblum tutorial for more details on intrinsic vs extrinsic passband luminosities) divided by $4\pi$ at t0@system, and according to the compute options.
Step7: l3_mode = 'fraction'
Step8: Similarly to above, we can convert to actual flux units (under the same assumptions), by calling b.compute_l3s.
Step9: Influence on Light Curves (Fluxes)
Step10: As expected, adding 5 W/m^2 of third light simply shifts the light curve up by that exact same amount.
Step11: Influence on Meshes (Intensities)
|
8,758
|
<ASSISTANT_TASK:>
Python Code:
# namedtuple example
from collections import namedtuple
Point = namedtuple('Point', ['x', 'y'])
p = Point(1, 2)
print(p.x, p.y)
print(type(p))
i = p.x + p.y
print(i)
# namedtuple example
from collections import namedtuple
Web = namedtuple('web', ['name', 'type', 'url'])
p1 = Web('google', 'search', 'www.google.com')
p2 = Web('sina', 'portal', 'www.sina.com.cn')
print(p1)
print(p1.name, p1.url)
print(p1.url, p2.url)
# Iterate over a namedtuple
for i in p2:
print(i)
# A more complex demo: a list of namedtuples
from collections import namedtuple
Web = namedtuple('web', ['name', 'type', 'url'])
p = []
p.append(Web('google', 'search', 'www.google.com'))
p.append(Web('sina', 'portal', 'www.sina.com.cn'))
print(p)
for i in p:
print(i.name)
# Show the namedtuple field names
print(Web._fields)
# deque example
from collections import deque
q = deque(['a', 'b', 'c'])
q.append('x')
q.appendleft('y')
print(q)
# Compare the speed of list vs deque
from collections import deque
import time
# list
q0 = [x*x for x in range(10000000)]
# list
a = time.time()
q0.insert(0,88888)
b = time.time()
print(b-a)
# Build a deque from the list
q1= deque(q0)
# deque
a = time.time()
q1.appendleft(88888)
b = time.time()
print('%2.16f' % (b-a))
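# appendleft on a deque is O(1), while list.insert(0, ...) is O(n),
# which is why the deque version above is dramatically faster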
from collections import deque
l = ['a','b','c','d']
l = deque(l)
print(l)
# deque rotation
l = ['a','b','c','d','e']
l = deque(l)
l.rotate(2)
print(l)
l.rotate(-2)
print(l)
# deque pop() also distinguishes between head and tail
l = deque(['a','b','c'])
l.pop()
print(l)
# Remove the leftmost element
l = deque(['a','b','c'])
l.popleft()
print(l)
# Standard dict usage
i = {'name':'David'}
print(i['name'])
# A missing key raises a KeyError
i = {'name':'David'}
print(i['score'])
# defaultdict example
from collections import defaultdict
d = defaultdict(lambda: 100)
d['name']='David'
print(d['name'])
# defaultdict returns the default value instead of raising an error
print(d['score'])
print(d['best_score'])
from collections import defaultdict
d = defaultdict(lambda: '100')
d['name']='David'
print(d['name'])
print(d['score'])
# A plain dict is unordered (note: dicts preserve insertion order since Python 3.7)
d = dict([('a', 1), ('b', 2), ('c', 3)])
print(d)
# Append a key-value pair to a plain dict
d = dict([('a', 1), ('b', 2), ('c', 3)])
print(d)
d['d'] = 4
print(d)
# Using OrderedDict
from collections import OrderedDict
d = OrderedDict()
d['a'] = 1
d['b'] = 2
d['c'] = 3
print(d)
# Using OrderedDict, append a key-value pair
# OrderedDict keys keep their insertion order; they are not sorted by key
from collections import OrderedDict
d = OrderedDict()
d['a'] = 1
d['b'] = 2
d['c'] = 3
print(d)
d['d'] = 4
print(d)
# OrderedDict can implement a FIFO (first-in, first-out) dict: when capacity is exceeded, the earliest-added key is removed first:
from collections import OrderedDict
class LastUpdatedOrderedDict(OrderedDict):
def __init__(self, capacity):
super(LastUpdatedOrderedDict, self).__init__()
self._capacity = capacity
def __setitem__(self, key, value):
containsKey = 1 if key in self else 0
if len(self) - containsKey >= self._capacity:
last = self.popitem(last=False)
print('remove:', last)
if containsKey:
del self[key]
print('set:', (key, value))
else:
print('add:', (key, value))
OrderedDict.__setitem__(self, key, value)
d = LastUpdatedOrderedDict(4)
d['a'] = 1
d['b'] = 2
d['c'] = 3
print(d)
d['d'] = 4
d['e'] = 5
d['f'] = 6
print(d)
# A simplified FIFO dict
from collections import OrderedDict
d = OrderedDict()
d['a'] = 1
d['b'] = 2
d['c'] = 3
print(d)
# Four parameters: the original ordered dict, the capacity limit,
# the key to insert, and the value to insert
def update_ordereddict(ordered_dict, len_limit, key, value):
if len(ordered_dict) < len_limit:
ordered_dict[key]=value
return ordered_dict
else:
ordered_dict.popitem(last=False)
ordered_dict[key]=value
return ordered_dict
# Insert a new key-value pair
update_ordereddict(d, 3, 'new_key', 4)
# The Counter class tracks how many times each value occurs, stored as dict-style
# key-value pairs where the element is the key and its count is the value
# The example below uses Counter to count every character in a sentence
from collections import Counter
s = 'A Counter is a dict subclass. '.lower()
c = Counter(s)
# Get the 5 most frequent characters
print(c.most_common(5))
# Does it also work for Chinese characters?
s = '他会自己长大远去我们也各自远去'
c = Counter(s)
print(c.most_common(5))
# Counter example: count characters entered by the user
from collections import Counter
s = input('Please input:')
s = s.lower()
c = Counter(s)
# Get the 5 most frequent characters
print(c.most_common(5))
# Read the keys and values out of the Counter object
s = 'A Counter is a dict subclass. '.lower()
c = Counter(s)
# Use a list comprehension to get the keys in the Counter
list1 = [k for k,v in c.most_common()]
# Use a list comprehension to get the values in the Counter
list2 = [v for k,v in c.most_common()]
print(list1)
print(list2)
# Draw a quick bar chart in the notebook
import matplotlib.pyplot as plt
plt.bar(range(len(list2)), list2,color='rgb',tick_label=list1)
plt.show()
# Palindrome number
# Wrapped in a class, following the LeetCode convention
class Solution:
def isPalindrome(self, x: int) -> bool:
x = str(x)
return x == x[::-1]
Solution().isPalindrome(x=121)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: deque
Step2: defaultdict
Step3: OrderedDict
Step4: Counter
Step5: 思考一下
|
8,759
|
<ASSISTANT_TASK:>
Python Code:
import sys
print(sys.version)
import numpy as np # Import library and give it alias np
print(np.__version__) # The version I'm using
a = np.zeros(3) # Create an array of zeros
a # Print a
type(a)
a = np.zeros(3)
type(a[1])
z = np.zeros(10)
z.shape
z.shape = (10, 1)
z
z = np.zeros(4)
z.shape = (2, 2)
z
z = np.empty(3)
z
z = np.linspace(2, 4, 5) # From 2 to 4, with 5 elements
z
z = np.ones(3)
z
z = np.identity(2)
z
z = np.array([10, 20])
z
z = np.array((10, 20), dtype=float)
z
z = np.array([[1, 2], [3, 4]]) # 2D array from a list of lists
z
z = np.linspace(1, 2, 5)
z
z[0] # First element --- Python sequences are zero based, like C, Java, etc.
z[-1] # Special syntax for last element
z[0:2] # Meaning: Two elements, starting from element 0
z = np.array([[1, 2], [3, 4]])
z
z[0, 0]
z[0,:] # First row
z[:,0] # First column
z = np.linspace(2, 4, 5)
z
d = np.array([0, 1, 1, 0, 0], dtype=bool)
d
z[d]
A = np.array((4, 3, 2, 1))
A
A.sort()
A
A.mean()
A.sum()
A.max()
A.cumsum()
A.var()
A.shape = (2, 2)
A
A.T # Transpose, equivalent to A.transpose()
a = np.array([1, 2, 3, 4])
b = np.array([5, 6, 7, 8])
a + b
a - b
a + 10
a.shape = 2, 2
b.shape = 2, 2
a
b
a * b # Pointwise multiplication!!
np.dot(a, b) # Matrix multiplication
a @ b
z = np.array([2, 3])
y = np.array([2, 3])
z == y
y[0] = 3
z == y
z = np.linspace(0, 10, 5)
z
z > 3
z[z > 3] # Conditional extraction
import matplotlib.pyplot as plt # Import main functionality
%matplotlib inline
x = np.linspace(-2, 2, 100)
y = x**2
fig, ax = plt.subplots() # Create axes and figure window
ax.plot(x, y, 'b-')
y3 = x**3
fig, ax = plt.subplots() # Create axes and figure window
ax.plot(x, y, 'b-', lw=2, alpha=0.8, label='$x^2$')
ax.plot(x, y3, 'g-', lw=2, alpha=0.8, label='$x^3$')
ax.legend(loc='lower right')
from scipy.stats import beta
q = beta(5, 5) # Beta(a, b), with a = b = 5
obs = q.rvs(2000) # 2000 observations
fig, ax = plt.subplots()
ax.hist(obs, bins=40, density=True)
grid = np.linspace(0.01, 0.99, 100)
ax.plot(grid, q.pdf(grid), 'k-', linewidth=2)
type(q)
dir(q) # Let's see all its methods
q.cdf(0.5)
q.pdf(0.5)
q.mean()
from scipy.stats import linregress
n = 100
alpha, beta, sigma = 1, 2, 1.5
x = np.random.randn(n) # n standard normals
y = alpha + beta * x + sigma * np.random.randn(n)
beta_hat, alpha_hat, r_value, p_value, std_err = linregress(x, y)
print("gradient = {}".format(beta_hat))
print("intercept = {}".format(alpha_hat))
fig, ax = plt.subplots(figsize=(8, 5))
ax.plot(x, y, 'bo', alpha=0.6, label='observations')
xgrid = np.linspace(-3, 3, 2)
ax.plot(xgrid, alpha_hat + beta_hat * xgrid, 'k-', lw=2, alpha=0.8, label='best fit')
ax.grid()
ax.legend(loc='upper left')
fig, ax = plt.subplots()
def f(x):
return np.sin(4 * (x - 0.25)) + x + x**20 - 1
x = np.linspace(0, 1, 100)
ax.plot(x, f(x))
ax.plot(x, 0 * x)
from scipy.optimize import bisect # Bisection algorithm --- slow but robust
bisect(f, 0, 1)
from scipy.optimize import newton # Newton's method --- fast but less robust
newton(f, 0.2) # Start the search at initial condition x = 0.2
newton(f, 0.7) # Start the search at x = 0.7 instead
from scipy.optimize import brentq
brentq(f, 0, 1) # Hybrid method
%timeit bisect(f, 0, 1)
%timeit newton(f, 0.2)
%timeit brentq(f, 0, 1)
from scipy.optimize import fminbound
fminbound(lambda x: x**2, -1, 2) # Search in [-1, 2]
from scipy.integrate import quad
integral, error = quad(lambda x: x**2, 0, 1)
integral
import scipy.linalg as la
A = [[2, -1],
[3, 0]]
A = np.array(A) # Convert from list to NumPy array
b = np.ones((2, 1)) # Shape is 2 x 1
A
b
x = la.solve(A, b) # Solve for x in Ax = b
print(x)
np.dot(A, x)
la.inv(A)
np.dot(A, la.inv(A)) # Should be the identity
eigvals, eigvecs = la.eig(A)
print("eigenvalues = {}".format(eigvals))
print("first eigenvector = {}".format(eigvecs[:, 0]))
import pandas as pd
%%file test_data.csv
"country","country isocode","year","POP","XRAT","tcgdp","cc","cg"
"Argentina","ARG","2000","37335.653","0.9995","295072.21869","75.716805379","5.5788042896"
"Australia","AUS","2000","19053.186","1.72483","541804.6521","67.759025993","6.7200975332"
"India","IND","2000","1006300.297","44.9416","1728144.3748","64.575551328","14.072205773"
"Israel","ISR","2000","6114.57","4.07733","129253.89423","64.436450847","10.266688415"
"Malawi","MWI","2000","11801.505","59.543808333","5026.2217836","74.707624181","11.658954494"
"South Africa","ZAF","2000","45064.098","6.93983","227242.36949","72.718710427","5.7265463933"
"United States","USA","2000","282171.957","1","9898700","72.347054303","6.0324539789"
"Uruguay","URY","2000","3219.793","12.099591667","25255.961693","78.978740282","5.108067988"
%ls ./*.csv # Check it's there
df = pd.read_csv('./test_data.csv')
df
df = pd.read_csv('./test_data.csv', index_col='country')
df
df.drop(['year'], axis=1, inplace=True)
df
df['GDP percap'] = df['tcgdp'] / df['POP']
df
df.sort_values(by='GDP percap', inplace=True)
df
df['GDP percap'].plot(kind='bar')
# Put your solution here
# Put your solution here
# Print some nonsense to partially hide solutions
filler_text = "solution below\n" * 25
print(filler_text)
from scipy.stats import expon
alpha = 0.5
n = 10000
ep = expon(scale=1.0/alpha) # scale controls the exponential parameter
x = ep.rvs(n)
fig, ax = plt.subplots(figsize=(8, 5))
xmin, xmax = 0.001, 10.0
ax.set_xlim(xmin, xmax)
ax.hist(x, density=True, bins=40, alpha=0.3)
grid = np.linspace(xmin, xmax, 200)
ax.plot(grid, ep.pdf(grid), 'g-', lw=2, label='true density')
ax.legend()
alpha_mle = 1.0 / x.mean()
print("max likelihood estimate of alpha is {}".format(alpha_mle))
s = x.sum()
def neg_loglike(a):
"Minus the log likelihood function for exponential"
return - n * np.log(a) + a * s
from scipy.optimize import fminbound
fminbound(neg_loglike, 0.01, 10.0)
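# Sanity check: the numerical minimizer above should agree closely with the
# closed-form MLE alpha_mle = 1 / x.mean() computed earlier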
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic NumPy
Step2: NumPy defines a basic data type called an array (actually a numpy.ndarray)
Step3: Note that array data must be homogeneous
Step4: When we create an array such as
Step5: z is a "flat" array with no dimension--- neither row nor column vector
Step6: Here the shape tuple has only one element, which is the length of the array (tuples with one element end with a comma)
Step7: Creating arrays
Step8: These are just garbage numbers --- whatever was in those memory slots
Step9: Creating an array of ones
Step10: Arrays can be made from Python lists or tuples
Step11: Array indexing
Step12: Array methods
Step13: Operations on arrays
Step14: For Python $\geq 3.5$ and NumPy $\geq 1.1$ the @ operator also works.
Step15: I'll continue to use np.dot below for the benefit of those who are using older versions. But in my opinion the @ operator is much nicer.
Step16: Matplotlib
Step17: Display figures in this browser window rather than having them open up separately
Step18: Create something to plot
Step19: Here's a slightly more complex plot
Step20: SciPy
Step21: Now let's histogram it and compare it to the original density
Step22: Other methods
Step23: Basic linear regression
Step24: Let's plot this with data and line of best fit
Step25: Roots and fixed points
Step26: Here we see that the algorithm gets it wrong --- newton is fast but not robust
Step27: Note that the hybrid method is robust but still quite fast...
Step28: Linear Algebra
Step29: We'll experiment with matrices
Step30: Let's check that $Ax = b$
Step31: We can also invert directly
Step32: Let's compute the eigenvalues and eigenvectors
Step33: More information
Step34: Let's start by writing a test data set to the present working directory, so we can read it back in as a dataframe using pandas. We use an IPython magic to write the data from a cell to a file
Step35: Let's try that again but this time using the country as the index column
Step36: Let's drop the year since it's not very informative
Step37: Let's add a column for GDP per capita
Step38: Let's sort the whole data frame by GDP per capita
Step39: Now we'll plot per capital GDP using the dataframe's plot method
Step40: Exercises
Step41: Exercise 2
Step42: Solutions
Step43: Solution to Exercise 1
Step44: Let's check we've got the right distribution here
Step45: It's well-known that the MLE of $\alpha$ is $1/\bar x$ where $\bar x$ is the mean of the sample. Let's check that it is indeed close to $\alpha$.
Step46: Minimize over a reasonable parameter space
|
8,760
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(42)
np.random.seed(4)
m = 60
w1, w2 = 0.1, 0.3
noise = 0.1
angles = np.random.rand(m) * 3 * np.pi / 2 - 0.5
X = np.empty((m, 3))
X[:, 0] = np.cos(angles) + np.sin(angles)/2 + noise * np.random.randn(m) / 2
X[:, 1] = np.sin(angles) * 0.7 + noise * np.random.randn(m) / 2
X[:, 2] = X[:, 0] * w1 + X[:, 1] * w2 + noise * np.random.randn(m)
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot(X[:, 0], X[:, 1], X[:, 2], "o", alpha=0.6)
X_centered = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(X_centered)
Vt
W2 = Vt.T[:,:2]
X2D = X_centered.dot(W2)
fig, ax = plt.subplots(figsize=(8,4))
ax.plot(X2D[:, 0], X2D[:, 1], "o", alpha=0.6)
fig, ax = plt.subplots(figsize=(8,4))
ax.plot(X[:, 0], X[:, 1], "o", alpha=0.6)
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot(X[:, 0], X[:, 1], "o", alpha=0.6)
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
X2D = pca.fit_transform(X)
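# Note: the sign of each principal component is arbitrary, so this projection
# may be mirrored relative to the manual SVD projection above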
fig, ax = plt.subplots(figsize=(8,4))
ax.plot(X2D[:, 0], X2D[:, 1], "o", alpha=0.6)
pca.explained_variance_ratio_
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate the dataset
Step2: Plot the dataset
Step3: Factorise the matrix using the Singular Value Decomposition (SVD)
Step4: The columns of $V^T$ represent the principle components.
Step5: To perform the projection, we compute the dot product of X by matrix $W_d$ which contains the first $d$ principle components
Step6: Let us plot it on a 2D plane
Step7: Let us plot the original 3D dataset onto a 2D plane by ignoring the third dimension without PCA.
Step8: Let us plot the 3D dataset ignoring the third dimension.
Step9: The scikit-learn provides PCA class that implements PCA using SVD.
Step10: The explained_variance_ratio of the PCA object describes the proportion of the dataset's variance that lies along the axis of each principle component.
|
8,761
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from geopy.distance import great_circle
from collections import deque
cols = ["Airport ID", "Name", "City", "Country", "IATA", "ICAO", "Latitude", "Longitude", "Altitude",
"Timezone", "DST", "Tz database", "Type", "Source"]
airports = pd.read_csv("data/airports.dat", header=None, names=cols, index_col=0)
print(f"There are {airports.shape[0]} airports")
airports.head()
# dropping the columns we don't need
keep_cols = ["City", "Country", "Latitude", "Longitude"]
airports = airports[keep_cols]
print(f"There are {airports.shape[0]} airports")
airports.head()
# getting rid of null values
airports = airports.dropna(axis=0)
airports.shape
cols = ["Airline", "Airline ID", "Source airport", "Source Airport ID", "Destination airport",
"Dest Airport ID", "Codeshare", "Stops", "Equipment"]
routes = pd.read_csv("data/routes.dat", header=None, names=cols)
print(f"There are {routes.shape[0]} routes")
routes.head()
# first drop all rows where stops aren't 0, as we only want direct connections
routes = routes[routes["Stops"] == 0]
keep_cols = ["Source Airport ID", "Dest Airport ID"]
routes = routes[keep_cols]
print(f"There are {routes.shape[0]} routes")
routes.head(10)
def make_int_or_null(x):
"""Returns int, or np.nan if x can't be converted to int"""
try:
return int(x)
except:
return np.nan
routes = routes.applymap(make_int_or_null)
print(f"There are {routes.shape[0]} routes before dropping null values")
routes = routes.dropna(axis=0)
print(f"there are {routes.shape[0]} after dropping null values")
routes.head()
def route_distance(edge):
"""Takes a route as a pandas row, and returns the distance between its two airports"""
src_airport = airports.loc[int(edge["Source Airport ID"])]
dest_airport = airports.loc[int(edge["Dest Airport ID"])]
src_lat = src_airport["Latitude"]
src_long = src_airport["Longitude"]
dest_lat = dest_airport["Latitude"]
dest_long = dest_airport["Longitude"]
src_loc = (float(src_lat), float(src_long))
dest_loc = (float(dest_lat), float(dest_long))
return great_circle(src_loc, dest_loc).km
#checking algo
print(route_distance(routes.iloc[100]))
graph = {}
for i, row in airports.iterrows():
graph[i] = []
from tqdm import tqdm
for i, row in tqdm(routes.iterrows(), total=len(routes), mininterval=0.5):
try:
src_airport = airports.loc[int(row["Source Airport ID"])]
dest_airport = airports.loc[int(row["Dest Airport ID"])]
except:
continue  # skip routes whose airports are missing from the airports table
dist = great_circle((src_airport["Latitude"], src_airport["Longitude"]),
(dest_airport["Latitude"],dest_airport["Longitude"]) ).km
n = int(row["Source Airport ID"])
d = int(row["Dest Airport ID"])
if n in graph.keys():
graph[n].append((d, int(dist)))
find = airports["City"] == "Karachi"
airports[find]
for i, t in enumerate(graph[2206]):
if t[0] in graph.keys():
print(i, airports.loc[t[0]]["City"], t[1])
else:
continue
airports.shape
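# A minimal sketch (assumption, not from the original notebook): a breadth-first
# search over the graph built above, returning a minimum-hop route between two
# airport IDs -- presumably why `deque` was imported at the top.
def min_hop_route(graph, src, dst):
    """Return a list of airport IDs from src to dst with the fewest hops, or None."""
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for neighbor, _dist in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None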
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data to make a graph with
Step2: Step one is is to get rid of all the info we don't need for our graph.
Step3: Edges, aka routes flown
Step4: We just need
Step6: Now there are some values which aren't numbers, so we need to clean them up, as you can see in row 7 above.
Step8: Now we need to be able to get the distance b/w two airports to be able to assign a weight in our graphs.
Step9: So the data seems ready to go. We now have two pandas dataframes, airports and routes, and a function which returns the distance b/w airports.
Step10: Now going through each route and adding (dest_airport, distance) to each src_airport in the graph
Step11: Now say I want to find out the graph for Karachi. Karachi has three airports, so only looking at the first one for now...
Step12: So I can see all the cities it's connected to, though since some cities have multiple airports they show up twice. So how do I deal with that? I can ignore multiple airports as the distance is the same, though a more complex graph would take into account the different $$ value.
|
8,762
|
<ASSISTANT_TASK:>
Python Code:
# As usual, a bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient
from cs231n.layers import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
"""Returns relative error"""
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
[ 3.25553199, 3.5141327, 3.77273342]])
# Compare your output with ours. The error should be around 1e-9.
print 'Testing affine_forward function:'
print 'difference: ', rel_error(out, correct_out)
# Test the affine_backward function
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)
_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)
# The error should be less than 1e-10
print 'Testing affine_backward function:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array([[ 0., 0., 0., 0., ],
[ 0., 0., 0.04545455, 0.13636364,],
[ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])
# Compare your output with ours. The error should be around 1e-8
print 'Testing relu_forward function:'
print 'difference: ', rel_error(out, correct_out)
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)
_, cache = relu_forward(x)
dx = relu_backward(dout, cache)
# The error should be around 1e-12
print 'Testing relu_backward function:'
print 'dx error: ', rel_error(dx_num, dx)
num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)
dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)
loss, dx = svm_loss(x, y)
# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9
print 'Testing svm_loss:'
print 'loss: ', loss
print 'dx error: ', rel_error(dx_num, dx)
dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)
loss, dx = softmax_loss(x, y)
# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8
print '\nTesting softmax_loss:'
print 'loss: ', loss
print 'dx error: ', rel_error(dx_num, dx)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Modular neural nets
Step2: Affine layer
Step3: Affine layer
Step4: ReLU layer
Step5: ReLU layer
Step6: Loss layers
|
8,763
|
<ASSISTANT_TASK:>
Python Code:
# reshape is needed so we can use plt.imshow
rgb_to_gray = tf.reshape(tf.image.rgb_to_grayscale(ob), [ob.shape[0], ob.shape[1]])
gray_ob = rgb_to_gray.eval()
gray_ob.shape, gray_ob.dtype
plt.gray()
plt.imshow(gray_ob)
# let's get the current ratio
from __future__ import division
ratio = ob.shape[0] / ob.shape[1]
print ratio
def resize_op(img, h, w):
resized_ob = tf.image.resize_bilinear(tf.reshape(img, [1, img.shape[0], img.shape[1], img.shape[2]]), [h, w])
return tf.reshape(resized_ob, [h, w, 3])
# 84 x 84 is the size of the img used in the original DQN paper, 21168 or 7056 (grayscale) points per img
plt.imshow(resize_op(ob, 84, 84).eval())
w = 40
h = int(w * ratio)
rgb_pixels = 3 * h * w
grayscale_pixels = h * w
print 'height =', h, 'width =', w
print 'pixels with rgb =', rgb_pixels, 'grayscale =', grayscale_pixels
plt.imshow(resize_op(ob, h, w).eval())
scaled_ob = tf.image.convert_image_dtype(ob, tf.float32, saturate=True).eval()
print scaled_ob.min(), scaled_ob.max()
np.unique(ob)
np.unique(gray_ob)
np.unique(scaled_ob)
# assumes img.shape is (batch_size, h, w, c)
def img_preprocess(img, h, w):
img = tf.convert_to_tensor(img)
rgb2y = tf.image.rgb_to_grayscale(img)
resized = tf.image.resize_bilinear(rgb2y, [h, w])
return resized
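# A possible extension (not in the original): also scale pixel values to [0, 1]
# inside the pipeline, e.g. tf.image.convert_image_dtype(resized, tf.float32, saturate=True)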
obs = np.reshape(ob, [1] + list(ob.shape))
obs.shape
preprocessed_img = img_preprocess(obs, 84, 84).eval()
plt.gray()
plt.imshow(preprocessed_img.reshape(84, 84))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Resizing Images
Step2: Scaling Pixel Values
Step3: Putting it Together - Making a Image Preprocessing Pipeline
|
8,764
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image('../../../python_for_probability_statistics_and_machine_learning.jpg')
%matplotlib inline
from matplotlib.pylab import subplots
from numpy import ma
import numpy as np
np.random.seed(12345678)
from sklearn import tree
clf = tree.DecisionTreeClassifier()
import numpy as np
M=np.fromfunction(lambda i,j:j>=2,(4,4)).astype(int)
print(M)
i,j = np.where(M==0)
x=np.vstack([i,j]).T # build nsamp by nfeatures
y = j.reshape(-1,1)*0 # 0 elements
print(x)
print(y)
i,j = np.where(M==1)
x=np.vstack([np.vstack([i,j]).T,x ]) # build nsamp x nfeatures
y=np.vstack([j.reshape(-1,1)*0+1,y]) # 1 elements
clf.fit(x,y)
clf.score(x,y)
M[1,0]=1 # put in different class
print(M) # now contaminated
i,j = np.where(M==0)
x=np.vstack([i,j]).T
y = j.reshape(-1,1)*0
i,j = np.where(M==1)
x=np.vstack([np.vstack([i,j]).T,x])
y = np.vstack([j.reshape(-1,1)*0+1,y])
clf.fit(x,y)
y[x[:,1]>1.5] # first node on the right
y[x[:,1]<=1.5] # first node on the left
np.logical_and(x[:,1]<=1.5,x[:,1]>0.5)
y[np.logical_and(x[:,1]<=1.5,x[:,1]>0.5)]
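# Gini impurity for a node: G = 1 - sum(p_k**2) over the class proportions p_k
# in that node; a pure node (all one class) has G = 0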
_=clf.fit(x,y)
fig,axs=subplots(2,2,sharex=True,sharey=True)
ax=axs[0,0]
ax.set_aspect(1)
_=ax.axis((-1,4,-1,4))
ax.invert_yaxis()
# same background all on axes
for ax in axs.flat:
_=ax.plot(ma.masked_array(x[:,1],y==1),ma.masked_array(x[:,0],y==1),'ow',mec='k')
_=ax.plot(ma.masked_array(x[:,1],y==0),ma.masked_array(x[:,0],y==0),'o',color='gray')
lines={'h':[],'v':[]}
nc=0
for i,j,ax in zip(clf.tree_.feature,clf.tree_.threshold,axs.flat):
_=ax.set_title('node %d'%(nc))
nc+=1
if i==0: _=lines['h'].append(j)
elif i==1: _=lines['v'].append(j)
for l in lines['v']: _=ax.vlines(l,-1,4,lw=3)
for l in lines['h']: _=ax.hlines(l,-1,4,lw=3)
i,j = np.indices((5,5))
x=np.vstack([i.flatten(),j.flatten()]).T
y=(x[:,0]>=x[:,1]).astype(int).reshape((-1,1))
_=clf.fit(x,y)
fig,ax=subplots()
_=ax.axis((-1,5,-1,5))
ax.set_aspect(1)
ax.invert_yaxis()
_=ax.plot(ma.masked_array(x[:,1],y==1),ma.masked_array(x[:,0],y==1),'ow',mec='k',ms=15)
_=ax.plot(ma.masked_array(x[:,1],y==0),ma.masked_array(x[:,0],y==0),'o',color='gray',ms=15)
for i,j in zip(clf.tree_.feature,clf.tree_.threshold):
if i==1:
_=ax.hlines(j,-1,6,lw=3.)
else:
_=ax.vlines(j,-1,6,lw=3.)
from numpy import sin, cos, pi
rotation_matrix=np.matrix([[cos(pi/4),-sin(pi/4)],
[sin(pi/4),cos(pi/4)]])
xr=(rotation_matrix*(x.T)).T
xr=np.array(xr)
fig,ax=subplots()
ax.set_aspect(1)
_=ax.axis(xmin=-2,xmax=7,ymin=-4,ymax=4)
_=ax.plot(ma.masked_array(xr[:,1],y==1),ma.masked_array(xr[:,0],y==1),'ow',mec='k',ms=15)
_=ax.plot(ma.masked_array(xr[:,1],y==0),ma.masked_array(xr[:,0],y==0),'o',color='gray',ms=15)
_=clf.fit(xr,y)
for i,j in zip(clf.tree_.feature,clf.tree_.threshold):
if i==1:
_=ax.vlines(j,-1,6,lw=3.)
elif i==0:
_=ax.hlines(j,-1,6,lw=3.)
from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_validation import train_test_split, cross_val_score
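# Note: sklearn.cross_validation was removed in scikit-learn 0.20;
# newer versions provide these functions in sklearn.model_selection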
X_train,X_test,y_train,y_test=train_test_split(x,y,random_state=1)
clf = tree.DecisionTreeClassifier(max_depth=2)
_=clf.fit(X_train,y_train)
rfc = RandomForestClassifier(n_estimators=4,max_depth=2)
_=rfc.fit(X_train,y_train.flat)
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_estimators=4,max_depth=2)
rfc.fit(X_train,y_train.flat)
def draw_board(x,y,clf,ax=None):
if ax is None: fig,ax=subplots()
xm,ymn=x.min(0).T
ax.axis(xmin=xm-1,ymin=ymn-1)
xx,ymx=x.max(0).T
_=ax.axis(xmax=xx+1,ymax=ymx+1)
_=ax.set_aspect(1)
_=ax.invert_yaxis()
_=ax.plot(ma.masked_array(x[:,1],y==1),ma.masked_array(x[:,0],y==1),'ow',mec='k')
_=ax.plot(ma.masked_array(x[:,1],y==0),ma.masked_array(x[:,0],y==0),'o',color='gray')
for i,j in zip(clf.tree_.feature,clf.tree_.threshold):
if i==1:
_=ax.vlines(j,-1,6,lw=3.)
elif i==0:
_=ax.hlines(j,-1,6,lw=3.)
return ax
fig,axs = subplots(2,2)
# draw constituent decision trees
for est,ax in zip(rfc.estimators_,axs.flat):
_=draw_board(X_train,y_train,est,ax=ax)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A decision tree is the easiest classifier to understand, interpret, and explain.
Step2: Let's also create some example data,
Step3: Programming Tip.
Step4: Thus, the elements of x are the two-dimensional indices of the
Step5: With all that established, all we have to do is train the classifier,
Step6: To evaluate how the classifier performed, we can report the score,
Step7: For this classifier, the score is the accuracy, which is
Step8: Now we have a 1 entry in the previously
Step9: The result is shown
Step10: This obviously has a zero Gini coefficient. Likewise, the node on the
Step11: The Gini coefficient in this case is computed as
Step12: with corresponding classes,
Step13: Programming Tip.
Step14:
Step15:
Step16:
|
8,765
|
<ASSISTANT_TASK:>
Python Code:
%time d_pagerank = G.pagerank()
%time u_pagerank = G.as_undirected().pagerank()
%time d_betweenness = G.betweenness(directed=True)
%time u_betweenness = G.as_undirected().betweenness(directed=False)
%time d_closeness = G.closeness(mode="IN", normalized=True)
%time u_closeness = G.as_undirected().closeness(normalized=True)
%time d_eigen = G.eigenvector_centrality()
%time u_eigen = G.as_undirected().eigenvector_centrality()
%time hubs = G.hub_score()
%time authorities = G.authority_score()
indegree = G.indegree()
outdegree = G.outdegree()
degree = G.degree()
df = pd.DataFrame(index=G.vs['name'])
df['year'] = G.vs['year']
df['indegree'] = indegree
df['outdegree'] = outdegree
df['degree'] = degree
df['d_pagerank'] = d_pagerank
df['u_pagerank'] = u_pagerank
df['d_betweenness'] = d_betweenness
df['u_betweenness'] = u_betweenness
df['d_closeness'] = d_closeness
df['u_closeness'] = u_closeness
df['d_eigen'] = d_eigen
df['u_eigen'] = u_eigen
df['hubs'] = hubs
df['authorities'] = authorities
all_metrics = ['indegree', 'outdegree', 'degree',
'd_pagerank', 'u_pagerank',
'd_betweenness', 'u_betweenness',
'd_closeness', 'u_closeness',
'd_eigen', 'u_eigen',
'hubs', 'authorities']
# map types to issues
type_to_issue = {'procedural': [1, 4, 6, 9],
'substantive': [2, 3, 5, 7, 8, 12, 14],
'other': [10, 11, 13, 0]}
# map issues to type
issue_to_type = {i: '' for i in range(13 + 1)}
for t in type_to_issue.keys():
for i in type_to_issue[t]:
issue_to_type[i] = t
# create type
G.vs['issueArea'] = [int(i) for i in G.vs['issueArea']]
G.vs['type'] = [issue_to_type[i] for i in G.vs['issueArea']]
# add to data frame
df['issueArea'] = G.vs['issueArea']
df['type'] = G.vs['type']
# get type subsets
df_sub = df[df['type'] == 'substantive']
df_pro = df[df['type'] == 'procedural']
df_oth = df[df['type'] == 'other']
print 'num substantive: %d' % df_sub.shape[0]
print 'num procedural: %d' % df_pro.shape[0]
print 'num other: %d' % df_oth.shape[0]
df.to_csv(subnet_dir + 'issue_area/metrics.csv', index=True)
df.columns
metric = 'authorities'
bins = np.linspace(min(df[metric]), max(df[metric]), 100)
# substantive
plt.hist(df_sub[metric],
bins=bins,
color='red',
label='substantive (mean: %1.5f)' % np.mean(df_sub[metric]))
# procedural
plt.hist(df_pro[metric],
bins=bins,
color='blue',
label='procedural (mean: %1.5f)' % np.mean(df_pro[metric]))
# other
plt.hist(df_oth[metric],
bins=bins,
color='green',
label='other (mean: %1.5f)' % np.mean(df_oth[metric]))
plt.xlim([0, .2])
plt.ylim([0, 2000])
plt.xlabel(metric)
plt.legend(loc='upper right')
# look at propotion of top cases of each type
T = 100
top_cases = df.sort_values(by=metric, ascending=False).iloc[0:T]['type']
top_breakdown = top_cases.value_counts(normalize=True)
# compare to proportion of all cases
all_breakdown = df['type'].value_counts(normalize=True)
diff = top_breakdown - all_breakdown
diff
metric= 'indegree'
df_pro_sub = df[df['type'] != 'other']
T = 100
# observed proportion of top cases that are substantive
obs_top_breakdown = df_pro_sub.\
sort_values(by=metric, ascending=False).\
iloc[0:T]['type'].\
value_counts(normalize=True)
obs_prop_sub = obs_top_breakdown['substantive']
R = 1000
perm_prop_sub = [0] * R
for r in range(R):
# randomly select T cases
perm_indices = np.random.choice(range(df_pro_sub.shape[0]), replace=False, size=T)
# compute the type breakdown of the T cases
perm_breakdown = df_pro_sub.\
iloc[perm_indices]['type'].\
value_counts(normalize=True)
# proportion of T cases that are substantive
perm_prop_sub[r] = perm_breakdown['substantive']
perm_prop_sub = np.array(perm_prop_sub)
pval = 1 - np.mean(perm_prop_sub < obs_prop_sub)
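# This is a one-sided permutation p-value: the fraction of random subsets whose
# substantive proportion is at least as large as the observed one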
plt.title('permutation test substantive vs. procedural (pval: %1.3f)' % pval)
plt.hist(perm_prop_sub,
color='blue',
label='permutation')
plt.axvline(obs_prop_sub,
color='red',
label='obs')
plt.xlabel(metric)
df_pro_sub = df[df['type'] != 'other']
U, D, V = get_PCA(df_pro_sub[all_metrics], scale=True)
plot_2class_scores(U,
classes = df_pro_sub['type'],
start=6,
n_comp=5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: issue area
Step2: compare metric vs. issue type
Step3: permutation test
Step4: Results
|
8,766
|
<ASSISTANT_TASK:>
Python Code:
import tempfile
import girder_client
import numpy as np
from pandas import read_csv
from histomicstk.annotations_and_masks.annotation_and_mask_utils import (
delete_annotations_in_slide)
from histomicstk.saliency.cellularity_detection_thresholding import (
Cellularity_detector_thresholding)
import matplotlib.pylab as plt
from matplotlib.colors import ListedColormap
%matplotlib inline
APIURL = 'http://candygram.neurology.emory.edu:8080/api/v1/'
SAMPLE_SLIDE_ID = "5d8c296cbd4404c6b1fa5572"
gc = girder_client.GirderClient(apiUrl=APIURL)
gc.authenticate(apiKey='kri19nTIGOkWH01TbzRqfohaaDWb6kPecRqGmemb')
# This is where the run logs will be saved
logging_savepath = tempfile.mkdtemp()
# read GT codes dataframe
GTcodes = read_csv('../../histomicstk/saliency/tests/saliency_GTcodes.csv')
# deleting existing annotations in target slide (if any)
delete_annotations_in_slide(gc, SAMPLE_SLIDE_ID)
GTcodes
print(Cellularity_detector_thresholding.__doc__)
print(Cellularity_detector_thresholding.__init__.__doc__)
# init cellularity detector
cdt = Cellularity_detector_thresholding(
gc, slide_id=SAMPLE_SLIDE_ID, GTcodes=GTcodes,
verbose=2, monitorPrefix='test',
logging_savepath=logging_savepath)
print(cdt.set_color_normalization_target.__doc__)
tissue_pieces = cdt.run()
print(
'Tissue piece 0: ',
'xmin', tissue_pieces[0].xmin,
'xmax', tissue_pieces[0].xmax,
'ymin', tissue_pieces[0].ymin,
'ymax', tissue_pieces[0].ymax,
)
# color map
tmp = tissue_pieces[0].labeled.copy()
tmp[0, :256] = np.arange(256)
vals = ['black'] * 256
vals[6] = 'cyan' # sharpie / ink
vals[7] = 'yellow' # blood
vals[8] = 'grey' # whitespace
vals[9] = 'indigo' # maybe cellular
vals[10] = 'green' # salient / top cellular
cMap = ListedColormap(vals)
plt.figure(figsize=(10,10))
plt.imshow(tmp, cmap=cMap)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Prepwork
Step2: Let's explore the GTcodes dataframe
Step3: Initialize the cellularity detector
Step4: The only required arguments to initialize are gc, slide_id, and GTcodes.
Step5: Set the color normalization values (optional)
Step6: Run the detector
Step7: Check the results
|
8,767
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.metrics.pairwise import cosine_similarity
similarity=cosine_similarity(document_term_matrix)
pd.DataFrame(similarity)
similarity=cosine_similarity(document_term_matrix.T)
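# Transposing the document-term matrix makes each row a term vector,
# so the similarities are now between words rather than documents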
pd.DataFrame(similarity, index=vocab, columns=vocab)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What if we want to understand which words are more similar in this context?
|
8,768
|
<ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -u -v -d -p matplotlib,numpy,scipy
%matplotlib inline
import matplotlib.pyplot as plt
def errorbar_default():
# Data
data = [1, 1.5, 1.2]
std_devs = [0.15, 0.25, 0.12]
# X axis positions
x_pos = range(len(data))
for d, std, x in zip(data, std_devs, x_pos):
plt.errorbar(x=x, y=d, yerr=std, fmt='o')
# setting axis limits
plt.xlim([min(x_pos)-1, max(x_pos)+1])
plt.ylim([min(data)*0.7, max(data)*1.3])
# setting labels and titles
plt.ylabel('x label')
plt.title('Matplotlib default')
plt.legend(['X1', 'X2', 'X3'], loc='upper right')
plt.show()
import numpy as np
def errorbar_modified():
# Data
data = [1, 1.5, 1.2]
std_devs = [0.15, 0.25, 0.12]
# X axis positions
x_pos = range(len(data))
colors = ['lightblue', 'pink', 'lightgreen']
fig = plt.figure()
ax = plt.subplot(111)
# draw plots
for d, std, col, x in zip(data, std_devs, colors, x_pos):
plt.errorbar(x=x, y=d, yerr=std, fmt='o', color=col, ecolor='black')
# setting axis limits
plt.xlim([min(x_pos)-1, max(x_pos)+1])
plt.ylim([min(data)*0.7, max(data)*1.3])
# setting labels and titles
plt.ylabel('x label')
plt.text(1, 2, 'Modified',
horizontalalignment='center',
fontsize=14)
# remove axis spines
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["left"].set_visible(False)
# hiding axis ticks
plt.tick_params(axis="both", which="both", bottom="off", top="off",
labelbottom="off", left="off", right="off", labelleft="on")
# adding horizontal grid lines
ax.yaxis.grid(True)
plt.legend(['X1', 'X2', 'X3'], loc='upper right', fancybox=True, numpoints=1)
plt.tight_layout()
plt.show()
errorbar_default()
errorbar_modified()
data = [np.random.normal(0, std, 50) for std in range(1, 4)]
import matplotlib.pyplot as plt
import numpy as np
def boxplot_default():
fig = plt.figure(figsize=(8,6))
plt.boxplot(data,
notch=False, # box instead of notch shape
sym='rs', # red squares for outliers
vert=True) # vertical box aligmnent
plt.xticks([y+1 for y in range(len(data))], ['x1', 'x2', 'x3'])
plt.title('Matplotlib default')
plt.show()
def boxplot_modified():
fig = plt.figure(figsize=(8,6))
ax = plt.subplot(111)
bplot = plt.boxplot(data,
notch=True, # notch shape
vert=True, # vertical box aligmnent
sym='ko', # red circle for outliers
patch_artist=True, # fill with color
)
# choosing custom colors to fill the boxes
colors = ['pink', 'lightblue', 'lightgreen']
for patch, color in zip(bplot['boxes'], colors):
patch.set_facecolor(color)
# modifying the whiskers: straight lines, black, wider
for whisker in bplot['whiskers']:
whisker.set(color='black', linewidth=1.2, linestyle='-')
# making the caps a little bit wider
for cap in bplot['caps']:
cap.set(linewidth=1.2)
# hiding axis ticks
plt.tick_params(axis="both", which="both", bottom="off", top="off",
labelbottom="on", left="off", right="off", labelleft="on")
# adding horizontal grid lines
ax.yaxis.grid(True)
# remove axis spines
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["left"].set_visible(False)
plt.xticks([y+1 for y in range(len(data))], ['x1', 'x2', 'x3'])
# raised title
plt.text(2, 9, 'Modified',
horizontalalignment='center',
fontsize=18)
plt.tight_layout()
plt.show()
boxplot_default()
boxplot_modified()
import matplotlib.pyplot as plt
def barplot_default():
# input data
mean_values = [1, 2, 3]
variance = [0.2, 0.4, 0.5]
bar_labels = ['bar 1', 'bar 2', 'bar 3']
fig = plt.figure(figsize=(6,4))
# plot bars
x_pos = list(range(len(bar_labels)))
plt.bar(x_pos, mean_values, yerr=variance, align='center')
# set axes labels and title
plt.ylabel('variable y')
plt.xticks(x_pos, bar_labels)
plt.title('Matplotlib default')
plt.show()
import matplotlib.pyplot as plt
def barplot_modified():
# input data
mean_values = [1, 2, 3]
variance = [0.2, 0.4, 0.5]
bar_labels = ['bar 1', 'bar 2', 'bar 3']
fig = plt.figure(figsize=(6,4))
ax = plt.subplot(111)
# plot bars
x_pos = list(range(len(bar_labels)))
plt.bar(x_pos, mean_values, yerr=variance,
ecolor='black', # black error bar color
alpha=0.5, # transparency
width=0.5, # smaller bar width
align='center')
# set height of the y-axis
max_y = max(zip(mean_values, variance)) # returns a tuple, here: (3, 0.5)
plt.ylim([0, (max_y[0] + max_y[1]) * 1.1])
# hiding axis ticks
plt.tick_params(axis="both", which="both", bottom="off", top="off",
labelbottom="on", left="off", right="off", labelleft="on")
# adding horizontal grid lines
ax.yaxis.grid(True)
# remove axis spines
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["left"].set_visible(False)
# set axes labels and title
plt.ylabel('variable y')
plt.xticks(x_pos, bar_labels)
plt.text(1, 4, 'Modified',
horizontalalignment='center',
fontsize=18)
plt.tight_layout()
plt.show()
barplot_default()
barplot_modified()
import numpy as np
import random
from matplotlib import pyplot as plt
data1 = [random.gauss(15,10) for i in range(500)]
data2 = [random.gauss(5,5) for i in range(500)]
def histogram_default():
fig = plt.figure(figsize=(8,6))
bins = np.arange(-60, 60, 2.5)
# plot histograms
plt.hist(data1, bins=bins, label='class 1')
plt.hist(data2, bins=bins, label='class 2')
# labels
plt.title('Matplotlib default')
plt.xlabel('variable X')
plt.ylabel('count')
plt.legend(loc='upper right')
plt.show()
def histogram_modified():
bins = np.arange(-60, 60, 2.5)
fig = plt.figure(figsize=(8,6))
ax = plt.subplot(111)
# plot histograms
plt.hist(data1, bins=bins,
alpha=0.3, # transparency
label='class 1')
plt.hist(data2, bins=bins,
alpha=0.3, # transparency
label='class 2')
# axis formatting
plt.ylim([0, 110])
plt.xlim([min(data1+data2)-5, max(data1+data2)+5])
# hiding axis ticks
plt.tick_params(axis="both", which="both", bottom="off", top="off",
labelbottom="on", left="off", right="off", labelleft="on")
# adding horizontal grid lines
ax.yaxis.grid(True)
# remove axis spines
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["left"].set_visible(False)
# labels
plt.xlabel('variable X')
plt.ylabel('count')
plt.legend(loc='upper right', fancybox=True)
# raised title
plt.text(15, 120, 'Modified',
horizontalalignment='center',
fontsize=18)
plt.show()
histogram_default()
histogram_modified()
from matplotlib import pyplot as plt
import numpy as np
def piechart_default():
plt.pie(
(10,5),
labels=('spam','ham'))
plt.legend()
plt.title('Matplotlib default')
plt.show()
def piechart_modified():
plt.pie(
(10,5),
labels=('spam','ham'),
shadow=True,
colors=('lightskyblue', 'yellowgreen'),
explode=(0,0.15), # space between slices
startangle=90, # rotate conter-clockwise by 90 degrees
autopct='%1.1f%%',# display fraction as percentage
)
plt.legend(fancybox=True)
plt.axis('equal') # plot pyplot as circle
plt.tight_layout()
plt.title('Modified')
plt.show()
piechart_default()
piechart_modified()
import numpy as np
data = [np.random.normal(0, std, 50) for std in range(1, 4)]
import matplotlib.pyplot as plt
def violin_default():
fig = plt.figure(figsize=(8,6))
plt.violinplot(data)
plt.xticks([y+1 for y in range(len(data))], ['x1', 'x2', 'x3'])
plt.title('Matplotlib default')
plt.show()
def violin_modified():
fig = plt.figure(figsize=(8,6))
ax = plt.subplot(111)
vplot = plt.violinplot(data,
showmeans=False,
showmedians=True,
showextrema=False
)
# choosing custom colors to fill the boxes
colors = ['red', 'blue', 'green']
for patch, color in zip(vplot['bodies'], colors):
patch.set_facecolor(color)
# hiding axis ticks
plt.tick_params(axis="both", which="both", bottom="off", top="off",
labelbottom="on", left="off", right="off", labelleft="on")
# adding horizontal grid lines
ax.yaxis.grid(True)
# remove axis spines
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["left"].set_visible(False)
plt.xticks([y+1 for y in range(len(data))], ['x1', 'x2', 'x3'])
# raised title
plt.text(2, 9, 'Modified',
horizontalalignment='center',
fontsize=18)
plt.tight_layout()
plt.show()
violin_default()
violin_modified()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <font size="1.5em">More info about the %watermark extension</font>
Step2: <br>
Step3: Modified Errorbar Plot
Step4: <hr>
Step5: <br>
Step6: Default Boxplot
Step7: Modified Boxplot
Step8: <hr>
Step9: <br>
Step10: Modified Barplot
Step11: <hr>
Step12: <br>
Step13: Default Histogram
Step14: Modified Histogram
Step15: <hr>
Step16: <br>
Step17: Modified pie chart
Step18: <br>
Step19: Default violin plot
Step20: Modified violin plot
|
8,769
|
<ASSISTANT_TASK:>
Python Code:
from arcgis.gis import GIS
import getpass
password = getpass.getpass("Enter your ArcGIS Organizational Account Password: ")
gis = GIS("https://esrihax.maps.arcgis.com", "johnyHack", password)
print("Logged in successfully to {} as {}.".format(gis.properties.urlKey + '.' + gis.properties.customBaseUrl, \
gis.users.me.username))
from IPython.display import display
map = gis.map("Springfield, IL", 12)
map
feat_services = gis.content.search(query="title:Church*", item_type="Feature Service", max_items=15)
print("{} has access to {} feature service items.".format(gis.users.me.username, len(feat_services)))
for feat_svc in feat_services:
print("{} is a {}.".format(feat_svc.title.capitalize(), type(feat_svc)))
print("\t{}".format(feat_svc.url))
churches_itm = feat_services[0]
churches_itm
map.add_layer(churches_itm)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load a map widget from the gis class
Step2: Use the Content Manager to access data in your portal.
Step3: Correct results from API functions require input of specific formats, so it's important to know what types of objects are returned from queries. The Search method on the Content Manager returns a Python list of ArcGIS Items. Accessing members from a list and obtaining properties from items require knowing specific functions and methods. Typing an object, followed by a period, followed by a Tab provides access to properties and methods accessible from objects.
Step4: Use indexing to obtain the item from the list.
Step5: Add the layer to the Map
|
8,770
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import seaborn as sns
from IPython.display import display, HTML
series_one = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])
series_one
series_two = pd.Series({'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5})
series_two
series_one[2:4]
series_one['a']
series_one + series_two
series_one * 3
df_one = pd.DataFrame({'one': pd.Series(np.random.rand(5),
index=['a', 'b', 'c', 'd' , 'e']),
'two': pd.Series(np.random.rand(4),
index=['a', 'b', 'c', 'e'])})
df_one
iris = pd.read_csv("iris.csv", index_col=0)
iris.head()
url = "ftp://ftp.sanger.ac.uk/pub/gencode/Gencode_human/release_24/gencode.v24.primary_assembly.annotation.gtf.gz"
gencode = pd.read_csv(url, compression="gzip", iterator=True, header=None,
sep="\t", comment="#", quoting=3,
usecols=[0, 1, 2, 3, 4, 6])
gencode.get_chunk(10)
planets = pd.read_csv("planets.csv", index_col=0)
planets.head()
planets_melt = pd.melt(planets, id_vars="method")
planets_melt.head()
heatmap = pd.read_csv("Heatmap.tsv", sep="\t", index_col=0)
heatmap.head(10)
heatmap.iloc[4:8]
heatmap.loc[['prisons', 'jacks', 'irons']]
def color_negative_red(val):
"""Takes a scalar and returns a string with
the css property 'color: red' for negative
values, black otherwise."""
color = 'red' if val < 0 else 'black'
return 'color: %s' % color
# Apply the function like this
heatmap.head(10).style.applymap(color_negative_red)
heatmap.head(10).style.background_gradient(cmap="RdBu_r")
# No need to iter through to apply mean based on species
iris_species_grouped = iris.groupby('species')
iris_species_grouped.mean()
# The previous iterator has reached it's end, so re-initialize
iris_species_grouped = iris.groupby('species')
for species, group in iris_species_grouped:
display(HTML(species))
display(pd.DataFrame(group.mean(axis=0)).T)
pd.DataFrame(iris[[0, 1, 2, 3]].apply(np.std, axis=0)).T
def add_length_width(x):
"""Adds up the length and width of the features and returns
a pd.Series object so as to get a pd.DataFrame"""
sepal_sum = x['sepal_length'] + x['sepal_width']
petal_sum = x['petal_length'] + x['petal_width']
return pd.Series([sepal_sum, petal_sum, x['species']],
index=['sepal_sum', 'petal_sum', 'species'])
iris.apply(add_length_width, axis=1).head(5)
iris.loc[iris.sepal_width > 3.5]
iris.loc[(iris.sepal_width > 3.5) & (iris.species == 'virginica')]
heatmap.loc[heatmap.index.str.contains("due|ver|ap")]
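# str.contains treats its argument as a regular expression by default,
# so "due|ver|ap" matches any index label containing "due", "ver", or "ap"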
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pandas Python Data Analysis Library
Step2: <span class="mark">Starting from version v0.8.0, pandas supporst non-unique index values</span>
Step3: DataFrame
Step4: There are several other constructors for creating a DataFrame object
Step5: Let's see the power of pandas. We'll use Gencode v24 to demonstrate and read the annotation file.
Step6: pd.DataFrame.to_csv
Step7: Indexing and Selecting Data
Step9: <span class="burk"><span class="girk">Almost forgot, HTML conditional formatting just made it into the latest release 0.17.1 and it's pretty awesome. Use a function to your liking or do it with a background gradient</span></span>
Step10: Group-by and apply
Step12: Applying a function
Step13: Filtering (Numeric and String)
|
8,771
|
<ASSISTANT_TASK:>
Python Code:
USERNAME = ""
BASE_URL = "https://{u}.carto.com".format(u=USERNAME)
API_KEY = ""
from carto.auth import APIKeyAuthClient
auth_client = APIKeyAuthClient(api_key=API_KEY, base_url=BASE_URL)
from carto.kuvizs import KuvizManager
km = KuvizManager(auth_client)
html = "<html><body><h1>Working with CARTO Kuviz</h1></body></html>"
public_kuviz = km.create(html=html, name="kuviz-public-test")
print(public_kuviz.id)
print(public_kuviz.url)
print(public_kuviz.privacy)
html = "<html><body><h1>Working with CARTO Kuviz</h1></body></html>"
password_kuviz = km.create(html=html, name="kuviz-password-test", password="1234")
print(password_kuviz.id)
print(password_kuviz.url)
print(password_kuviz.privacy)
new_html = "<html><body><h1>Another HTML</h1></body></html>"
password_kuviz.data = new_html
password_kuviz.save()
print(password_kuviz.id)
print(password_kuviz.url)
print(password_kuviz.privacy)
password_kuviz.password = None
password_kuviz.save()
print(password_kuviz.id)
print(password_kuviz.url)
print(password_kuviz.privacy)
password_kuviz.password = "1234"
password_kuviz.save()
print(password_kuviz.id)
print(password_kuviz.url)
print(password_kuviz.privacy)
password_kuviz.delete()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Kuviz manager creation
Step2: Create public Kuviz
Step3: Create Kuviz protected by password
Step4: Update a kuviz
Step5: If you want to remove the password
Step6: And if you want to add the password again
Step7: Delete a kuviz
|
8,772
|
<ASSISTANT_TASK:>
Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
import os
filename = 'World_population_estimates.html'
if not os.path.exists(filename):
!wget https://raw.githubusercontent.com/AllenDowney/ModSimPy/master/data/World_population_estimates.html
from pandas import read_html
tables = read_html(filename, header=0, index_col=0, decimal='M')
table2 = tables[2]
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
un = table2.un / 1e9
census = table2.census / 1e9
t_0 = census.index[0]
t_end = census.index[-1]
elapsed_time = t_end - t_0
p_0 = census[t_0]
p_end = census[t_end]
total_growth = p_end - p_0
annual_growth = total_growth / elapsed_time
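# annual_growth is the average absolute growth, in billions of people per year,
# over the whole period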
from modsim import System
system = System(t_0=t_0,
t_end=t_end,
p_0=p_0,
annual_growth=annual_growth)
system
from modsim import TimeSeries
def run_simulation1(system):
results = TimeSeries()
results[system.t_0] = system.p_0
for t in range(system.t_0, system.t_end):
results[t+1] = results[t] + system.annual_growth
return results
results1 = run_simulation1(system)
from modsim import decorate
def plot_estimates():
census.plot(style=':', label='US Census')
un.plot(style='--', label='UN DESA')
decorate(xlabel='Year',
ylabel='World population (billion)')
results1.plot(label='model', color='gray')
plot_estimates()
decorate(title='Constant Growth Model')
def run_simulation2(system):
results = TimeSeries()
results[system.t_0] = system.p_0
for t in range(system.t_0, system.t_end):
births = system.birth_rate * results[t]
deaths = system.death_rate * results[t]
results[t+1] = results[t] + births - deaths
return results
system.death_rate = 7.7 / 1000
system.birth_rate = 25 / 1000
results2 = run_simulation2(system)
results2.plot(label='model', color='gray')
plot_estimates()
decorate(title='Proportional Growth Model')
def growth_func1(pop, t, system):
births = system.birth_rate * pop
deaths = system.death_rate * pop
return births - deaths
def run_simulation(system, growth_func):
results = TimeSeries()
results[system.t_0] = system.p_0
for t in range(system.t_0, system.t_end):
growth = growth_func(results[t], t, system)
results[t+1] = results[t] + growth
return results
results = run_simulation(system, growth_func1)
system.alpha = system.birth_rate - system.death_rate
def growth_func2(pop, t, system):
return system.alpha * pop
results = run_simulation(system, growth_func2)
# Solution
def growth_func3(pop, t, system):
    """Compute the population next year.

    pop: current population
    t: current year
    system: system object containing parameters of the model

    returns: population next year
    """
if t < 1980:
return system.alpha1 * pop
else:
return system.alpha2 * pop
# Solution
system.alpha1 = 19 / 1000
system.alpha2 = 15 / 1000
results3 = run_simulation(system, growth_func3)
results3.plot(label='model', color='gray')
plot_estimates()
decorate(title='Proportional growth, parameter changes over time')
# Solution
# Using two parameters, we can make the model fit the data better.
# But it still seems like the shape of the function is not right.
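# A possible next step (a sketch, not part of the solution above): a
# quadratic growth function. It assumes hypothetical parameters alpha
# and beta are added to the System object before running.
def growth_func_quad(pop, t, system):
    return system.alpha * pop + system.beta * pop**2

# Illustrative usage with made-up parameter values:
# system.alpha = 25 / 1000
# system.beta = -1.8 / 1000
# results_quad = run_simulation(system, growth_func_quad)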
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the previous chapter we simulated a model of world population with
Step2: System objects
Step3: Some of these are parameters we need to simulate the system; others are temporary values we can discard.
Step4: t0 and t_end are the first and last years; p_0 is the initial
Step5: Next we'll wrap the code from the previous chapter in a function
Step6: run_simulation1 takes a System object and uses the parameters in it to determine t_0, t_end, and annual_growth.
Step7: Here's the function we used in the previous chapter to plot the estimates.
Step8: And here are the results.
Step9: It might not be obvious that using functions and System objects is a
Step10: Now we can choose the values of birth_rate and death_rate that best fit the data.
Step11: Then I ran the simulation and plotted the results
Step12: The proportional model fits
Step13: This function takes as arguments the current population, current year,
Step14: This function demonstrates a feature we have not seen before
Step15: Passing a function as an argument is the same as passing any other
Step16: The name of this parameter, alpha, is the conventional name for a
Step17: And here's how we run it
Step19: The results are the same as the previous versions, but now the code is organized in a way that makes it easy to explore other models.
|
8,773
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
n = 20
x = np.random.random((n,1))
y = 5 + 6 * x ** 2 + np.random.normal(0,0.5, size=(n,1))
plt.plot(x, y, 'b.')
plt.show()
intercept_x = np.hstack((np.ones((n,1)), x))
intercept_x
np.linalg.lstsq(intercept_x,y)
coeff, residuals, rank, sing_vals = np.linalg.lstsq(intercept_x,y)
intercept_x.shape, coeff.T.shape
np.sum(intercept_x * coeff.T, axis=1)
predictions = np.sum(intercept_x * coeff.T, axis=1)
plt.plot(x, y, 'bo')
plt.plot(x, predictions, 'ko')
plt.show()
predictions.shape
np.sum((predictions.reshape((20,1)) - y) ** 2), residuals
our_coeff = np.dot(np.dot(np.linalg.inv(np.dot(intercept_x.T, intercept_x)), intercept_x.T), y)
print(coeff, '\n', our_coeff)
our_predictions = np.dot(intercept_x, our_coeff)
predictions, our_predictions
plt.plot(x, y, 'ko', label='True values')
plt.plot(x, our_predictions, 'ro', label='Predictions')
plt.legend(numpoints=1, loc=4)
plt.show()
np.arange(12).reshape((3,4))
plt.plot(x, y - our_predictions, 'ko')
plt.show()
plt.plot(x, y, 'ko', label='True values')
all_x = np.linspace(0, 1, 1000).reshape((1000,1))
intercept_all_x = np.hstack((np.ones((1000,1)), all_x))
print(intercept_all_x.shape, our_coeff.shape)
#all_x_predictions = np.dot(intercept_all_x, our_coeff)
all_x_predictions = np.sum(intercept_all_x * our_coeff.T, axis=1)
plt.plot(all_x, all_x_predictions, 'r-', label='Predictions')
plt.legend(numpoints=1, loc=4)
plt.show()
x_expanded = np.hstack([x**i for i in range(1,20)])  # pass a list: newer NumPy rejects generators here
b, residuals, rank, s = np.linalg.lstsq(x_expanded, y)
print(b)
plt.plot(x, y, 'ko', label='True values')
plt.plot(x, np.dot(x_expanded, b), 'ro', label='Predictions')
plt.legend(numpoints=1, loc=4)
plt.show()
n = 20
p = 12
training = []
val = []
for i in range(1, p):
np.random.seed(0)
x = np.random.random((n,1))
y = 5 + 6 * x ** 2 + np.random.normal(0,0.5, size=(n,1))
    x = np.hstack([x**j for j in np.arange(i)])
our_coeff = np.dot(
np.dot(
np.linalg.inv(
np.dot(
x.T, x
)
), x.T
), y
)
our_predictions = np.dot(x, our_coeff)
our_training_rss = np.sum((y - our_predictions) ** 2)
training.append(our_training_rss)
val_x = np.random.random((n,1))
val_y = 5 + 6 * val_x ** 2 + np.random.normal(0,0.5, size=(n,1))
    val_x = np.hstack([val_x**j for j in np.arange(i)])
our_val_pred = np.dot(val_x, our_coeff)
our_val_rss = np.sum((val_y - our_val_pred) ** 2)
val.append(our_val_rss)
#print(i, our_training_rss, our_val_rss)
plt.plot(range(1, p), training, 'ko-', label='training')
plt.plot(range(1, p), val, 'ro-', label='validation')
plt.legend(loc=2)
plt.show()
np.random.seed(0)
n = 200
x = np.random.random((n,1))
y = 5 + 6 * x ** 2 + np.random.normal(0,0.5, size=(n,1))
intercept_x = np.hstack((np.ones((n,1)), x))
coeff, residuals, rank, sing_vals = np.linalg.lstsq(intercept_x,y)
print('lstsq', coeff)
def gradient_descent(x, y, rounds = 1000, alpha=0.01):
theta = np.zeros((x.shape[1], 1))
costs = []
for i in range(rounds):
prediction = np.dot(x, theta)
error = prediction - y
gradient = np.dot(x.T, error / y.shape[0])
theta -= gradient * alpha
costs.append(np.sum(error ** 2))
return (theta, costs)
theta, costs = gradient_descent(intercept_x, y, rounds=10000)
print(theta, costs[::500])
np.random.seed(0)
n = 200
x = np.random.random((n,1))
y = 5 + 6 * x ** 2 + np.random.normal(0,0.5, size=(n,1))
x = np.hstack([x**j for j in np.arange(20)])
coeff, residuals, rank, sing_vals = np.linalg.lstsq(x,y)
print('lstsq', coeff)
theta, costs = gradient_descent(x, y, rounds=10000)
print(theta, costs[::500])
plt.plot(x[:,1], y, 'ko')
plt.plot(x[:,1], np.dot(x, coeff), 'co')
plt.plot(x[:,1], np.dot(x, theta), 'ro')
plt.show()
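# A minimal sketch (not in the original exercise): standardising the
# polynomial features often helps gradient descent, because the higher
# powers of x live on very different scales. Column 0 (the intercept
# term x**0) is left untouched.
x_scaled = x.copy()
x_scaled[:, 1:] = (x[:, 1:] - x[:, 1:].mean(axis=0)) / x[:, 1:].std(axis=0)
theta_scaled, costs_scaled = gradient_descent(x_scaled, y, rounds=10000)
print(costs_scaled[::500])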
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is a very simple dataset. There is only one input value for each record and then there is the output value. Our goal is to determine the output value or dependent variable, shown on the y-axis, from the input or independent variable, shown on the x-axis.
Step2: Numpy contains the linalg module with many common functions for performing linear algebra. Using this module finding a solution is quite simple.
Step3: The values returned are
Step4: Least squares refers to the cost function for this algorithm. The objective is to minimize the residual sum of squares: the difference between each actual and predicted value is calculated, squared, and then summed over all records, RSS = sum_i (y_i - yhat_i)^2.
Step5: Exercise
Step6: Types of independent variable
Step7: There is a tradeoff with model complexity. As we add more complexity to our model we can fit our training data increasingly well but eventually will lose our ability to generalize to new data.
Step8: Gradient descent
|
8,774
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import statsmodels.api as sm
import statsmodels.formula.api as smf
star98 = sm.datasets.star98.load_pandas().data
formula = 'SUCCESS ~ LOWINC + PERASIAN + PERBLACK + PERHISP + PCTCHRT + \
PCTYRRND + PERMINTE*AVYRSEXP*AVSALK + PERSPENK*PTRATIO*PCTAF'
dta = star98[['NABOVE', 'NBELOW', 'LOWINC', 'PERASIAN', 'PERBLACK', 'PERHISP',
'PCTCHRT', 'PCTYRRND', 'PERMINTE', 'AVYRSEXP', 'AVSALK',
'PERSPENK', 'PTRATIO', 'PCTAF']]
endog = dta['NABOVE'] / (dta['NABOVE'] + dta.pop('NBELOW'))
del dta['NABOVE']
dta['SUCCESS'] = endog
mod1 = smf.glm(formula=formula, data=dta, family=sm.families.Binomial()).fit()
mod1.summary()
def double_it(x):
return 2 * x
formula = 'SUCCESS ~ double_it(LOWINC) + PERASIAN + PERBLACK + PERHISP + PCTCHRT + \
PCTYRRND + PERMINTE*AVYRSEXP*AVSALK + PERSPENK*PTRATIO*PCTAF'
mod2 = smf.glm(formula=formula, data=dta, family=sm.families.Binomial()).fit()
mod2.summary()
print(mod1.params[1])
print(mod2.params[1] * 2)
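# Another transformation sketch (illustrative, not part of the example
# above): patsy formulas also accept inline expressions via I(), so the
# doubling can be written without defining a helper function.
formula3 = 'SUCCESS ~ I(2 * LOWINC) + PERASIAN + PERBLACK + PERHISP + PCTCHRT + \
PCTYRRND + PERMINTE*AVYRSEXP*AVSALK + PERSPENK*PTRATIO*PCTAF'
mod3 = smf.glm(formula=formula3, data=dta, family=sm.families.Binomial()).fit()
print(mod3.params[1] * 2)  # should match mod1.params[1] up to numerical noise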
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then, we fit the GLM model
Step2: Finally, we define a function to operate customized data transformation using the formula framework
Step3: As expected, the coefficient for double_it(LOWINC) in the second model is half the size of the LOWINC coefficient from the first model
|
8,775
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
with open('../sentiment_network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment_network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
from collections import Counter
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
# Filter out that review with 0 length
reviews_ints = [each for each in reviews_ints if len(each) > 0]
print(len(reviews_ints))
print(reviews_ints[1])
seq_len = 200
print(len(reviews))
features = np.zeros((len(reviews), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_len]
print(len(features))
features[:10,:100]
split_frac = 0.8
split_idx = int(len(features)*0.8)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
n_words = len(vocab)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='prob')
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed,
initial_state=initial_state)
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
test_acc = []
with tf.Session(graph=graph) as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))  # matches the save directory used above
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data preprocessing
Step2: Encoding the words
Step3: Encoding the labels
Step4: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Step5: Exercise
Step6: Training, Validation, Test
Step7: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step8: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Step9: Embedding
Step10: LSTM cell
Step11: RNN forward pass
Step12: Output
Step13: Validation accuracy
Step14: Batching
Step15: Training
Step16: Testing
|
8,776
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
# CODE HERE
np.zeros(10)
# CODE HERE
np.ones(10)
# CODE HERE
np.ones(10) * 5
# CODE HERE
np.arange(10,51)
# CODE HERE
np.arange(10,51,2)
# CODE HERE
np.arange(9).reshape(3,3)
# CODE HERE
np.eye(3)
# CODE HERE
np.random.rand(1)
# CODE HERE
np.random.randn(25)
np.arange(1,101).reshape(10,10) / 100
np.linspace(0,1,20)
# CODE HERE
mat = np.arange(1,26).reshape(5,5)
mat
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[2:,1:]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[3,4]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[:3,1:2]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[4,:]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[3:5,:]
# CODE HERE
mat.sum()
# CODE HERE
mat.std()
# CODE HERE
mat.sum(axis=0)
np.random.seed(101)
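# What the bonus is getting at (an assumption about the question): with
# the seed fixed above, the draw below returns the same value every run.
np.random.rand(1)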
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create an array of 10 zeros
Step2: Create an array of 10 ones
Step3: Create an array of 10 fives
Step4: Create an array of the integers from 10 to 50
Step5: Create an array of all the even integers from 10 to 50
Step6: Create a 3x3 matrix with values ranging from 0 to 8
Step7: Create a 3x3 identity matrix
Step8: Use NumPy to generate a random number between 0 and 1
Step9: Use NumPy to generate an array of 25 random numbers sampled from a standard normal distribution
Step10: Create the following matrix
Step11: Create an array of 20 linearly spaced points between 0 and 1
Step12: Numpy Indexing and Selection
Step13: Now do the following
Step14: Get the standard deviation of the values in mat
Step15: Get the sum of all the columns in mat
Step16: Bonus Question
|
8,777
|
<ASSISTANT_TASK:>
Python Code:
import random
from tenacity import retry
@retry
def do_something_unreliable():
# Pick a number between 0 and 10
if random.randint(0, 10) > 1:
# If it's greater than 1, raise an error
print("this number was bad...")
raise Exception
else:
print("...but this one is good! :D")
return
do_something_unreliable()
from tenacity import retry, stop_after_attempt
@retry(reraise=True, stop=stop_after_attempt(3))
def raise_my_exception():
print("Pass me the ball!")
raise Exception()
try:
raise_my_exception()
except Exception:
# Ran out of attempts
print("Whoa, time out, y'all...")
pass
from tenacity import retry, wait_fixed
@retry(wait=wait_fixed(3))
def wait_3_s():
print("Wait 3 seconds between retries")
raise Exception
wait_3_s()
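# The stop and wait conditions above can be combined in one decorator --
# a short sketch using the same tenacity APIs:
from tenacity import retry, stop_after_attempt, wait_fixed

@retry(stop=stop_after_attempt(3), wait=wait_fixed(1))
def stop_and_wait():
    print("Try, then wait 1 second, up to 3 attempts")
    raise Exception

try:
    stop_and_wait()
except Exception:
    # Without reraise=True, tenacity raises a RetryError (an Exception
    # subclass) once the stop condition is hit.
    print("Gave up after 3 attempts")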
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: - Stop After (X) Attempts
Step2: - Wait Between Attempts
|
8,778
|
<ASSISTANT_TASK:>
Python Code:
# Create a pymatgen Structure for NaCl
from pymatgen import Structure, Lattice
a = 5.6402 # NaCl lattice parameter
lattice = Lattice.from_parameters(a, a, a, 90.0, 90.0, 90.0)
lattice
structure = Structure.from_spacegroup(sg='Fm-3m', lattice=lattice,
species=['Na', 'Cl'],
coords=[[0,0,0], [0.5, 0, 0]])
structure
from vasppy.rdf import RadialDistributionFunction
indices_na = [i for i, site in enumerate(structure) if site.species_string == 'Na']
indices_cl = [i for i, site in enumerate(structure) if site.species_string == 'Cl']
print(indices_na)
print(indices_cl)
rdf_nana = RadialDistributionFunction(structures=[structure],
indices_i=indices_na)
rdf_clcl = RadialDistributionFunction(structures=[structure],
indices_i=indices_cl)
rdf_nacl = RadialDistributionFunction(structures=[structure],
indices_i=indices_na, indices_j=indices_cl)
import matplotlib.pyplot as plt
plt.plot(rdf_nana.r, rdf_nana.rdf, label='Na-Na')
plt.plot(rdf_clcl.r, rdf_clcl.rdf, label='Cl-Cl')
plt.plot(rdf_nacl.r, rdf_nacl.rdf, label='Na-Cl')
plt.legend()
plt.show()
plt.plot(rdf_nana.r, rdf_nana.smeared_rdf(), label='Na-Na') # default smearing of 0.1
plt.plot(rdf_clcl.r, rdf_clcl.smeared_rdf(sigma=0.050), label='Cl-Cl')
plt.plot(rdf_nacl.r, rdf_nacl.smeared_rdf(sigma=0.2), label='Na-Cl')
plt.legend()
plt.show()
rdf_nana = RadialDistributionFunction.from_species_strings(structures=[structure],
species_i='Na')
rdf_clcl = RadialDistributionFunction.from_species_strings(structures=[structure],
species_i='Cl')
rdf_nacl = RadialDistributionFunction.from_species_strings(structures=[structure],
species_i='Na', species_j='Cl')
plt.plot(rdf_nana.r, rdf_nana.smeared_rdf(), label='Na-Na')
plt.plot(rdf_clcl.r, rdf_clcl.smeared_rdf(), label='Cl-Cl')
plt.plot(rdf_nacl.r, rdf_nacl.smeared_rdf(), label='Na-Cl')
plt.legend()
plt.show()
from pymatgen.io.vasp import Xdatcar
xd = Xdatcar('data/NaCl_800K_MD_XDATCAR')
rdf_nana_800K = RadialDistributionFunction.from_species_strings(structures=xd.structures,
species_i='Na')
rdf_clcl_800K = RadialDistributionFunction.from_species_strings(structures=xd.structures,
species_i='Cl')
rdf_nacl_800K = RadialDistributionFunction.from_species_strings(structures=xd.structures,
species_i='Na', species_j='Cl')
plt.plot(rdf_nana_800K.r, rdf_nana_800K.smeared_rdf(), label='Na-Na')
plt.plot(rdf_clcl_800K.r, rdf_clcl_800K.smeared_rdf(), label='Cl-Cl')
plt.plot(rdf_nacl_800K.r, rdf_nacl_800K.smeared_rdf(), label='Na-Cl')
plt.legend()
plt.show()
struct_1 = struct_2 = struct_3 = structure
rdf_nacl_mc = RadialDistributionFunction(structures=[struct_1, struct_2, struct_3],
indices_i=indices_na, indices_j=indices_cl,
weights=[34, 27, 146])
# structures and weights lists must be equal lengths
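# Sketch: the weighted RDF is the same RadialDistributionFunction class,
# so it exposes the same r / smeared_rdf() interface used above.
plt.plot(rdf_nacl_mc.r, rdf_nacl_mc.smeared_rdf(), label='Na-Cl (weighted)')
plt.legend()
plt.show()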
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The default required arguments for creating a RadialDistributionFunction object are a list of pymatgen Structure objects, and the numerical indices of the atoms (or Site objects) that we want to compute the rdf between.
Step2: To compute a rdf between different species, we need to pass both indices_i and indices_j.
Step3: The Na and Cl sublattices are equivalent, so the Na–Na and Cl–Cl rdfs sit on top of each other.
Step4: Selecting atoms by their species strings
Step5: Calculating a RDF from a VASP XDATCAR
Step6: Weighted RDF calculations
|
8,779
|
<ASSISTANT_TASK:>
Python Code:
# Load libraries
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import MeanShift
# Load data
iris = datasets.load_iris()
X = iris.data
# Standarize features
scaler = StandardScaler()
X_std = scaler.fit_transform(X)
# Create meanshift object
clt = MeanShift(n_jobs=-1)
# Train model
model = clt.fit(X_std)
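# Inspecting the fit -- labels_ and cluster_centers_ are standard
# scikit-learn MeanShift attributes:
import numpy as np
print(np.unique(model.labels_))   # cluster assignments found
print(model.cluster_centers_)     # one centre per discovered cluster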
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Iris Flower Dataset
Step2: Standardize Features
Step3: Conduct Meanshift Clustering
|
8,780
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'niwa', 'sandbox-1', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
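# Illustrative only: a completed cell would pick one of the choices
# listed above, e.g.
# DOC.set_value("AGCM")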
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Fluorinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Fluorinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Representation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
8,781
|
<ASSISTANT_TASK:>
Python Code:
import pandas
import numpy as np
import scipy.optimize
import matplotlib.pyplot as plt
%matplotlib inline
data1 = pandas.read_csv("ex2data1.txt", header=None, names=['test1', 'test2', 'accepted'])
data1.head()
def plotData(data):
fig, ax = plt.subplots()
results_accepted = data[data.accepted == 1]
results_rejected = data[data.accepted == 0]
ax.scatter(results_accepted.test1, results_accepted.test2, marker='+', c='b', s=40)
ax.scatter(results_rejected.test1, results_rejected.test2, marker='o', c='r', s=30)
return ax
ax = plotData(data1)
ax.set_ylim([20, 130])
ax.legend(['Admitted', 'Not admitted'], loc='best')
ax.set_xlabel('Exam 1 score')
ax.set_ylabel('Exam 2 score')
X = data1[['test1', 'test2']].values
y = data1.accepted.values
m, n = X.shape
X = np.insert(X, 0, np.ones(len(X)), 1)
m, n
def sigmoid(z):
#SIGMOID Compute sigmoid function
# g = SIGMOID(z) computes the sigmoid of z.
# You need to return the following variables correctly
g = np.zeros(z.shape)
# ====================== YOUR CODE HERE ======================
# Instructions: Compute the sigmoid of each value of z (z can be a matrix,
# vector or scalar).
# =============================================================
return g
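# For reference, one possible completion of the exercise above (an
# illustrative sketch, not the official solution). np.exp broadcasts
# element-wise, so this works for scalars, vectors, and matrices alike.
def sigmoid_reference(z):
    return 1.0 / (1.0 + np.exp(-z))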
def cost(X, y, theta, lambda_=0):
#COSTFUNCTION Compute cost and gradient for logistic regression
# J = COSTFUNCTION(theta, X, y) computes the cost of using theta as the
# parameter for logistic regression and the gradient of the cost
# w.r.t. to the parameters.
# Initialize some useful values
m = len(y)
# You need to return the following variables correctly
J = 0
# ====================== YOUR CODE HERE ======================
# Instructions: Compute the cost of a particular choice of theta.
# You should set J to the cost.
# Compute the partial derivatives and set grad to the partial
# derivatives of the cost w.r.t. each parameter in theta
#
# =============================================================
return J
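# One way the cost could be completed (an illustrative sketch; it assumes
# the sigmoid_reference helper defined above, and the regularization term
# skips the intercept theta[0]). At theta = zeros this gives log(2) ~ 0.693.
def cost_reference(X, y, theta, lambda_=0):
    m = len(y)
    h = sigmoid_reference(X.dot(theta))
    J = (-y.dot(np.log(h)) - (1 - y).dot(np.log(1 - h))) / m
    return J + lambda_ * np.sum(theta[1:]**2) / (2 * m)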
def gradient(X, y, theta, lambda_=0):
# Initialize some useful values
m = len(y)
# You need to return the following variables correctly
grad = np.zeros(theta.shape)
# ====================== YOUR CODE HERE ======================
# =============================================================
return grad
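# A matching sketch for the gradient (again regularizing every parameter
# except the intercept):
def gradient_reference(X, y, theta, lambda_=0):
    m = len(y)
    h = sigmoid_reference(X.dot(theta))
    grad = X.T.dot(h - y) / m
    grad[1:] += lambda_ * theta[1:] / m
    return grad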
initial_theta = np.zeros(n + 1)
initial_theta.shape
cost(X, y, np.array(initial_theta))
gradient(X, y, np.array([0,0,0]))
def mycost(t):
return cost(X, y, t)
def mygrad(t):
return gradient(X, y, t)
optimal_theta = scipy.optimize.fmin_ncg(mycost,
initial_theta,
fprime=mygrad)
optimal_theta
ax = plotData(data1)
x_plot = np.array([np.max(X[:, 1]), np.min(X[:,1])])
y_plot = (-optimal_theta[0] - optimal_theta[1]*x_plot) / (optimal_theta[2])
ax.plot(x_plot, y_plot)
def predict(t, x):
#PREDICT Predict whether the label is 0 or 1 using learned logistic
#regression parameters theta
# p = PREDICT(theta, X) computes the predictions for X using a
# threshold at 0.5 (i.e., if sigmoid(theta'*x) >= 0.5, predict 1)
m = x.shape[0] # Number of training examples
# You need to return the following variables correctly
p = np.zeros(m)
# ====================== YOUR CODE HERE ======================
# Instructions: Complete the following code to make predictions using
# your learned logistic regression parameters.
# You should set p to a vector of 0's and 1's
#
# =========================================================================
return p
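# A possible completion of predict (an illustrative sketch): threshold the
# sigmoid output at 0.5.
def predict_reference(theta, X):
    return (sigmoid_reference(X.dot(theta)) >= 0.5).astype(int)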
# admission probability for a student with scores 45 and 85
# (a sketch of the intended cell; assumes sigmoid has been completed)
sigmoid(np.array([1, 45, 85]).dot(optimal_theta))
np.mean(predict(optimal_theta, X) == y)
data2 = pandas.read_csv("./ex2data2.txt", header=None, names=['test1', 'test2', 'accepted'])
data2.head()
ax = plotData(data2)
ax.legend(['y = 1', 'y = 0'], loc='best')
ax.set_xlabel('Microchip test 1')
ax.set_ylabel('Microchip test 2')
def mapFeature(x1, x2):
ret = np.array([x1**(i-j) * x2**j
for i in range(1,7) for j in range(i+1)
])
return np.insert(ret, 0, np.ones(len(x1)), 0).T
mapFeature(np.array([2,3]),np.array([3,2]))[:, :10]
X = mapFeature(data2.test1, data2.test2)
y = data2.accepted.values
initial_theta = np.zeros(X.shape[1])
X.shape, y.shape, initial_theta.shape
lambda_ = 0
cost(X, y, initial_theta, lambda_)
optimal_theta = scipy.optimize.fmin_bfgs(lambda t: cost(X, y, t, lambda_),
initial_theta,
lambda t: gradient(X, y, t, lambda_))
np.mean(predict(optimal_theta, X) == y)
optimal_theta
contour_x = np.linspace(-1, 1.5)
contour_y = np.linspace(-1, 1.5)
def calc_z(x, y):
return mapFeature(np.array([x]), np.array([y])).dot(optimal_theta)
z = np.zeros((len(contour_x), len(contour_y)))
for i, c_x in enumerate(contour_x):
for j, c_y in enumerate(contour_y):
z[i,j] = calc_z(c_x, c_y)[0]
ax = plotData(data2)
ax.contour(contour_x, contour_y, z, levels=[0])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 1
Step2: Plotting data with + indicating (y = 1) examples and o
Step3: Part 2
Step4: The cost at initial theta (zeros) should be about 0.693.
Step5: The gradient at initial theta should be [-0.1, -12.01, -11.26].
Step6: Part 3
Step7: Value of theta that minimizes the cost function
Step8: We plot the decision boundary.
Step9: Part 4
Step10: Let's predict the admission probability of a student with scores 45 and 85
Step11: Training set accuracy
Step12: Part 2
Step13: Note that mapFeature also adds a column of ones for us, so the intercept term is handled automatically.
Step14: The cost at the initial theta is
Step15: Part 2
Step16: At the optimal theta value, the accuracy is
Step17: The decision boundary
|
8,782
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import pymysql
db = pymysql.connect(
"db.fastcamp.us",
"root",
"dkstncks",
"sakila",
charset='utf8',
)
film_df = pd.read_sql("SELECT * FROM film;", db)
film_df.head(1)
SQL_QUERY = """
SELECT *
FROM film
WHERE
(release_year = 2006 OR release_year = 2007)
AND (rating = "PG" OR rating = "G")
;
"""
pd.read_sql(SQL_QUERY, db)
SQL_QUERY = """
SELECT COUNT(*)
FROM film
WHERE
release_year IN (2006, 2007)
AND rating IN ("PG", "G")
;
"""
pd.read_sql(SQL_QUERY, db)
is_pg_or_g = film_df.rating.isin(["PG", "G"])
is_2006_or_2007 = film_df.release_year.isin([2006, 2007])
film_df[is_pg_or_g & is_2006_or_2007]
film_df[is_pg_or_g & is_2006_or_2007].count()
film_df.head(1)
SQL_QUERY = """
SELECT title, description, rental_rate
FROM film
WHERE
description LIKE "%Boring%"
AND rental_rate = 0.99
;
"""
pd.read_sql(SQL_QUERY, db)
is_099 = film_df.rental_rate == 0.99
is_boring = film_df.description.str.contains("Boring")
film_df[is_099 & is_boring].count()
film_df.rental_rate.unique()
SQL_QUERY = """
SELECT
DISTINCT rental_rate
FROM film
ORDER BY rental_rate
;
"""
pd.read_sql(SQL_QUERY, db)
SQL_QUERY = """
SELECT rating, COUNT(*) "total_films", AVG(rental_rate) "average_rental_rate"
FROM film
GROUP BY rating
ORDER BY average_rental_rate
;
"""
pd.read_sql(SQL_QUERY, db)
SQL_QUERY = """
SELECT
rating,
COUNT(*) "total_films",
AVG(rental_rate) "average_rental_rate"
FROM film
GROUP BY
1
ORDER BY 3
;
"""
pd.read_sql(SQL_QUERY, db)
film_df.groupby("rating").agg({
"film_id": {"total films": np.size},
"rental_rate": {"average_rental_rate": np.mean},
})
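# Note: the nested renaming dict passed to .agg() above was deprecated in
# later pandas releases; a roughly equivalent modern spelling (a sketch,
# assuming pandas >= 0.25 named aggregation) would be:
film_df.groupby("rating").agg(
    total_films=("film_id", "size"),
    average_rental_rate=("rental_rate", "mean"),
)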
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 2T_데이터 분석을 위한 SQL 실습 (1) - WHERE IN, LIKE, JOIN
Step3: pandas
Step5: From the film table, find the films whose description contains the text "Boring" and whose rental rate is 0.99
Step7: What unique values does rental_rate contain? 0.99, 1.99, 2.99...
Step10: From the film table, group by rating and print the number of films and the average rental rate for each rating
|
8,783
|
<ASSISTANT_TASK:>
Python Code:
!pip install -U tensorflow
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
import math
from mnist_viz import *
!pip install -U okpy
from client.api.notebook import Notebook
ok = Notebook('lab14.ok')
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
print("Training set shape: {}\nValidation set shape: {}\nTest set shape: {}"
.format(mnist.train.images.shape, mnist.validation.images.shape, mnist.test.images.shape))
image_size = 28
num_features = image_size**2
num_labels = mnist.train.labels.shape[1]
# Indices of some example images from the training set:
examples_to_show = np.array([0, 5100, 10200, 15300, 20400, 25500, 30600, 35700, 40800, 45900])
show_flat_images(mnist.train.images[examples_to_show], ncols=5,
title="Some examples from the training set",
image_titles=examples_to_show)
print("Labels for printed examples:\n{}".format(mnist.train.labels[examples_to_show]))
# Variables for our images and labels:
x = tf.placeholder(tf.float32, [None, num_features])
y_ = tf.placeholder(tf.float32, [None, ...])
# Variables for parameters:
theta = tf.Variable(tf.zeros([..., num_labels]))
b = tf.Variable(tf.zeros([num_labels]))
# Variable for the output of our classifier (not a hard
# classification, but a number between 0 and 1, the result
# of the softmax function):
y = tf.nn.softmax(tf.matmul(..., ...) + ...)
# Define the regularization penalty. We didn't do this
# last time, but it's important.
alpha = 4e-4
l2_regularizer = tf.contrib.layers.l2_regularizer(scale=alpha, scope=None)
regularization_penalty = tf.contrib.layers.apply_regularization(l2_regularizer, [theta])
# Variable for the loss suffered for each training example.
# This is the cross-entropy loss, which is the negative log
# likelihood our model assigns to the true labels, if we
# think of the output of the softmax function as a
# probability.
loss = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))\
+ regularization_penalty
# Operator to perform a step of gradient descent on our loss:
step_size = 1.0
train_step = tf.train.GradientDescentOptimizer(step_size).minimize(...)
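# One plausible way to fill in the blanks above (a sketch -- the lab
# intentionally leaves them for you to complete):
# y_ = tf.placeholder(tf.float32, [None, num_labels])
# theta = tf.Variable(tf.zeros([num_features, num_labels]))
# y = tf.nn.softmax(tf.matmul(x, theta) + b)
# train_step = tf.train.GradientDescentOptimizer(step_size).minimize(loss)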
def create_session_and_optimize(train_step, num_iterations=2000, batch_size=100):
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
for _ in range(num_iterations):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
return sess
sess = create_session_and_optimize(train_step)
model_classifications = tf.argmax(y,1)
true_classes = tf.argmax(y_,1)
correct_prediction = tf.equal(model_classifications, true_classes)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("Accuracy on the validation set:")
print(sess.run(accuracy, feed_dict={x: mnist.validation.images, y_: mnist.validation.labels}))
print("Model classifications and true classes:")
print(sess.run([model_classifications, true_classes], feed_dict={x: mnist.validation.images, y_: mnist.validation.labels}))
theta_2 = ...
b_2 = ...
theta_2, b_2
...
display_mistakes(mnist, sess, x, y_, theta, model_classifications, true_classes, correct_prediction)
def create_model_4(alpha=1e-6, num_filters_per_label=2, step_size=1.0):
# Variables for parameters:
filters = tf.Variable(tf.random_normal([num_features, num_labels, num_filters_per_label], stddev=0.4))
b = tf.Variable(tf.zeros([num_labels, num_filters_per_label]))
# This time the classifier has several intermediate steps.
filters_out = tf.matmul(x, tf.reshape(filters, [-1, num_labels*num_filters_per_label]))
# The score for each class, a matrix with dimensions [N, 10].
combined_score = tf.reduce_max(tf.reshape(filters_out, [-1, num_labels, num_filters_per_label]) + b, reduction_indices=[2])
# The output of the classifier, a matrix with dimensions [N, 10].
# The result of applying the softmax function to the score for
# each class.
y = ...
# The regularization penalty. We regularize all the filters,
# but not the bias term b.
l2_regularizer = tf.contrib.layers.l2_regularizer(scale=alpha, scope=None)
regularization_penalty = tf.contrib.layers.apply_regularization(l2_regularizer, [filters])
# The same cross-entropy loss as in our first model. Be sure to
# include the regularization penalty. Check the code for the
# first model, but think about whether there should be any
# differences.
loss = ...
# Operator to perform a step of gradient descent on our loss.
# Be sure to use the variable step_size passed as an argument
# to this function.
train_step = ...
return (filters, y, train_step)
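# Plausible completions for the blanks in create_model_4 (a sketch, not
# the graded solution):
# y = tf.nn.softmax(combined_score)
# loss = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1])) \
#        + regularization_penalty
# train_step = tf.train.GradientDescentOptimizer(step_size).minimize(loss)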
filters_4, output_4, train_step_4 = create_model_4(alpha=4e-4, step_size=2)
sess_4 = train_and_display(mnist, x, y_, filters_4, output_4, train_step_4, num_iterations=20000)
def compute_confusion_matrix(classifications, true_classes, num_classes):
"""Compute the confusion matrix for a given list of classifications
that were computed by a classifier on some dataset.
A confusion matrix tells us how often the classifier "confused" each
class for each other class. So it contains one number for each
ordered pair of classes. If the documentation in this function isn't
clear enough, see here for a tutorial on confusion matrices:
http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
Args:
classifications (ndarray): A 1D array of integers, each between
0 and num_classes-1. These are the classifications produced by a
model on some dataset. Element i is the classification for example
i in the dataset.
true_classes (ndarray): A 1D array of integers, each between
0 and num_classes-1. These are the true classes of the examples in
some dataset. Element i is the true class for example i in the
dataset.
num_classes (int): The number of classes. Each class is a number
between 0 and num_classes-1.
Returns:
(ndarray): A 2-dimensional array of numbers. Element [i,j] is the
proportion of examples in the dataset that had true class i and
were assigned class j by the classifier.
"""
# This is just a recommended skeleton; you can delete it.
...
counts = ...
...
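# One possible implementation of the skeleton above (an illustrative
# sketch, not the official solution): count co-occurrences of true and
# assigned classes, then normalize the counts to proportions.
def compute_confusion_matrix_reference(classifications, true_classes, num_classes):
    counts = np.zeros((num_classes, num_classes))
    for true_c, assigned_c in zip(true_classes, classifications):
        counts[true_c, assigned_c] += 1
    return counts / len(true_classes)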
# Run this cell after you've written compute_confusion_matrix.
# You may find that there are so few errors that it's hard to
# see them.
display_confusion_matrix(compute_confusion_matrix, mnist, sess_4, x, y_, output_4, num_labels)
this_was_the_true_class = ...
but_this_was_the_classification = ...
i_finished_the_lab = False
_ = ok.grade('qcompleted')
_ = ok.backup()
_ = ok.submit()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Today's lab is a reprise of TensorFlow and a brief foray into a more advanced topic in machine learning
Step2: Run the next cell to display some of the digit images.
Step3: Now we can define a pipeline to classify these images.
Step4: Run the next cell to run several steps of gradient descent (actually, batch stochastic gradient descent) on the training set.
Step5: Now run the next cell to apply your model to the validation images.
Step6: To compute the outputs of a model, we pass the output variable(s) as arguments to the method sess.run. The cell above does this twice, so check that for the syntax. The code cell before that did it thousands of times, and each run of train_step in that cell had the side effect of updating theta and b. (The state of each variable is held in sess.)
Step7: Question 3
Step9: You should find that the black and grey parts of the first image look a little bit like a 0 and the second a little bit like a 1! The resemblance is far from perfect, especially for other numbers; we'll worry about that next.
Step10: A big problem with the linear classifier is that it only allows a single filter for each number. That means the single filter for 7 has to match a 7 at the left or right side of the image, for example. And it also has to match a 7 that's very skinny or a 7 that's tilted to the right a bit. The result is that the filters match some common features of each number, but they look generally fuzzy.
Step11: Run the next cell to train this new model.
Step13: Another way to look at errors
Step14: Question 6
Step15: Submitting your assignment
|
8,784
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
import numpy as np
import pandas as pd
from scipy.stats import multivariate_normal, wishart
from itertools import product, starmap
import thinkbayes2
import thinkplot
%matplotlib inline
a = np.array([122.8, 115.5, 102.5, 84.7, 154.2, 83.7,
122.1, 117.6, 98.1, 111.2, 80.3, 110.0,
117.6, 100.3, 107.8, 60.2])
b = np.array([82.6, 99.1, 74.6, 51.9, 62.3, 67.2,
82.4, 97.2, 68.9, 77.9, 81.5, 87.4,
92.4, 80.8, 74.7, 42.1])
n = len(a)
n
thinkplot.Scatter(a, b, alpha=0.7)
X = np.array([a, b])
x̄ = X.mean(axis=1)
print(x̄)
std = X.std(axis=1)
print(std)
S = np.cov(X)
print(S)
corrcoef = np.corrcoef(a, b)
print(corrcoef)
def make_array(center, stderr, m=11, factor=3):
return np.linspace(center-factor*stderr,
center+factor*stderr, m)
μ_a = x̄[0]
μ_b = x̄[1]
σ_a = std[0]
σ_b = std[1]
ρ = corrcoef[0][1]
μ_a_array = make_array(μ_a, σ_a / np.sqrt(n))
μ_b_array = make_array(μ_b, σ_b / np.sqrt(n))
σ_a_array = make_array(σ_a, σ_a / np.sqrt(2 * (n-1)))
σ_b_array = make_array(σ_b, σ_b / np.sqrt(2 * (n-1)))
#ρ_array = make_array(ρ, np.sqrt((1 - ρ**2) / (n-2)))
ρ_array = make_array(ρ, 0.15)
def min_max(array):
return min(array), max(array)
print(min_max(μ_a_array))
print(min_max(μ_b_array))
print(min_max(σ_a_array))
print(min_max(σ_b_array))
print(min_max(ρ_array))
class Params:
def __init__(self, μ, Σ):
self.μ = μ
self.Σ = Σ
def __lt__(self, other):
return (self.μ, self.Σ) < (other.μ, other.Σ)
def pack(μ_a, μ_b, σ_a, σ_b, ρ):
μ = np.array([μ_a, μ_b])
cross = ρ * σ_a * σ_b
Σ = np.array([[σ_a**2, cross],
[cross, σ_b**2]])
return Params(μ, Σ)
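# Quick sanity check of pack() using the sample statistics from above
# (the covariance matrix it builds should match S, up to the gridding of ρ):
params_check = pack(μ_a, μ_b, σ_a, σ_b, ρ)
params_check.μ, params_check.Σ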
mesh = product(μ_a_array, μ_b_array,
σ_a_array, σ_b_array, ρ_array)
mesh = starmap(pack, mesh)
class MultiNorm(thinkbayes2.Suite):
def Likelihood(self, data, hypo):
x̄, S, n = data
dist_x̄ = multivariate_normal(hypo.μ, hypo.Σ/n)
dist_S = wishart(n-1, hypo.Σ)
return dist_x̄.pdf(x̄) * dist_S.pdf((n-1) * S)
suite = MultiNorm(mesh)
%time suite.Update((x̄, S, n))
sample = suite.MakeCdf().Sample(300)
def generate(μ, Σ, sample_size):
return np.random.multivariate_normal(μ, Σ, sample_size)
# run an example using sample stats
fake_X = generate(x̄, S, 300)
def conditional_probs(sample):
df = pd.DataFrame(sample, columns=['a', 'b'])
pA = df[(91.9 <= df.a) & (df.a <= 158.3)]
pB = df[(56.4 <= df.b) & (df.b <= 100)]
pBoth = pA.index.intersection(pB.index)
pAgivenB = len(pBoth) / len(pB)
pBgivenA = len(pBoth) / len(pA)
return pAgivenB, pBgivenA
conditional_probs(fake_X)
def make_predictive_distributions(sample):
pmf = thinkbayes2.Joint()
for params in sample:
fake_X = generate(params.μ, params.Σ, 300)
probs = conditional_probs(fake_X)
pmf[probs] += 1
pmf.Normalize()
return pmf
predictive = make_predictive_distributions(sample)
thinkplot.Cdf(predictive.Marginal(0).MakeCdf())
predictive.Marginal(0).Mean()
thinkplot.Cdf(predictive.Marginal(1).MakeCdf())
predictive.Marginal(1).Mean()
def unpack(μ, Σ):
μ_a = μ[0]
μ_b = μ[1]
σ_a = np.sqrt(Σ[0, 0])
σ_b = np.sqrt(Σ[1, 1])
ρ = Σ[0, 1] / σ_a / σ_b
return μ_a, μ_b, σ_a, σ_b, ρ
def make_marginals(suite):
joint = thinkbayes2.Joint()
for params, prob in suite.Items():
t = unpack(params.μ, params.Σ)
joint[t] = prob
return joint
marginals = make_marginals(suite)
thinkplot.Cdf(marginals.Marginal(0).MakeCdf())
thinkplot.Cdf(marginals.Marginal(1).MakeCdf());
thinkplot.Cdf(marginals.Marginal(2).MakeCdf())
thinkplot.Cdf(marginals.Marginal(3).MakeCdf());
thinkplot.Cdf(marginals.Marginal(4).MakeCdf());
raise Exception("YouShallNotPass")
def estimate(X):
return X.mean(axis=1), np.cov(X)
estimate(generate(x̄, S, n).transpose())
def z_prime(r):
return 0.5 * np.log((1+r) / (1-r))
def sampling_distributions(stats, cov, n):
sig1, sig2, _ = std_rho(cov)
array = np.zeros((len(stats), 8))
for i, (x̄, S) in enumerate(stats):
array[i, 0:2] = x̄
s1, s2, r = std_rho(S)
array[i, 2] = s1
array[i, 3] = s2
array[i, 4] = r
array[i, 5] = (n-1) * S[0, 0] / cov[0, 0]
array[i, 6] = (n-1) * S[1, 1] / cov[1, 1]
array[i, 7] = z_prime(r)
return array
dists = sampling_distributions(stats, cov, n)
cdf0 = thinkbayes2.Cdf(dists[:, 0])
cdf1 = thinkbayes2.Cdf(dists[:, 1])
thinkplot.Cdfs([cdf0, cdf1])
cdf2 = thinkbayes2.Cdf(dists[:, 2])
cdf3 = thinkbayes2.Cdf(dists[:, 3])
thinkplot.Cdfs([cdf2, cdf3])
cdf4 = thinkbayes2.Cdf(dists[:, 4])
thinkplot.Cdfs([cdf4])
cdf5 = thinkbayes2.Cdf(dists[:, 5])
cdf6 = thinkbayes2.Cdf(dists[:, 6])
thinkplot.Cdfs([cdf5, cdf6])
cdf7 = thinkbayes2.Cdf(dists[:, 7])
thinkplot.Cdfs([cdf7])
def sampling_dist_mean(i, mean, cov, cdf):
sampling_dist = scipy.stats.norm(loc=mean[i], scale=np.sqrt(cov[i, i]/n))
xs = cdf.xs
ys = sampling_dist.cdf(xs)
thinkplot.plot(xs, ys)
thinkplot.Cdf(cdf)
sampling_dist_mean(0, mean, cov, cdf0)
sampling_dist_mean(1, mean, cov, cdf1)
def sampling_dist_std(i, mean, cov, cdf):
sampling_dist = scipy.stats.chi2(df=n)
xs = cdf.xs
ys = sampling_dist.cdf(xs)
thinkplot.plot(xs, ys)
thinkplot.Cdf(cdf)
sampling_dist_std(5, mean, cov, cdf5)
sampling_dist_std(6, mean, cov, cdf6)
def sampling_dist_r(i, mean, cov, cdf):
_, _, rho = std_rho(cov)
sampling_dist = scipy.stats.norm(loc=z_prime(rho), scale=1/np.sqrt(n-3))
xs = cdf.xs
ys = sampling_dist.cdf(xs)
thinkplot.plot(xs, ys)
thinkplot.Cdf(cdf)
sampling_dist_r(7, mean, cov, cdf7)
pdf_X = scipy.stats.multivariate_normal(mean, cov/n)
pdf_X.pdf(mean) - pdf_X.pdf(mean-0.1)
def make_multi_norm_marginal(index, mean, cov, n):
sigmas = std_rho(cov)
width = 6 * sigmas[index] / np.sqrt(n)
xs = np.linspace(mean[index]-width/2, mean[index]+width/2, 101)
array = np.tile(mean, (len(xs), 1))
array[:, index] = xs
pdf_X = scipy.stats.multivariate_normal(mean, cov/n)
ys = pdf_X.pdf(array)
pmf = thinkbayes2.Pmf(dict(zip(xs, ys)))
pmf.Normalize()
return pmf
pmf = make_multi_norm_marginal(0, mean, cov, n)
thinkplot.Pdf(pmf)
pmf = make_multi_norm_marginal(1, mean, cov, n)
thinkplot.Pdf(pmf)
def generate_statistics(mean, cov, n, iters):
return [estimate(generate(mean, cov, n)) for _ in range(iters)]
stats = generate_statistics(mean, cov, n, 1000)
s0 = np.zeros(len(stats))
s1 = np.zeros(len(stats))
for i, (x̄, S) in enumerate(stats):
sigmas = std_rho(S)
s0[i] = sigmas[0]
s1[i] = sigmas[1]
thinkplot.Scatter(s0, s1)
s0 = np.zeros(len(stats))
s1 = np.zeros(len(stats))
for i, (x̄, S) in enumerate(stats):
s0[i] = (n-1) * S[0][0]
s1[i] = (n-1) * S[1][1]
thinkplot.Scatter(s0, s1)
pdf_S = wishart(df=n-1, scale=cov)
stats = pdf_S.rvs(1000)
s0 = np.zeros(len(stats))
s1 = np.zeros(len(stats))
for i, S in enumerate(stats):
s0[i] = S[0][0]
s1[i] = S[1][1]
thinkplot.Scatter(s0, s1)
sigmas = std_rho(cov)
width = 6 * sigmas[0] / np.sqrt(2 * (n-1))
X = np.linspace(sigmas[0]-width/2, sigmas[0]+width/2, 101)
width = 6 * sigmas[1] / np.sqrt(2 * (n-1))
Y = np.linspace(sigmas[1]-width/2, sigmas[1]+width/2, 101)
Z = np.zeros((len(X), len(Y)))
pdf_S = wishart(df=n-1, scale=cov)
for i, x in enumerate(X):
for j, y in enumerate(Y):
S = cov.copy()
S[0, 0] = x**2
S[1, 1] = y**2
try:
density = pdf_S.pdf((n-1) * S)
Z[i, j] = density
except:
Z[i, j] = np.nan
thinkplot.Scatter(s0, s1)
plt.contour(X, Y, Z)
pmf_0 = thinkbayes2.Pmf()
for i, (x̄, S) in enumerate(stats):
sig1, sig2, rho = std_rho(S)
density = pdf_S.pdf((n-1) * S)
pmf_0[sig1] += 1
thinkplot.Cdf(pmf_0.MakeCdf())
pdf_S = wishart(df=n-1, scale=cov)
pdf_S.pdf(cov)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This notebook contains a solution to a problem posted on Reddit; here's the original statement of the problem
Step2: And make a scatter plot
Step3: It looks like modeling this data with a bi-variate normal distribution is a reasonable choice.
Step4: And compute the sample mean
Step5: Sample standard deviation
Step6: Covariance matrix
Step7: And correlation coefficient
Step8: Now, let's start thinking about this as a Bayesian estimation problem.
Step9: Although the mesh is constructed in 5 dimensions, for doing the Bayesian update, I want to express the parameters in terms of a vector of means, μ, and a covariance matrix, Σ.
Step10: Now we can make a prior distribution. First, mesh is the Cartesian product of the parameter arrays. Since there are 5 dimensions with 11 points each, the total number of points is 11**5 = 161,051.
Step11: The result is an iterator. We can use itertools.starmap to apply pack to each of the points in the mesh
Step12: Now we need an object to encapsulate the mesh and perform the Bayesian update. MultiNorm represents a map from each Param object to its probability.
Step13: Now we can initialize the suite with the mesh.
Step14: And update it using the data (the return value is the total probability of the data, aka the normalizing constant). This takes a minute or two on my machine (with m=11).
Step15: Now to answer the original question, about the conditional probabilities of A and B, we can either enumerate the parameters in the posterior or draw a sample from the posterior.
Step16: For a given pair of values, μ and Σ, in the sample, we can generate a simulated dataset.
Step17: The following function takes a sample of $a$ and $b$ and computes the conditional probabilities P(A|B) and P(B|A)
Step18: Now we can loop through the sample of parameters, generate simulated data for each, and compute the conditional probabilities
Step19: Then pull out the posterior predictive marginal distribution of P(A|B), and print the posterior predictive mean
Step20: And then pull out the posterior predictive marginal distribution of P(B|A), with the posterior predictive mean
Step21: We don't really care about the posterior distributions of the parameters, but it's good to take a look and make sure they are not crazy.
Step22: So we can iterate through the posterior distribution and make a joint posterior distribution of the parameters
Step23: And here are the posterior marginal distributions for μ_a and μ_b
Step24: And here are the posterior marginal distributions for σ_a and σ_b
Step25: Finally, the posterior marginal distribution for the correlation coefficient, ρ
Step26: You can ignore everything after this, which is my development code and some checks.
|
8,785
|
<ASSISTANT_TASK:>
Python Code:
# setup SymPy
from sympy import *
x, y, z, t = symbols('x y z t')
init_printing()
# a vector is a special type of matrix (an n-vector is either a nx1 or a 1xn matrix)
Vector = Matrix # define alias Vector so I don't have to explain this during video
# setup plotting
%matplotlib inline
import matplotlib.pyplot as mpl
from plot_helpers import plot_vec, plot_vecs, plot_line, plot_plane, autoscale_arrows
# define two vectors
u = Vector([1,1])
v = Vector([1,-1])
u
v
plot_vecs(u, v)
autoscale_arrows()
# graphical
plot_vecs(u,v)
plot_vec(v, at=u, color='b')
plot_vec(u+v, color='r')
autoscale_arrows()
# algebraic
u+v
u.norm()
uhat = u/u.norm()
plot_vecs(u, uhat)
uhat
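# sanity check: a unit vector has length exactly 1
uhat.norm()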
u = Vector([2,2])
v = Vector([3,0])
plot_vecs(u,v)
autoscale_arrows()
u.dot(v)
# split the vector u into two parts:
u_parallel_to_v = Vector([2,0])
u_perp_to_v = Vector([0,2])
plot_vecs(u, v, u_parallel_to_v, u_perp_to_v)
autoscale_arrows()
u == u_parallel_to_v + u_perp_to_v
# the dot product uses only the part of u that is parallel to v
u.dot(v) == u_parallel_to_v.dot(v) == u_parallel_to_v.norm()*v.norm()
# two vectors that are perpendicular have zero dot product
u_perp_to_v.dot(v)
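# equivalently, u.v = |u||v|cos(θ); the angle between u=[2,2] and v=[3,0]
# is 45°, so this reproduces u.dot(v) == 6
u.norm()*v.norm()*cos(pi/4)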
def proj(v, d):
"""Computes the projection of vector `v` onto direction `d`."""
return v.dot( d/d.norm() )*( d/d.norm() )
v = Vector([2,2])
d = Vector([3,0])
proj_v_on_d = proj(v,d)
plot_vecs(d, v, proj_v_on_d)
autoscale_arrows()
# The line with equation y = x can also be written as a parametric equation
# [x,y] = [0,0] + s*[1,1], where d = [1,1] is called the direction vector of the line
d = Vector([1,1])
plot_line(d,[0,0])
# we want a function that computes the projection onto the line with equation y = x for any vector
def P(vec):
"""Compute the projection of vector `vec` onto line y=x."""
return proj(vec, d)
v = Vector([5,0])
plot_line(d,[0,0])
plot_vecs(v, P(v))
P(v)
ihat = Vector([1,0])
jhat = Vector([0,1])
Pihat = P(ihat)
Pjhat = P(jhat)
Pihat, Pjhat
def P2(vec):
"""Compute the projection of vector `vec` onto line y=x."""
return vec[0]*Pihat + vec[1]*Pjhat
v = Vector([5,0])
plot_line(d,[0,0])
plot_vecs(v, P2(v))
M_P = Matrix([[1,1],
[1,1]])/2
M_P
def P3(vec):
"""Compute the projection of vector `vec` onto the line y=x."""
return M_P*vec
v = Vector([4,0])
plot_line(d, [0,0])
plot_vecs(v, P3(v))
M_P.shape
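# a defining property of projection matrices: they are idempotent,
# so projecting twice is the same as projecting once
M_P*M_P == M_P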
A = Matrix([[1,2],
[3,4],
[5,6]])
A
A.shape
a_11, a_12, a_21, a_22, a_31, a_32 = symbols('a_11 a_12 a_21 a_22 a_31 a_32')
x_1, x_2 = symbols('x_1 x_2')
A = Matrix([
[a_11, a_12],
[a_21, a_22],
[a_31, a_32]])
x = Vector([x_1,x_2])
A*x
b_11, b_12, b_21, b_22 = symbols('b_11 b_12 b_21 b_22')
B = Matrix([[b_11, b_12],
[b_21, b_22]])
A*B
# (AB)_ij = dot product of ith row of A with jth col of B
(A*B)[2,1] == A[2,:].dot( B[:,1])
A*(B*x)
expand( A*(B*x) ) == expand( (A*B)*x )
# analogy with ordinary functions...
x = symbols('x')
def f(x):
return 2*x
def g(x):
return 3*x
f(g(x))
def h(x):
return 6*x
h(x)
A = Matrix([[1,2],
[3,9]])
A.inv()
A.inv()*A
A = Matrix([[1,2],
[3,9]])
b = Vector([5,21])
x = A.inv()*b
x
# verify A*x == b
A*x
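# alternatively, let SymPy solve the linear system directly,
# which avoids forming the inverse explicitly
A.solve(b)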
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Prerequisites
Step2: Vector addition
Step3: Vector length $\|\vec{u}\|$
Step4: Unit-length vectors $\hat{u}$
Step5: Dot product
Step6: Intuition
Step8: Projections
Step9: Projections play an important role in physics. For example, when solving a two dimensional projectile problem we often decompose vector quantities like forces $\vec{F}$, velocities $\vec{v}$, and momenta $\vec{p}$ into their $x$- and $y$-components
Step11: Take 1
Step13: Vector functions
Step15: Take 3
Step16: Equivalence relationship between linear transformstions $T$ and matrices $M_T$
Step17: Matrix operations
Step18: Matrix-matrix product
Step19: The matrix-matrix product implements composition of linear transformations
Step20: Matrix inverse
Step21: Matrix equations
|
8,786
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import gym
import numpy as np
import math
import reinforcement_learning as rl
# TensorFlow
tf.__version__
# OpenAI Gym
gym.__version__
env_name = 'Breakout-v0'
# env_name = 'SpaceInvaders-v0'
rl.checkpoint_base_dir = 'checkpoints_tutorial16/'
rl.update_paths(env_name=env_name)
# rl.maybe_download_checkpoint(env_name=env_name)
agent = rl.Agent(env_name=env_name,
training=True,
render=True,
use_logging=False)
model = agent.model
replay_memory = agent.replay_memory
agent.run(num_episodes=1)
log_q_values = rl.LogQValues()
log_reward = rl.LogReward()
log_q_values.read()
log_reward.read()
plt.plot(log_reward.count_states, log_reward.episode, label='Episode Reward')
plt.plot(log_reward.count_states, log_reward.mean, label='Mean of 30 episodes')
plt.xlabel('State-Count for Game Environment')
plt.legend()
plt.show()
plt.plot(log_q_values.count_states, log_q_values.mean, label='Q-Value Mean')
plt.xlabel('State-Count for Game Environment')
plt.legend()
plt.show()
agent.epsilon_greedy.epsilon_testing
agent.training = False
agent.reset_episode_rewards()
agent.render = True
agent.run(num_episodes=1)
agent.reset_episode_rewards()
agent.render = False
agent.run(num_episodes=30)
rewards = agent.episode_rewards
print("Rewards for {0} episodes:".format(len(rewards)))
print("- Min: ", np.min(rewards))
print("- Mean: ", np.mean(rewards))
print("- Max: ", np.max(rewards))
print("- Stdev: ", np.std(rewards))
_ = plt.hist(rewards, bins=30)
def print_q_values(idx):
"""Print Q-values and actions from the replay-memory at the given index."""
# Get the Q-values and action from the replay-memory.
q_values = replay_memory.q_values[idx]
action = replay_memory.actions[idx]
print("Action: Q-Value:")
print("====================")
# Print all the actions and their Q-values.
for i, q_value in enumerate(q_values):
# Used to display which action was taken.
if i == action:
action_taken = "(Action Taken)"
else:
action_taken = ""
# Text-name of the action.
action_name = agent.get_action_name(i)
print("{0:12}{1:.3f} {2}".format(action_name, q_value,
action_taken))
# Newline.
print()
def plot_state(idx, print_q=True):
"""Plot the state in the replay-memory with the given index."""
# Get the state from the replay-memory.
state = replay_memory.states[idx]
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(1, 2)
# Plot the image from the game-environment.
ax = axes.flat[0]
ax.imshow(state[:, :, 0], vmin=0, vmax=255,
interpolation='lanczos', cmap='gray')
# Plot the motion-trace.
ax = axes.flat[1]
ax.imshow(state[:, :, 1], vmin=0, vmax=255,
interpolation='lanczos', cmap='gray')
# This is necessary if we show more than one plot in a single Notebook cell.
plt.show()
# Print the Q-values.
if print_q:
print_q_values(idx=idx)
num_used = replay_memory.num_used
num_used
q_values = replay_memory.q_values[0:num_used, :]
q_values_min = q_values.min(axis=1)
q_values_max = q_values.max(axis=1)
q_values_dif = q_values_max - q_values_min
idx = np.argmax(replay_memory.rewards)
idx
for i in range(-5, 3):
plot_state(idx=idx+i)
idx = np.argmax(q_values_max)
idx
for i in range(0, 5):
plot_state(idx=idx+i)
idx = np.argmax(replay_memory.end_life)
idx
for i in range(-10, 0):
plot_state(idx=idx+i)
idx = np.argmax(q_values_dif)
idx
for i in range(0, 5):
plot_state(idx=idx+i)
idx = np.argmin(q_values_dif)
idx
for i in range(0, 5):
plot_state(idx=idx+i)
def plot_layer_output(model, layer_name, state_index, inverse_cmap=False):
"""Plot the output of a convolutional layer.
:param model: An instance of the NeuralNetwork-class.
:param layer_name: Name of the convolutional layer.
:param state_index: Index into the replay-memory for a state that
will be input to the Neural Network.
:param inverse_cmap: Boolean whether to invert the color-map.
"""
# Get the given state-array from the replay-memory.
state = replay_memory.states[state_index]
# Get the output tensor for the given layer inside the TensorFlow graph.
# This is not the value-contents but merely a reference to the tensor.
layer_tensor = model.get_layer_tensor(layer_name=layer_name)
# Get the actual value of the tensor by feeding the state-data
# to the TensorFlow graph and calculating the value of the tensor.
values = model.get_tensor_value(tensor=layer_tensor, state=state)
# Number of image channels output by the convolutional layer.
num_images = values.shape[3]
# Number of grid-cells to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_images))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids, figsize=(10, 10))
print("Dim. of each image:", values.shape)
if inverse_cmap:
cmap = 'gray_r'
else:
cmap = 'gray'
# Plot the outputs of all the channels in the conv-layer.
for i, ax in enumerate(axes.flat):
# Only plot the valid image-channels.
if i < num_images:
# Get the image for the i'th output channel.
img = values[0, :, :, i]
# Plot image.
ax.imshow(img, interpolation='nearest', cmap=cmap)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
idx = np.argmax(q_values_max)
plot_state(idx=idx, print_q=False)
plot_layer_output(model=model, layer_name='layer_conv1', state_index=idx, inverse_cmap=False)
plot_layer_output(model=model, layer_name='layer_conv2', state_index=idx, inverse_cmap=False)
plot_layer_output(model=model, layer_name='layer_conv3', state_index=idx, inverse_cmap=False)
def plot_conv_weights(model, layer_name, input_channel=0):
"""Plot the weights for a convolutional layer.
:param model: An instance of the NeuralNetwork-class.
:param layer_name: Name of the convolutional layer.
:param input_channel: Plot the weights for this input-channel.
"""
# Get the variable for the weights of the given layer.
# This is a reference to the variable inside TensorFlow,
# not its actual value.
weights_variable = model.get_weights_variable(layer_name=layer_name)
# Retrieve the values of the weight-variable from TensorFlow.
# The format of this 4-dim tensor is determined by the
# TensorFlow API. See Tutorial #02 for more details.
w = model.get_variable_value(variable=weights_variable)
# Get the weights for the given input-channel.
w_channel = w[:, :, input_channel, :]
# Number of output-channels for the conv. layer.
num_output_channels = w_channel.shape[2]
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w_channel)
w_max = np.max(w_channel)
# This is used to center the colour intensity at zero.
abs_max = max(abs(w_min), abs(w_max))
# Print statistics for the weights.
print("Min: {0:.5f}, Max: {1:.5f}".format(w_min, w_max))
print("Mean: {0:.5f}, Stdev: {1:.5f}".format(w_channel.mean(),
w_channel.std()))
# Number of grids to plot.
# Rounded-up, square-root of the number of output-channels.
num_grids = math.ceil(math.sqrt(num_output_channels))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i < num_output_channels:
# Get the weights for the i'th filter of this input-channel.
img = w_channel[:, :, i]
# Plot image.
ax.imshow(img, vmin=-abs_max, vmax=abs_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
plot_conv_weights(model=model, layer_name='layer_conv1', input_channel=0)
plot_conv_weights(model=model, layer_name='layer_conv1', input_channel=1)
plot_conv_weights(model=model, layer_name='layer_conv2', input_channel=0)
plot_conv_weights(model=model, layer_name='layer_conv3', input_channel=0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The main source-code for Reinforcement Learning is located in the following module
Step2: This was developed using Python 3.6.0 (Anaconda) with package versions
Step3: Game Environment
Step4: This is the base-directory for the TensorFlow checkpoints as well as various log-files.
Step5: Once the base-dir has been set, you need to call this function to set all the paths that will be used. This will also create the checkpoint-dir if it does not already exist.
Step6: Download Pre-Trained Model
Step7: I believe the webserver is located in Denmark. If you are having problems downloading the files using the automatic function above, then you can try and download the files manually in a webbrowser or using wget or curl. Or you can download from Google Drive, where you will get an anti-virus warning that is awkward to bypass automatically
Step8: The Neural Network is automatically instantiated by the Agent-class. We will create a direct reference for convenience.
Step9: Similarly, the Agent-class also allocates the replay-memory when training==True. The replay-memory will require more than 3 GB of RAM, so it should only be allocated when needed. We will need the replay-memory in this Notebook to record the states and Q-values we observe, so they can be plotted further below.
Step10: Training
Step11: In training-mode, this function will output a line for each episode. The first counter is for the number of episodes that have been processed. The second counter is for the number of states that have been processed. These two counters are stored in the TensorFlow checkpoint along with the weights of the Neural Network, so you can restart the training e.g. if you only have one computer and need to train during the night.
Step12: We can now read the logs from file
Step13: Training Progress
Step14: Training Progress
Step15: Testing
Step16: We will now instruct the agent that it should no longer perform training by setting this boolean
Step17: We also reset the previous episode rewards.
Step18: We can render the game-environment to screen so we can see the agent playing the game, by setting this boolean
Step19: We can now run a single episode by calling the run() function again. This should open a new window that shows the game being played by the agent. At the time of this writing, it was not possible to resize this tiny window, and the developers at OpenAI did not seem to care about this feature which should obviously be there.
Step20: Mean Reward
Step21: We disable the screen-rendering so the game-environment runs much faster.
Step22: We can now run 30 episodes. This records the rewards for each episode. It might have been a good idea to disable the output so it does not print all these lines - you can do this as an exercise.
Step23: We can now print some statistics for the episode rewards, which vary greatly from one episode to the next.
Step24: We can also plot a histogram with the episode rewards.
Step26: Example States
Step28: This helper-function plots a state from the replay-memory and optionally prints the Q-values.
Step29: The replay-memory has room for 200k states but it is only partially full from the above call to agent.run(num_episodes=1). This is how many states are actually used.
Step30: Get the Q-values from the replay-memory that are actually used.
Step31: For each state, calculate the min / max Q-values and their difference. This will be used to lookup interesting states in the following sections.
Step32: Example States
Step33: This state is where the ball hits the wall so the agent scores a point.
Step34: Example
Step35: Example
Step36: Example
Step37: Example
Step39: Output of Convolutional Layers
Step40: Game State
Step41: Output of Convolutional Layer 1
Step42: Output of Convolutional Layer 2
Step43: Output of Convolutional Layer 3
Step45: Weights for Convolutional Layers
Step46: Weights for Convolutional Layer 1
Step47: We can also plot the convolutional weights for the second input channel, that is, the motion-trace of the game-environment. Once again we see that the negative weights (blue) have a much greater magnitude than the positive weights (red).
Step48: Weights for Convolutional Layer 2
Step49: Weights for Convolutional Layer 3
|
8,787
|
<ASSISTANT_TASK:>
Python Code:
# Imports
import numpy as np
import pandas as pd
from sklearn.datasets import load_boston
#Ploting
import seaborn as sns
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
# %matplotlib notebook
def read_sklearn_dataset():
data = load_boston()
df_data = pd.DataFrame(data=data['data'], columns=data['feature_names'])
df_target = pd.DataFrame(data=data['target'], columns=["Target"])
return df_data, df_target
df_data, df_target = read_sklearn_dataset()
colormap = plt.cm.viridis
plt.figure(figsize=(11,8))
plt.title('Attributes Pairwise Correlation', y=1.03, size=18)
sns.heatmap(pd.concat([df_data, df_target], axis=1).astype(float).corr(),linewidths=0.5, square=True,
cmap=colormap, linecolor='white',annot=True)
plt.show()
fig = plt.figure(figsize=(10, 7))
plt.subplot(121)
sns.regplot(x=df_data['LSTAT'], y=df_target['Target'], color="r")
plt.title('LSTAT');
plt.subplot(122)
sns.regplot(x=df_data['RM'], y=df_target['Target'], color="g")
plt.title('RM');
plt.show()
m = len(df_data)
X = np.array([np.ones(m), df_data['LSTAT'].as_matrix()]).T
y = np.array(df_target['Target'])
betaHat = np.linalg.solve(X.T.dot(X), X.T.dot(y))
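# As an aside: forming X.T.dot(X) explicitly can be numerically
# ill-conditioned; np.linalg.lstsq solves the same least-squares problem
# more robustly (a sketch reusing the X and y defined above; on newer
# NumPy versions you may need to pass rcond=None).
betaHat_ls, residuals, rank, sv = np.linalg.lstsq(X, y)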
fig = plt.figure(figsize=(10, 7))
xx = np.linspace(min(df_data['LSTAT'])-3, max(df_data['LSTAT'])+3, 2)
yy = np.array(betaHat[0] + betaHat[1] * xx)
plt.subplot(121)
plt.plot(xx, yy.T, color='r')
plt.scatter(df_data['LSTAT'], df_target['Target'], color='r')
plt.title('LSTAT');
X = np.array([np.ones(m), df_data['RM'].as_matrix()]).T
y = np.array(df_target['Target'])
betaHat = np.linalg.solve(X.T.dot(X), X.T.dot(y))
xx = np.linspace(min(df_data['RM'])-1, max(df_data['RM'])+1, 2)
yy = np.array(betaHat[0] + betaHat[1] * xx)
plt.subplot(122)
plt.plot(xx, yy.T, color='g')
plt.scatter(df_data['RM'], df_target['Target'], color='g')
plt.title('RM');
plt.show()
x1 = np.linspace(min(df_data['LSTAT']), max(df_data['LSTAT']), 20)
x2 = np.linspace(min(df_data['RM']), max(df_data['RM']), 20)
X1, X2 = np.meshgrid(x1, x2)
X = np.array([np.ones(m), df_data['LSTAT'].as_matrix(), df_data['RM'].as_matrix()]).T
y = np.array(df_target['Target'])
betaHat = np.linalg.solve(X.T.dot(X), X.T.dot(y))
yy = np.array(betaHat[0] + (X1 * betaHat[1] + X2 * betaHat[2]))
fig = plt.figure(figsize=(10, 7))
ax = plt.axes(projection='3d')
ax.plot_surface(X1, X2, yy, alpha=0.25)
ax.scatter(df_data['LSTAT'], df_data['RM'], df_target['Target'], c='r');
ax.set_xlabel('LSTAT')
ax.set_ylabel('RM')
ax.set_zlabel('Target')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the dataset
Step2: Feature Selection Steps (in practice)
Step3: <b>The Seaborn regression plot</b> instantly confirms our claims regarding the most influential attributes (LSTAT, RM).
Step4: Here is how we can calculate the regression using the normal equation
Step5: Same procedure for multiple linear regression.
|
8,788
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
def polynomial_sframe(feature, degree):
# assume that degree >= 1
# initialize the SFrame:
poly_sframe = graphlab.SFrame()
# and set poly_sframe['power_1'] equal to the passed feature
poly_sframe['power_1'] = feature
# first check if degree > 1
if degree > 1:
# then loop over the remaining degrees:
# range usually starts at 0 and stops at the endpoint-1. We want it to start at 2 and stop at degree
for power in range(2, degree+1):
# first we'll give the column a name:
name = 'power_' + str(power)
# then assign poly_sframe[name] to the appropriate power of feature
poly_sframe[name] = feature.apply(lambda x : x**power)
return poly_sframe
import matplotlib.pyplot as plt
%matplotlib inline
sales = graphlab.SFrame('kc_house_data.gl/')
sales = sales.sort(['sqft_living','price'])
l2_small_penalty = 1e-5
poly15_data = polynomial_sframe(sales['sqft_living'], 15)
my_features = poly15_data.column_names() # get the name of the features
poly15_data['price'] = sales['price'] # add price to the data since it's the target
model15 = graphlab.linear_regression.create(poly15_data,
target = 'price',
features = my_features,
validation_set = None,
l2_penalty=l2_small_penalty
)
model15.get("coefficients")
(semi_split1, semi_split2) = sales.random_split(.5,seed=0)
(set_1, set_2) = semi_split1.random_split(0.5, seed=0)
(set_3, set_4) = semi_split2.random_split(0.5, seed=0)
set_1_15_data = polynomial_sframe(set_1['sqft_living'], 15)
my_features = set_1_15_data.column_names() # get the name of the features
set_1_15_data['price'] = set_1['price'] # add price to the data since it's the target
model_set_1_15 = graphlab.linear_regression.create(
set_1_15_data,
target = 'price',
features = my_features,
validation_set = None,
l2_penalty=l2_small_penalty
)
plt.plot(set_1_15_data['power_1'],set_1_15_data['price'],'.',
set_1_15_data['power_1'], model_set_1_15.predict(set_1_15_data),'-')
print "set_1"
model_set_1_15.get("coefficients").print_rows(16)
set_2_15_data = polynomial_sframe(set_2['sqft_living'], 15)
my_features = set_2_15_data.column_names() # get the name of the features
set_2_15_data['price'] = set_2['price'] # add price to the data since it's the target
model_set_2_15 = graphlab.linear_regression.create(
set_2_15_data,
target = 'price',
features = my_features,
validation_set = None,
l2_penalty=l2_small_penalty
)
plt.plot(set_2_15_data['power_1'],set_2_15_data['price'],'.',
set_2_15_data['power_1'], model_set_2_15.predict(set_2_15_data),'-')
print "set_2"
model_set_2_15.get("coefficients").print_rows(16)
set_3_15_data = polynomial_sframe(set_3['sqft_living'], 15)
my_features = set_3_15_data.column_names() # get the name of the features
set_3_15_data['price'] = set_3['price'] # add price to the data since it's the target
model_set_3_15 = graphlab.linear_regression.create(
set_3_15_data,
target = 'price',
features = my_features,
validation_set = None,
l2_penalty=l2_small_penalty
)
plt.plot(set_3_15_data['power_1'],set_3_15_data['price'],'.',
set_3_15_data['power_1'], model_set_3_15.predict(set_3_15_data),'-')
print "set_3"
model_set_3_15.get("coefficients").print_rows(16)
set_4_15_data = polynomial_sframe(set_4['sqft_living'], 15)
my_features = set_4_15_data.column_names() # get the name of the features
set_4_15_data['price'] = set_4['price'] # add price to the data since it's the target
model_set_4_15 = graphlab.linear_regression.create(
set_4_15_data,
target = 'price',
features = my_features,
validation_set = None,
l2_penalty=l2_small_penalty
)
plt.plot(set_4_15_data['power_1'],set_4_15_data['price'],'.',
set_4_15_data['power_1'], model_set_4_15.predict(set_4_15_data),'-')
print "set_4"
model_set_4_15.get("coefficients").print_rows(16)
print model_set_1_15.get("coefficients")['value'][1]
print model_set_2_15.get("coefficients")['value'][1]
print model_set_3_15.get("coefficients")['value'][1]
print model_set_4_15.get("coefficients")['value'][1]
set_1_15_data = polynomial_sframe(set_1['sqft_living'], 15)
my_features = set_1_15_data.column_names() # get the name of the features
set_1_15_data['price'] = set_1['price'] # add price to the data since it's the target
model_set_1_15 = graphlab.linear_regression.create(
set_1_15_data,
target = 'price',
features = my_features,
validation_set = None,
l2_penalty=1e5
)
plt.plot(set_1_15_data['power_1'],set_1_15_data['price'],'.',
set_1_15_data['power_1'], model_set_1_15.predict(set_1_15_data),'-')
print "set_1"
model_set_1_15.get("coefficients").print_rows(16)
set_2_15_data = polynomial_sframe(set_2['sqft_living'], 15)
my_features = set_2_15_data.column_names() # get the name of the features
set_2_15_data['price'] = set_2['price'] # add price to the data since it's the target
model_set_2_15 = graphlab.linear_regression.create(
set_2_15_data,
target = 'price',
features = my_features,
validation_set = None,
l2_penalty=1e5
)
plt.plot(set_2_15_data['power_1'],set_2_15_data['price'],'.',
set_2_15_data['power_1'], model_set_2_15.predict(set_2_15_data),'-')
print "set_2"
model_set_2_15.get("coefficients").print_rows(16)
set_3_15_data = polynomial_sframe(set_3['sqft_living'], 15)
my_features = set_3_15_data.column_names() # get the name of the features
set_3_15_data['price'] = set_3['price'] # add price to the data since it's the target
model_set_3_15 = graphlab.linear_regression.create(
set_3_15_data,
target = 'price',
features = my_features,
validation_set = None,
l2_penalty=1e5
)
plt.plot(set_3_15_data['power_1'],set_3_15_data['price'],'.',
set_3_15_data['power_1'], model_set_3_15.predict(set_3_15_data),'-')
print "set_3"
model_set_3_15.get("coefficients").print_rows(16)
set_4_15_data = polynomial_sframe(set_4['sqft_living'], 15)
my_features = set_4_15_data.column_names() # get the name of the features
set_4_15_data['price'] = set_4['price'] # add price to the data since it's the target
model_set_4_15 = graphlab.linear_regression.create(
set_4_15_data,
target = 'price',
features = my_features,
validation_set = None,
l2_penalty=1e5
)
plt.plot(set_4_15_data['power_1'],set_4_15_data['price'],'.',
set_4_15_data['power_1'], model_set_4_15.predict(set_4_15_data),'-')
print "set_4"
model_set_4_15.get("coefficients").print_rows(16)
print round(model_set_1_15.get("coefficients")['value'][1], 2)
print model_set_2_15.get("coefficients")['value'][1]
print model_set_3_15.get("coefficients")['value'][1]
print round(model_set_4_15.get("coefficients")['value'][1], 2)
(train_valid, test) = sales.random_split(.9, seed=1)
train_valid_shuffled = graphlab.toolkits.cross_validation.shuffle(train_valid, random_seed=1)
n = len(train_valid_shuffled)
k = 10 # 10-fold cross-validation
for i in xrange(k):
start = (n*i)/k
end = (n*(i+1))/k-1
print i, (start, end)
train_valid_shuffled[0:10] # rows 0 to 9
start = (n*3)/k
end = (n*4)/k-1
validation4 = train_valid_shuffled[start:end+1]
print int(round(validation4['price'].mean(), 0))
n = len(train_valid_shuffled)
first_two = train_valid_shuffled[0:2]
last_two = train_valid_shuffled[n-2:n]
print first_two.append(last_two)
train4 = train_valid_shuffled[0:start].append(train_valid_shuffled[end+1:n])
print int(round(train4['price'].mean(), 0))
def k_fold_cross_validation(k, l2_penalty, data, output_name, features_list):
trained_models_history = []
validation_rss_history = []
n = len(data)
# loop over the values of the k
for i in xrange(k):
start = (n*i)/k
end = (n*(i+1))/k-1
# obtain validation and train set
validation = data[start:end+1]
train = data[0:start].append(data[end+1:n])
# train model on train data
model = graphlab.linear_regression.create(
train,
target = output_name,
features = features_list,
validation_set = None,
l2_penalty=l2_penalty,
verbose=False
)
trained_models_history.append(model)
# find validation error
prediction = model.predict(validation[features_list])
error = prediction - validation['price']
error_squared = error * error
rss = error_squared.sum()
#print "Fold " + str(i) + " validation rss = " + str(rss)
validation_rss_history.append(rss)
return trained_models_history, validation_rss_history
import numpy as np
np.logspace(1, 7, num=13)
poly15_data = polynomial_sframe(train_valid_shuffled['sqft_living'], 15)
my_features = poly15_data.column_names() # get the name of the features
poly15_data['price'] = train_valid_shuffled['price'] # add price to the data since it's the target
k = 10
validation_rss_avg_list = []
for l2_penalty in np.logspace(1, 7, num=13):
model_list, validation_rss_list = k_fold_cross_validation(k, l2_penalty, poly15_data, 'price', my_features)
validation_rss_avg_list.append(np.mean(validation_rss_list))
validation_rss_avg_list
print np.logspace(1, 7, num=13)[4]
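# Rather than hard-coding index 4, the best penalty can be looked up
# programmatically from the averaged validation errors computed above
# (a sketch).
l2_penalties = np.logspace(1, 7, num=13)
best_l2_penalty = l2_penalties[np.argmin(validation_rss_avg_list)]
print best_l2_penalty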
# Plot the l2_penalty values in the x axis and the cross-validation error in the y axis.
# Using plt.xscale('log') will make your plot more intuitive.
plt.plot(np.logspace(1, 7, num=13),validation_rss_avg_list,'k-')
plt.xlabel('$\ell_2$ penalty')
plt.ylabel('k-fold cross-validation error')
plt.xscale('log')
plt.yscale('log')
# train model on train data
poly15_model = graphlab.linear_regression.create(
poly15_data,
target = 'price',
features = my_features,
validation_set = None,
l2_penalty= 1000
)
round(103.090927005, 2)
# find test rss
poly15_test_data = polynomial_sframe(test['sqft_living'], 15)
prediction = poly15_model.predict(poly15_test_data)
error = prediction - test['price']
error_squared = error * error
rss = error_squared.sum()
print rss
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Polynomial regression, revisited
Step2: Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
Step3: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
Step4: Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_sframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5
Step5: Note
Step6: QUIZ QUESTION
Step7: Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.
Step8: The four curves should differ from one another a lot, as should the coefficients you learned.
Step9: Ridge regression comes to the rescue
Step10: These curves should vary a lot less, now that you applied a high degree of regularization.
Step11: Selecting an L2 penalty via cross-validation
Step12: Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since the segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. The segment 1 starts where the segment 0 left off, at index (n/k). With n/k elements, the segment 1 ends at index (n*2/k)-1. Continuing in this fashion, we deduce that the segment i starts at index (n*i/k) and ends at (n*(i+1)/k)-1.
Step13: Let us familiarize ourselves with array slicing with SFrame. To extract a continuous slice from an SFrame, use colon in square brackets. For instance, the following cell extracts rows 0 to 9 of train_valid_shuffled. Notice that the first index (0) is included in the slice but the last index (10) is omitted.
Step14: Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above.
Step15: To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to the nearest whole number, the average should be $536,234.
Step16: After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0
Step17: Extract the remainder of the data after excluding the fourth segment (segment 3) and assign the subset to train4.
Step18: To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with the fourth segment excluded. When rounded to the nearest whole number, the average should be $539,450.
Step19: Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.
Step20: Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following
Step21: QUIZ QUESTIONS
Step22: You may find it useful to plot the k-fold cross-validation errors you have obtained to better understand the behavior of the method.
Step23: Once you found the best value for the L2 penalty using cross-validation, it is important to retrain a final model on all of the training data using this value of l2_penalty. This way, your final model will be trained on the entire dataset.
Step24: QUIZ QUESTION
|
8,789
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
sales = graphlab.SFrame('kc_house_data.gl/')
train_data,test_data = sales.random_split(.8,seed=0)
example_features = ['sqft_living', 'bedrooms', 'bathrooms']
example_model = graphlab.linear_regression.create(train_data, target = 'price', features = example_features,
validation_set = None)
example_weight_summary = example_model.get("coefficients")
print example_weight_summary
example_predictions = example_model.predict(train_data)
print example_predictions[0] # should be 271789.505878
def get_residual_sum_of_squares(model, data, outcome):
# First get the predictions
prediction = model.predict(data)
# Then compute the residuals/errors
residual = prediction - outcome
# Then square and add them up
RSS = 0
for error in residual:
RSS = RSS + (error**2)
return(RSS)
rss_example_train = get_residual_sum_of_squares(example_model, test_data, test_data['price'])
print rss_example_train # should be 2.7376153833e+14
from math import log
train_data['bedrooms_squared'] = train_data['bedrooms'].apply(lambda x: x**2)
test_data['bedrooms_squared'] = test_data['bedrooms'].apply(lambda x: x**2)
# create the remaining 3 features in both TEST and TRAIN data
# bed_bath
train_data['bed_bath_rooms'] = train_data['bedrooms']*train_data['bathrooms']
test_data['bed_bath_rooms'] = test_data['bedrooms']*test_data['bathrooms']
# log_sqft_living
train_data['log_sqft_living'] = train_data['sqft_living'].apply(lambda x: log(x))
test_data['log_sqft_living'] = test_data['sqft_living'].apply(lambda x: log(x))
#lat_plus_long
train_data['lat_plus_long'] = train_data['lat']+train_data['long']
test_data['lat_plus_long'] = test_data['lat']+test_data['long']
test_data['bedrooms_squared']+test_data['bed_bath_rooms']+test_data['log_sqft_living']+test_data['lat_plus_long']
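# The cell above adds the four new columns element-wise. If what you need
# is a per-feature summary (e.g. the mean of each new feature on the test
# data), a sketch:
for f in ['bedrooms_squared', 'bed_bath_rooms', 'log_sqft_living', 'lat_plus_long']:
    print f, round(test_data[f].mean(), 2)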
model_1_features = ['sqft_living', 'bedrooms', 'bathrooms', 'lat', 'long']
model_2_features = model_1_features + ['bed_bath_rooms']
model_3_features = model_2_features + ['bedrooms_squared', 'log_sqft_living', 'lat_plus_long']
# Learn the three models: (don't forget to set validation_set = None)
model_1 = graphlab.linear_regression.create(train_data, target = 'price', features = model_1_features,
validation_set = None)
##
model_2 = graphlab.linear_regression.create(train_data, target = 'price', features = model_2_features,
validation_set = None)
##
model_3 = graphlab.linear_regression.create(train_data, target = 'price', features = model_3_features,
validation_set = None)
# Examine/extract each model's coefficients:
model_1_coefficients = model_1.get("coefficients")
print" Model_1", model_1_coefficients
##
model_2_coefficients = model_2.get("coefficients")
print " Model_2", model_2_coefficients
##
model_3_coefficients = model_3.get("coefficients")
print " Model_3", model_3_coefficients
# Compute the RSS on TRAINING data for each of the three models and record the values:
## Model_1
model_1_rss = get_residual_sum_of_squares(model_1, train_data, train_data['price'])
print " model_1 rss",model_1_rss
## Model_2
model_2_rss = get_residual_sum_of_squares(model_2, train_data, train_data['price'])
print " model_2 rss",model_2_rss
##
model_3_rss = get_residual_sum_of_squares(model_3, train_data, train_data['price'])
print " model_3 rss",model_3_rss
# Compute the RSS on TESTING data for each of the three models and record the values:
## Model_1
model_1_rss = get_residual_sum_of_squares(model_1, test_data, test_data['price'])
print " model_1 rss",model_1_rss
## Model_2
model_2_rss = get_residual_sum_of_squares(model_2, test_data, test_data['price'])
print " model_2 rss",model_2_rss
##
model_3_rss = get_residual_sum_of_squares(model_3, test_data, test_data['price'])
print " model_3 rss",model_3_rss
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in house sales data
Step2: Split data into training and testing.
Step3: Learning a multiple regression model
Step4: Now that we have fitted the model we can extract the regression weights (coefficients) as an SFrame as follows
Step5: Making Predictions
Step6: Compute RSS
Step7: Test your function by computing the RSS on TEST data for the example model
Step8: Create some new features
Step9: Next create the following 4 new features as column in both TEST and TRAIN data
Step10: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this feature will mostly affect houses with many bedrooms.
Step11: Learning Multiple Models
Step12: Now that you have the features, learn the weights for the three different models for predicting target = 'price' using graphlab.linear_regression.create() and look at the value of the weights/coefficients
Step13: Quiz Question
Step14: Quiz Question
|
8,790
|
<ASSISTANT_TASK:>
Python Code:
from nams import load_data as cf
books = cf.load_game_of_thrones_data()
# We also add this weight_inv to our dataset.
# Why? We will discuss it in a later section.
books['weight_inv'] = 1/books.weight
books.head()
robbstark = (
books.query("book == 3")
.query("Source == 'Robb-Stark' or Target == 'Robb-Stark'")
)
robbstark.head()
# example of creating a MultiGraph
# all_books_multigraph = nx.from_pandas_edgelist(
# books, source='Source', target='Target',
# edge_attr=['weight', 'book'],
# create_using=nx.MultiGraph)
# we create a list of graph objects using
# nx.from_pandas_edgelist and specifying
# the edge attributes.
graphs = [nx.from_pandas_edgelist(
books[books.book==i],
source='Source', target='Target',
edge_attr=['weight', 'weight_inv'])
for i in range(1, 6)]
# The Graph object associated with the first book.
graphs[0]
# To access the relationship edges in the graph with
# the edge attribute weight data (data=True)
relationships = list(graphs[0].edges(data=True))
relationships[0:3]
# We use the in-built degree_centrality method
deg_cen_book1 = nx.degree_centrality(graphs[0])
deg_cen_book5 = nx.degree_centrality(graphs[4])
deg_cen_book1['Daenerys-Targaryen']
# The following expression sorts the dictionary by
# degree centrality and returns the top 5 from a graph
sorted(deg_cen_book1.items(),
key=lambda x:x[1],
reverse=True)[0:5]
sorted(deg_cen_book5.items(),
key=lambda x:x[1],
reverse=True)[0:5]
plt.hist(deg_cen_book1.values(), bins=30)
plt.show()
# A log-log plot to show the "signature" of power law in graphs.
from collections import Counter
hist = Counter(deg_cen_book1.values())
plt.scatter(np.log2(list(hist.keys())),
np.log2(list(hist.values())),
alpha=0.9)
plt.show()
from nams.solutions.got import weighted_degree
plt.hist(list(weighted_degree(graphs[0], 'weight').values()), bins=30)
plt.show()
sorted(weighted_degree(graphs[0], 'weight').items(), key=lambda x:x[1], reverse=True)[0:5]
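# For reference, a minimal sketch of what such a weighted-degree helper
# could compute (an assumption; the implementation actually used above
# lives in nams.solutions.got): sum the chosen edge-weight attribute
# over every edge incident to each node.
def weighted_degree_sketch(G, weight):
    return {
        node: sum(data[weight] for _, _, data in G.edges(node, data=True))
        for node in G.nodes()
    }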
# First check unweighted (just the structure)
sorted(nx.betweenness_centrality(graphs[0]).items(),
key=lambda x:x[1], reverse=True)[0:10]
# Let's care about interactions now
sorted(nx.betweenness_centrality(graphs[0],
weight='weight_inv').items(),
key=lambda x:x[1], reverse=True)[0:10]
# by default weight attribute in PageRank is weight
# so we use weight=None to find the unweighted results
sorted(nx.pagerank_numpy(graphs[0],
weight=None).items(),
key=lambda x:x[1], reverse=True)[0:10]
sorted(nx.pagerank_numpy(
graphs[0], weight='weight').items(),
key=lambda x:x[1], reverse=True)[0:10]
from nams.solutions.got import correlation_centrality
correlation_centrality(graphs[0])
evol = [nx.degree_centrality(graph)
for graph in graphs]
evol_df = pd.DataFrame.from_records(evol).fillna(0)
evol_df[['Eddard-Stark',
'Tyrion-Lannister',
'Jon-Snow']].plot()
plt.show()
set_of_char = set()
for i in range(5):
set_of_char |= set(list(
evol_df.T[i].sort_values(
ascending=False)[0:5].index))
set_of_char
from nams.solutions.got import evol_betweenness
evol_betweenness(graphs)
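# A sketch of how such an evolution plot could be built by hand (an
# assumption; the real helper is in nams.solutions.got), mirroring the
# degree-centrality evolution above:
evol_bet = [nx.betweenness_centrality(graph, weight='weight_inv')
            for graph in graphs]
evol_bet_df = pd.DataFrame.from_records(evol_bet).fillna(0)
evol_bet_df[['Eddard-Stark', 'Tyrion-Lannister', 'Stannis-Baratheon']].plot()
plt.show()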
sorted(nx.degree_centrality(graphs[4]).items(),
key=lambda x:x[1], reverse=True)[:5]
sorted(nx.betweenness_centrality(graphs[4]).items(),
key=lambda x:x[1], reverse=True)[:5]
nx.draw(nx.barbell_graph(5, 1), with_labels=True)
nx.betweenness_centrality(nx.barbell_graph(5, 1))
import nxviz as nv
from nxviz import annotate
plt.figure(figsize=(8, 8))
partition = community.best_partition(graphs[0], randomize=False)
# Annotate nodes' partitions
for n in graphs[0].nodes():
graphs[0].nodes[n]["partition"] = partition[n]
graphs[0].nodes[n]["degree"] = graphs[0].degree(n)
nv.matrix(graphs[0], group_by="partition", sort_by="degree", node_color_by="partition")
annotate.matrix_block(graphs[0], group_by="partition", color_by="partition")
annotate.matrix_group(graphs[0], group_by="partition", offset=-8)
# louvain community detection find us 8 different set of communities
partition_dict = {}
for character, par in partition.items():
if par in partition_dict:
partition_dict[par].append(character)
else:
partition_dict[par] = [character]
len(partition_dict)
partition_dict[2]
nx.draw(nx.subgraph(graphs[0], partition_dict[3]))
nx.draw(nx.subgraph(graphs[0],partition_dict[1]))
nx.density(nx.subgraph(
graphs[0], partition_dict[4])
)/nx.density(graphs[0])
from nams.solutions.got import most_important_node_in_partition
most_important_node_in_partition(graphs[0], partition_dict)
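# One plausible definition, sketched here (an assumption; the actual
# solution's source is printed at the end of this notebook): for each
# community, pick the node with the highest degree centrality within
# that community's subgraph.
def most_important_node_sketch(G, partition_dict):
    result = {}
    for part, members in partition_dict.items():
        deg = nx.degree_centrality(nx.subgraph(G, members))
        result[part] = max(deg, key=deg.get)
    return result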
from nams.solutions import got
import inspect
print(inspect.getsource(got))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The resulting DataFrame books has 5 columns
Step2: From the above data we can see that the characters Addam Marbrand and Tywin Lannister have interacted 6 times in the first book.
Step3: As you can see this data easily translates to a network problem. Now it's time to create a network.
Step4: Finding the most important node i.e character in these networks.
Step5: degree_centrality returns a dictionary and to access the results we can directly use the name of the character.
Step6: Top 5 important characters in the first book according to degree centrality.
Step7: Top 5 important characters in the fifth book according to degree centrality.
Step8: To visualize the distribution of degree centrality let's plot a histogram of degree centrality.
Step9: The above plot shows something expected: a high proportion of characters aren't connected to a lot of other characters, while some characters are highly connected throughout the network. A close real-world example is a social network like Twitter, where a few people have millions of connections (followers) but the majority of users aren't connected to that many other users. This exponential-decay-like property resembles a power law in real-life networks.
Step10: Exercise
Step11: Betweeness centrality
Step12: We can see there are some differences between the unweighted and weighted centrality measures. Another thing to note is that we are using the weight_inv attribute instead of weight(the number of interactions between characters). This decision is based on the way we want to assign the notion of "importance" of a character. The basic idea behind betweenness centrality is to find nodes which are essential to the structure of the network. As betweenness centrality computes shortest paths underneath, in the case of weighted betweenness centrality it will end up penalising characters with high number of interactions. By using weight_inv we will prop up the characters with high interactions with other characters.
Step13: Exercise
Step14: Evolution of importance of characters over the books
Step15: Exercise
Step16: So what's up with Stannis Baratheon?
Step17: As we know, a higher betweenness centrality means that the node is crucial for the structure of the network, and in the case of Stannis Baratheon in the fifth book he seems to have characteristics similar to those of node 5 in the above example, as he appears to be holding the network together.
Step18: Community detection in Networks
Step19: A common defining quality of a community is that
Step20: If we plot these communities of the network we see a denser network as compared to the original network which contains all the characters.
Step21: We can test this by calculating the density of the network and the community.
Step22: Exercise
Step23: Solutions
|
8,791
|
<ASSISTANT_TASK:>
Python Code:
import regionmask
regionmask.__version__
import xarray as xr
import numpy as np
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
from matplotlib import colors as mplc
from shapely.geometry import Polygon
color1 = "#9ecae1"
color2 = "#fc9272"
color3 = "#cab2d6"
cmap1 = mplc.ListedColormap([color1])
cmap2 = mplc.ListedColormap([color2])
cmap3 = mplc.ListedColormap([color3])
cmap12 = mplc.ListedColormap([color1, color2])
outline = np.array([[-80.0, 50.0], [-80.0, 28.0], [-100.0, 28.0], [-100.0, 50.0]])
region = regionmask.Regions([outline])
ds_US = regionmask.core.utils.create_lon_lat_dataarray_from_bounds(
*(-161, -29, 2), *(75, 13, -2)
)
print(ds_US)
mask_rasterize = region.mask(ds_US, method="rasterize")
mask_shapely = region.mask(ds_US, method="shapely")
mask_pygeos = region.mask(ds_US, method="pygeos")
f, axes = plt.subplots(1, 3, subplot_kw=dict(projection=ccrs.PlateCarree()))
opt = dict(add_colorbar=False, ec="0.5", lw=0.5, transform=ccrs.PlateCarree())
mask_rasterize.plot(ax=axes[0], cmap=cmap1, **opt)
mask_shapely.plot(ax=axes[1], cmap=cmap2, **opt)
mask_pygeos.plot(ax=axes[2], cmap=cmap3, **opt)
for ax in axes:
ax = region.plot_regions(ax=ax, add_label=False)
ax.set_extent([-105, -75, 25, 55], ccrs.PlateCarree())
ax.coastlines(lw=0.5)
ax.plot(
ds_US.LON, ds_US.lat, "*", color="0.5", ms=0.5, transform=ccrs.PlateCarree()
)
axes[0].set_title("rasterize")
axes[1].set_title("shapely")
axes[2].set_title("pygeos");
ds_GLOB = regionmask.core.utils.create_lon_lat_dataarray_from_bounds(
*(-180, 181, 2), *(90, -91, -2)
)
srex = regionmask.defined_regions.srex
srex_new = srex.mask(ds_GLOB)
f, ax = plt.subplots(1, 1, subplot_kw=dict(projection=ccrs.PlateCarree()))
opt = dict(add_colorbar=False, cmap="viridis_r")
srex_new.plot(ax=ax, ec="0.7", lw=0.25, **opt)
srex.plot_regions(ax=ax, add_label=False, line_kws=dict(lw=1))
ax.set_extent([-135, -50, 24, 51], ccrs.PlateCarree())
ax.coastlines(resolution="50m", lw=0.25)
ax.plot(
ds_GLOB.LON, ds_GLOB.lat, "*", color="0.5", ms=0.5, transform=ccrs.PlateCarree()
)
sel = ((ds_GLOB.LON == -105) | (ds_GLOB.LON == -85)) & (ds_GLOB.LAT > 28)
ax.plot(
ds_GLOB.LON.values[sel],
ds_GLOB.LAT.values[sel],
"*",
color="r",
ms=0.5,
transform=ccrs.PlateCarree(),
)
ax.set_title("edge points are assigned to the left polygon", fontsize=9);
# almost 360 to avoid wrap-around for the plot
lon_max = 360.0 - 1e-10
outline_global = np.array([[0, 90], [0, -90], [lon_max, -90], [lon_max, 90]])
region_global = regionmask.Regions([outline_global])
lon = np.arange(0, 360, 30)
lat = np.arange(90, -91, -30)
LON, LAT = np.meshgrid(lon, lat)
# setting `wrap_lon=False` turns this feature off
mask_global_nontreat = region_global.mask(LON, LAT, wrap_lon=False)
mask_global = region_global.mask(LON, LAT)
proj = ccrs.PlateCarree(central_longitude=180)
f, axes = plt.subplots(1, 2, subplot_kw=dict(projection=proj))
f.subplots_adjust(wspace=0.05)
opt = dict(add_colorbar=False, ec="0.2", lw=0.25, transform=ccrs.PlateCarree())
ax = axes[0]
mask_global_nontreat.plot(ax=ax, cmap=cmap1, x="lon", y="lat", **opt)
ax.set_title("Not treating points at 0°E and -90°N", size=6)
ax.set_title("(a)", loc="left", size=6)
ax = axes[1]
mask_global.plot(ax=ax, cmap=cmap1, x="lon", y="lat", **opt)
ax.set_title("Treating points at 0°E and -90°N", size=6)
ax.set_title("(b)", loc="left", size=6)
for ax in axes:
ax = region_global.plot(
ax=ax,
line_kws=dict(lw=2, color="#b15928"),
add_label=False,
)
ax.plot(LON, LAT, "o", color="0.3", ms=1, transform=ccrs.PlateCarree(), zorder=5)
ax.outline_patch.set_visible(False)
outline_global1 = np.array([[-180.0, 60.0], [-180.0, -60.0], [0.0, -60.0], [0.0, 60.0]])
outline_global2 = np.array([[0.0, 60.0], [0.0, -60.0], [180.0, -60.0], [180.0, 60.0]])
region_global_2 = regionmask.Regions([outline_global1, outline_global2])
mask_global_2regions = region_global_2.mask(lon, lat)
ax = region_global_2.plot(
line_kws=dict(color="#b15928", zorder=3, lw=1.5),
add_label=False,
)
ax.plot(
LON, LAT, "o", color="0.3", lw=0.25, ms=2, transform=ccrs.PlateCarree(), zorder=5
)
mask_global_2regions.plot(ax=ax, cmap=cmap12, **opt)
ax.set_title("Points at -180°E are mapped to 180°E", size=6)
ax.outline_patch.set_lw(0.25)
ax.outline_patch.set_zorder(1);
interior = np.array(
[
[-86.0, 44.0],
[-86.0, 34.0],
[-94.0, 34.0],
[-94.0, 44.0],
[-86.0, 44.0],
]
)
poly = Polygon(outline, holes=[interior])
region_with_hole = regionmask.Regions([poly])
mask_hole_rasterize = region_with_hole.mask(ds_US, method="rasterize")
mask_hole_shapely = region_with_hole.mask(ds_US, method="shapely")
mask_hole_pygeos = region_with_hole.mask(ds_US, method="pygeos")
f, axes = plt.subplots(1, 3, subplot_kw=dict(projection=ccrs.PlateCarree()))
opt = dict(add_colorbar=False, ec="0.5", lw=0.5)
mask_hole_rasterize.plot(ax=axes[0], cmap=cmap1, **opt)
mask_hole_shapely.plot(ax=axes[1], cmap=cmap2, **opt)
mask_hole_pygeos.plot(ax=axes[2], cmap=cmap3, **opt)
for ax in axes:
region_with_hole.plot_regions(ax=ax, add_label=False, line_kws=dict(lw=1))
ax.set_extent([-105, -75, 25, 55], ccrs.PlateCarree())
ax.coastlines(lw=0.25)
ax.plot(
ds_US.LON, ds_US.lat, "o", color="0.5", ms=0.5, transform=ccrs.PlateCarree()
)
axes[0].set_title("rasterize")
axes[1].set_title("shapely")
axes[2].set_title("pygeos");
land110 = regionmask.defined_regions.natural_earth_v5_0_0.land_110
mask_land110 = land110.mask(ds_GLOB)
f, ax = plt.subplots(1, 1, subplot_kw=dict(projection=ccrs.PlateCarree()))
mask_land110.plot(ax=ax, cmap=cmap2, add_colorbar=False)
ax.set_extent([15, 75, 25, 50], ccrs.PlateCarree())
ax.coastlines(resolution="50m", lw=0.5)
ax.plot(
ds_GLOB.LON, ds_GLOB.lat, ".", color="0.5", ms=0.5, transform=ccrs.PlateCarree()
)
ax.text(52, 43.5, "Caspian Sea", transform=ccrs.PlateCarree())
ax.set_title("Polygon interiors are unmasked");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Other imports
Step2: Define some colors
Step3: Methods
Step4: Let's create a mask with each of these methods
Step5: Plot the masked regions
Step6: Points indicate the grid cell centers (lon and lat), lines the grid cell borders, colored grid cells are selected to be part of the region. The top and right grid cells now belong to the region while the left and bottom grid cells do not. This choice is arbitrary but follows what rasterio.features.rasterize does. This avoids spurious columns of unassigned grid points as the following example shows.
Step7: Not assigning the grid cells falling exactly on the border of a region (red points) would leave vertical stripes of unassigned cells.
Step8: Create the masks
Step9: And illustrate the issue
Step10: In the example the region spans the whole globe and there are gridpoints at 0°E and -90°N. Just applying the approach above leads to gridpoints that are not assigned to any region even though the region is global (as shown in a). Therefore, points at -180°E (or 0°E) and -90°N are treated specially (b)
Step11: Polygon interiors
Step12: Note how the edge behavior of the interior is inverse to the edge behavior of the exterior.
|
8,792
|
<ASSISTANT_TASK:>
Python Code:
import cashflows as cf
##
## There are four sources of capital with different costs;
## their data will be stored in the following lists:
##
monto = [0] * 4
interes = [0] * 4
## stock issuance
## --------------------------------------
monto[0] = 4000
interes[0] = 25.0 / 1.0 # discount rate of the stock
## loan 1.
## -------------------------------------------------------
##
nrate = cf.nominal_rate(const_value=20, nper=5)
credito1 = cf.fixed_ppal_loan(amount = 2000,        # amount
                              nrate = nrate,        # interest rate
                              orgpoints = 50/2000)  # origination costs
credito1
## cash flow for the loan, before taxes
credito1.to_cashflow(tax_rate = 30.0)
## the effective rate paid on the loan is the one
## that makes the present value of the previous
## cash flow zero (before or after taxes)
credito1.true_rate(tax_rate = 30.0)
## store the data for this loan
monto[1] = 2000
interes[1] = credito1.true_rate(tax_rate = 30.0)
## loan 2.
## -------------------------------------------------------
##
credito2 = cf.fixed_rate_loan(amount = 1000,     # amount
                              nrate = 20,        # interest rate
                              start = None,
                              grace = 0,
                              life = 4,          # number of payments
                              dispoints = 0.24)  # discount points
credito2
credito2.to_cashflow(tax_rate = 30)
credito2.true_rate(tax_rate = 30)
## store the data for this loan
monto[2] = 1000
interes[2] = credito2.true_rate(tax_rate = 30)
## loan 3.
## -------------------------------------------------------
##
nrate = cf.nominal_rate(const_value=7, nper=5)
credito3 = cf.bullet_loan(amount = 5000,      # amount
                          nrate = nrate,      # interest rate
                          orgpoints = 0.01,   # origination costs
                          dispoints = 0.20)   # discount points
credito3
credito3.to_cashflow(tax_rate = 30.0) ### bad
credito3.true_rate(tax_rate = 30.0)
## store the data for this loan
monto[3] = 5000
interes[3] = credito3.true_rate(tax_rate = 30.0)
## amounts
monto
## rates
interes
## Weighted average cost of capital (WACC)
## -------------------------------------------------------------
## it is the average of the rates, weighted by the
## share of the total capital coming from each source
##
s = sum(monto) # total capital
wacc = sum([x*r/s for x, r in zip(monto, interes)])
wacc
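# As a cross-check, the same weighted average can be computed with NumPy
# (a sketch; assumes the `monto` and `interes` lists filled above):
import numpy as np
wacc_np = np.average(interes, weights=monto)
wacc_np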
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: When modeling loans with cashflows, two types of costs are considered
|
8,793
|
<ASSISTANT_TASK:>
Python Code:
class Contig:
def __init__(self, name, seq):
self.name = name
self.seq = seq
def __repr__(self):
return '< "%s" %i nucleotides>' % (self.name, len(self.seq))
def read_contigs(input_file_path):
contigs = []
current_name = ""
seq_collection = []
# Pre-read generates an array of contigs with labels and sequences
with open(input_file_path, 'r') as streamFASTAFile:
for read in streamFASTAFile.read().splitlines():
if read == "":
continue
if read[0] == ">":
# If we have sequence gathered and we run into a second (or more) block
if len(seq_collection) > 0:
sequence = "".join(seq_collection)
seq_collection = [] # clear
contigs.append(Contig(current_name, sequence))
current_name = read[1:] # remove >
else:
# collects the sequence to be stored in the contig, constant time performance don't concat strings!
seq_collection.append(read.upper())
# add the last contig to the list
sequence = "".join(seq_collection)
contigs.append(Contig(current_name, sequence))
return contigs
from collections import Counter
species = read_contigs('9927_alignment.fasta')
for s in species:
s.name = s.name[:6]
informative_columns = {}
consensus_sequence = []
for col in range(len(species[0].seq)):
letters = []
for entry in species:
letters.append(entry.seq[col])
column_seq = ''.join(letters)
consensusing = Counter(column_seq)
consensus_sequence.append(consensusing.most_common()[0][0])
if column_seq != letters[0] * len(species) and col > 200 and col < 1500:
informative_columns[col] = column_seq
print(column_seq, col+1)
species.append(Contig('Consen', ''.join(consensus_sequence)))
with open('9927_informative_positions.csv', 'w') as csv_out:
csv_out.write('Positions,' + ','.join([str(x+1) for x in sorted(informative_columns.keys())]))
csv_out.write('\n')
for entry in species:
csv_out.write(entry.name[:6] + ",")
for col in range(len(species[0].seq)):
if col in informative_columns:
csv_out.write(entry.seq[col] + ",")
csv_out.write('\n')
seq_length = len(species[0].seq)
similarity_scores = {}
for target in species:
for query in species:
if target != query:
name = (target.name, query.name)
score = sum([target.seq[i] != query.seq[i] for i in range(250,1500)])
similarity_scores[name] = score
min(similarity_scores)  # note: this is the minimum over the dict's keys (name pairs), not over the scores
with open('9927_differentiability.csv', 'w') as csv_out:
csv_out.write(',' + ','.join([s.name for s in species]))
for target in species: # rows
csv_out.write(target.name +',')
for query in species: # cells
if target != query:
name = (target.name, query.name)
csv_out.write(str(similarity_scores[name]) + ',')
else:
csv_out.write(',')
csv_out.write('\n')
min(similarity_scores.values())
for k,v in similarity_scores.items():
if v < 4:
print(','.join(k))
base_command = "java -cp CONTEXT-.jar uk.ac.qmul.sbcs.evolution.convergence.runners.BasicAlignmentStats "
data_directory = './Data/'
from glob import glob
for filename in glob(data_directory + '*'):
print(base_command + filename)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate a fasta with informative columns
Step2: Pairwise table
Step3: Iterate over all the sequences at the same time
|
8,794
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
old_LCIA = pd.read_excel('put_the_path_to_your_old_LCIA_implementation_file.xls_here','CFs')
incomplete_LCIA = pd.read_excel('put_the_path_to_your_incomplete_LCIA_implementation_file.xls_here','CFs')
complete_LCIA = incomplete_LCIA.merge(old_LCIA,how='left')
# drop obsolete columns
complete_LCIA = complete_LCIA.drop([i for i in old_LCIA.columns if i not in incomplete_LCIA.columns
and i != 'exchange unit'],axis=1)
# m2*year for land use/land occupation categories
complete_LCIA.loc[[i for i in complete_LCIA.index if 'land' in complete_LCIA.category[i]
and type(complete_LCIA.loc[i,'exchange unit']) == float],'exchange unit'] = 'm2*year'
# kBq for pollutant linked to radioactivity
complete_LCIA.loc[[i for i in complete_LCIA.index if (complete_LCIA.category[i] == 'ionising radiation'
or complete_LCIA.category[i] == 'radioactive waste to deposit')
and type(complete_LCIA.loc[i,'exchange unit']) == float],'exchange unit'] = 'kBq'
# m3 for amounts of water
complete_LCIA.loc[[i for i in complete_LCIA.index if complete_LCIA.category[i] == 'water depletion'
and type(complete_LCIA.loc[i,'exchange unit']) == float],'exchange unit'] = 'm3'
# kg for the rest
complete_LCIA.loc[[i for i in complete_LCIA.index if type(complete_LCIA.loc[i,'exchange unit']) == float],
'exchange unit'] = 'kg'
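# The catch-all above could equivalently use fillna, which avoids building
# the index list explicitly (a sketch): any pollutant still lacking a unit
# defaults to kg.
complete_LCIA['exchange unit'] = complete_LCIA['exchange unit'].fillna('kg')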
complete_LCIA.to_excel('put_the_path_where_you_want_this_completed_version_to_be_stored.xls')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pollutants which were already present in your old version will have their exchange unit introduced in the incomplete_LCIA. New pollutants of the recent LCIA_implementation file, however, will still have NaN as their unit (i.e., no unit specified).
Step2: Export the completed version of the LCIA_implementation file wherever you want (e.g., in your ecoinvent folder with datasets and MasterData)
|
8,795
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
# minimal reference implementations for the exercise stubs below
def read_file(filename):
    # read the whole file and return its content as a single string
    with open(filename, "r") as f:
        return f.read()
def histogram(texte):
    # count the occurrences of each (lower-cased) letter in the text
    hist = {}
    for c in texte.lower():
        if "a" <= c <= "z":
            hist[c] = hist.get(c, 0) + 1
    return hist
def normalize(hist):
    # turn the raw counts into frequencies that sum to 1
    total = float(sum(hist.values()))
    return {k: v / total for k, v in hist.items()}
from pyensae.datasource import download_data
texts = download_data("articles.zip")
texts[:5]
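# A quick end-to-end check of the three helpers on the first downloaded
# article (a sketch; assumes the implementations given above):
freq = normalize(histogram(read_file(texts[0])))
sorted(freq.items(), key=lambda kv: kv[1], reverse=True)[:5]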
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The goal is to tell an English text apart from a French one without having to read it. The first reflex would be to look for words that are typically English or French. That approach is probably a good choice when the text in question is a literary work. But on the Internet, content frequently mixes the two languages
Step2: Q2
Step3: Q3
Step4: Q4
|
8,796
|
<ASSISTANT_TASK:>
Python Code:
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
# If conditional
if 5 > 2:
    print("Python works!")
# If...Else statement
if 5 < 2:
    print("Python works!")
else:
    print("Something is wrong!")
6 > 3
3 > 7
4 < 8
4 >= 4
if 5 == 5:
    print("Testing Python!")
if True:
    print('Looks like Python works!')
# Watch the syntax (the missing colon below raises a SyntaxError)
if 4 > 3
    print("Everything works!")
# Watch the syntax: now with the colon in place
if 4 > 3:
    print("Everything works!")
idade = 18
if idade > 17:
    print("You can drive!")
Nome = "Bob"
if idade > 13:
    if Nome == "Bob":
        print("OK Bob, you are allowed to come in!")
    else:
        print("Sorry, but you cannot come in!")
idade = 13
Nome = "Bob"
if idade >= 13 and Nome == "Bob":
    print("OK Bob, you are allowed to come in!")
idade = 12
Nome = "Bob"
if (idade >= 13) or (Nome == "Bob"):
    print("OK Bob, you are allowed to come in!")
dia = "Terça"
if dia == "Segunda":
print("Hoje fará sol!")
else:
print("Hoje vai chover!")
if dia == "Segunda":
print("Hoje fará sol!")
elif dia == "Terça":
print("Hoje vai chover!")
else:
print("Sem previsão do tempo para o dia selecionado")
idade = 18
nome = "Bob"
if idade > 17:
    print("You can drive!")
idade = 18
if idade > 17 and nome == "Bob":
    print("Authorized!")
# Using more than one condition in the if clause
disciplina = input('Enter the subject name: ')
nota_final = input('Enter the final grade (between 0 and 100): ')
# note: input() returns a string, so the grade is converted with int()
# before comparing (the original compared strings, which fails for '100')
if disciplina == 'Geography' and int(nota_final) >= 70:
    print('You passed!')
else:
    print('Sorry, I think you need to study more!')
# Using more than one condition in the if clause, and introducing placeholders
disciplina = input('Enter the subject name: ')
nota_final = input('Enter the final grade (between 0 and 100): ')
semestre = input('Enter the semester (1 to 4): ')
if disciplina == 'Geography' and int(nota_final) >= 50 and int(semestre) != 1:
    print('You passed %s with a final grade of %r!' % (disciplina, nota_final))
else:
    print('Sorry, I think you need to study more!')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If conditional
Step2: Nested conditionals
Step3: Elif
Step4: Logical operators
|
8,797
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from datetime import datetime
%load_ext tensorboard
class SimpleModule(tf.Module):
def __init__(self, name=None):
super().__init__(name=name)
self.a_variable = tf.Variable(5.0, name="train_me")
self.non_trainable_variable = tf.Variable(5.0, trainable=False, name="do_not_train_me")
def __call__(self, x):
return self.a_variable * x + self.non_trainable_variable
simple_module = SimpleModule(name="simple")
simple_module(tf.constant(5.0))
# All trainable variables
print("trainable variables:", simple_module.trainable_variables)
# Every variable
print("all variables:", simple_module.variables)
class Dense(tf.Module):
def __init__(self, in_features, out_features, name=None):
super().__init__(name=name)
self.w = tf.Variable(
tf.random.normal([in_features, out_features]), name='w')
self.b = tf.Variable(tf.zeros([out_features]), name='b')
def __call__(self, x):
y = tf.matmul(x, self.w) + self.b
return tf.nn.relu(y)
class SequentialModule(tf.Module):
def __init__(self, name=None):
super().__init__(name=name)
self.dense_1 = Dense(in_features=3, out_features=3)
self.dense_2 = Dense(in_features=3, out_features=2)
def __call__(self, x):
x = self.dense_1(x)
return self.dense_2(x)
# You have made a model!
my_model = SequentialModule(name="the_model")
# Call it, with random results
print("Model results:", my_model(tf.constant([[2.0, 2.0, 2.0]])))
print("Submodules:", my_model.submodules)
for var in my_model.variables:
print(var, "\n")
class FlexibleDenseModule(tf.Module):
# Note: No need for `in_features`
def __init__(self, out_features, name=None):
super().__init__(name=name)
self.is_built = False
self.out_features = out_features
def __call__(self, x):
# Create variables on first call.
if not self.is_built:
self.w = tf.Variable(
tf.random.normal([x.shape[-1], self.out_features]), name='w')
self.b = tf.Variable(tf.zeros([self.out_features]), name='b')
self.is_built = True
y = tf.matmul(x, self.w) + self.b
return tf.nn.relu(y)
# Used in a module
class MySequentialModule(tf.Module):
def __init__(self, name=None):
super().__init__(name=name)
self.dense_1 = FlexibleDenseModule(out_features=3)
self.dense_2 = FlexibleDenseModule(out_features=2)
def __call__(self, x):
x = self.dense_1(x)
return self.dense_2(x)
my_model = MySequentialModule(name="the_model")
print("Model results:", my_model(tf.constant([[2.0, 2.0, 2.0]])))
chkp_path = "my_checkpoint"
checkpoint = tf.train.Checkpoint(model=my_model)
checkpoint.write(chkp_path)
!ls my_checkpoint*
tf.train.list_variables(chkp_path)
new_model = MySequentialModule()
new_checkpoint = tf.train.Checkpoint(model=new_model)
new_checkpoint.restore("my_checkpoint")
# Should be the same result as above
new_model(tf.constant([[2.0, 2.0, 2.0]]))
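# To verify that everything in the checkpoint was matched against the new
# model's variables, you can assert on the restore status (a sketch):
status = new_checkpoint.restore("my_checkpoint")
# raises if model variables were not found in the checkpoint
status.assert_existing_objects_matched()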
class MySequentialModule(tf.Module):
def __init__(self, name=None):
super().__init__(name=name)
self.dense_1 = Dense(in_features=3, out_features=3)
self.dense_2 = Dense(in_features=3, out_features=2)
@tf.function
def __call__(self, x):
x = self.dense_1(x)
return self.dense_2(x)
# You have made a model with a graph!
my_model = MySequentialModule(name="the_model")
print(my_model([[2.0, 2.0, 2.0]]))
print(my_model([[[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]]))
# Set up logging.
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
logdir = "logs/func/%s" % stamp
writer = tf.summary.create_file_writer(logdir)
# Create a new model to get a fresh trace
# Otherwise the summary will not see the graph.
new_model = MySequentialModule()
# Bracket the function call with
# tf.summary.trace_on() and tf.summary.trace_export().
tf.summary.trace_on(graph=True)
tf.profiler.experimental.start(logdir)
# Call only one tf.function when tracing.
z = print(new_model(tf.constant([[2.0, 2.0, 2.0]])))
with writer.as_default():
tf.summary.trace_export(
name="my_func_trace",
step=0,
profiler_outdir=logdir)
#docs_infra: no_execute
%tensorboard --logdir logs/func
tf.saved_model.save(my_model, "the_saved_model")
# Inspect the SavedModel in the directory
!ls -l the_saved_model
# The variables/ directory contains a checkpoint of the variables
!ls -l the_saved_model/variables
new_model = tf.saved_model.load("the_saved_model")
isinstance(new_model, SequentialModule)
print(my_model([[2.0, 2.0, 2.0]]))
print(my_model([[[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]]))
class MyDense(tf.keras.layers.Layer):
# Adding **kwargs to support base Keras layer arguments
def __init__(self, in_features, out_features, **kwargs):
super().__init__(**kwargs)
# This will soon move to the build step; see below
self.w = tf.Variable(
tf.random.normal([in_features, out_features]), name='w')
self.b = tf.Variable(tf.zeros([out_features]), name='b')
def call(self, x):
y = tf.matmul(x, self.w) + self.b
return tf.nn.relu(y)
simple_layer = MyDense(name="simple", in_features=3, out_features=3)
simple_layer([[2.0, 2.0, 2.0]])
class FlexibleDense(tf.keras.layers.Layer):
# Note the added `**kwargs`, as Keras supports many arguments
def __init__(self, out_features, **kwargs):
super().__init__(**kwargs)
self.out_features = out_features
def build(self, input_shape): # Create the state of the layer (weights)
self.w = tf.Variable(
tf.random.normal([input_shape[-1], self.out_features]), name='w')
self.b = tf.Variable(tf.zeros([self.out_features]), name='b')
def call(self, inputs): # Defines the computation from inputs to outputs
return tf.matmul(inputs, self.w) + self.b
# Create the instance of the layer
flexible_dense = FlexibleDense(out_features=3)
flexible_dense.variables
# Call it, with predictably random results
print("Model results:", flexible_dense(tf.constant([[2.0, 2.0, 2.0], [3.0, 3.0, 3.0]])))
flexible_dense.variables
try:
print("Model results:", flexible_dense(tf.constant([[2.0, 2.0, 2.0, 2.0]])))
except tf.errors.InvalidArgumentError as e:
print("Failed:", e)
class MySequentialModel(tf.keras.Model):
def __init__(self, name=None, **kwargs):
super().__init__(**kwargs)
self.dense_1 = FlexibleDense(out_features=3)
self.dense_2 = FlexibleDense(out_features=2)
def call(self, x):
x = self.dense_1(x)
return self.dense_2(x)
# You have made a Keras model!
my_sequential_model = MySequentialModel(name="the_model")
# Call it on a tensor, with random results
print("Model results:", my_sequential_model(tf.constant([[2.0, 2.0, 2.0]])))
my_sequential_model.variables
my_sequential_model.submodules
inputs = tf.keras.Input(shape=[3,])
x = FlexibleDense(3)(inputs)
x = FlexibleDense(2)(x)
my_functional_model = tf.keras.Model(inputs=inputs, outputs=x)
my_functional_model.summary()
my_functional_model(tf.constant([[2.0, 2.0, 2.0]]))
my_sequential_model.save("exname_of_file")
reconstructed_model = tf.keras.models.load_model("exname_of_file")
reconstructed_model(tf.constant([[2.0, 2.0, 2.0]]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction to modules, layers, and models
Step2: Defining models and layers in TensorFlow
Step3: Modules, and by extension layers, are deep-learning terminology for "objects": they have internal state and methods that use that state.
Step4: Here is an example of a two-layer linear model built out of modules.
Step5: Here is the complete model, which creates and applies two layer instances.
Step6: tf.Module instances will automatically and recursively collect any tf.Variable or tf.Module instances assigned to them. This lets you manage collections of tf.Modules with a single model instance, and save and load whole models.
Step7: Deferring variable creation
Step8: Because of this flexibility, TensorFlow layers often only need to specify the shape of their outputs, as in tf.keras.layers.Dense, rather than both the input and output size.
Step9: Checkpoints consist of two kinds of files: the data itself and an index file for metadata. The index file keeps track of what is actually saved and the numbering of checkpoints, while the checkpoint data contains the variable values and their attribute lookup paths. (A minimal sketch of this file layout follows this step list.)
Step10: Looking inside a checkpoint, you can confirm that the entire collection of variables is saved, sorted by the Python object that contains them.
Step11: During distributed (multi-machine) training they can be sharded, which is why they are numbered (e.g.
Step12: Note
Step13: The module you have made works exactly as before. Each unique signature passed into the function creates a separate graph. Check the Introduction to graphs and functions guide for details.
Step14: You can visualize the graph by tracing it within a TensorBoard summary.
Step15: Launch TensorBoard to view the resulting trace.
Step16: Creating a SavedModel
Step17: The saved_model.pb file is a <a>protocol buffer</a> describing the functional <code>tf.Graph</code>.
Step18: new_model, created by loading the saved model, is an internal TensorFlow user object without any of the class knowledge. It is not of type SequentialModule.
Step19: This new model works on the already-defined input signatures. You can't add more signatures to a model restored like this.
Step20: Thus, using SavedModel, you are able to save TensorFlow weights and graphs with tf.Module, and then load them again.
Step21: Keras layers have their own __call__ that does some bookkeeping described in the next section and then calls call(). You should notice no change in functionality.
Step22: The build step
Step23: At this point, the model has not been built, so there are no variables.
Step24: Calling the function allocates appropriately-sized variables.
Step25: Since build is only called once, inputs will be rejected if the input shape is not compatible with the layer's variables.
Step26: Keras layers have many more extra features including:
Step27: All the same features are available, including tracked variables and submodules.
Step28: Overriding tf.keras.Model is a very Pythonic approach to building TensorFlow models. If you are migrating models from other frameworks, this can be very straightforward.
Step29: The major difference here is that the input shape is specified up front as part of the functional construction process. The input_shape argument in this case does not have to be completely specified; you can leave some dimensions as None.
Step30: They can be reloaded easily.
Step31: Keras SavedModels also save metric, loss, and optimizer states.
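To make Step9 concrete, here is a minimal sketch of the file layout a checkpoint produces (the module, checkpoint prefix, and directory listing are illustrative assumptions, not part of the original task):
import os
import tensorflow as tf
toy = tf.Module()
toy.v = tf.Variable(3.0)
ckpt = tf.train.Checkpoint(model=toy)
ckpt.write("toy_checkpoint")  # hypothetical prefix
# Expect 'toy_checkpoint.index' (what was saved, and checkpoint numbering)
# plus 'toy_checkpoint.data-00000-of-00001' (the variable values).
print(sorted(f for f in os.listdir(".") if f.startswith("toy_checkpoint")))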
|
8,798
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import time
%matplotlib inline
from matplotlib.colors import ListedColormap
# Another messy looking function to make pretty plots of basketball courts
def visualize_court(log_reg_model, coord_type='cart', court_image = '../data/nba/nba_court.jpg'):
two_class_cmap = ListedColormap(['#FFAAAA', '#AAFFAA']) # light red for miss, light green for make
x_min, x_max = 0, 50 #width (feet) of NBA court
y_min, y_max = 0, 47 #length (feet) of NBA half-court
grid_step_size = 0.2
grid_x, grid_y = np.meshgrid(np.arange(x_min, x_max, grid_step_size), np.arange(y_min, y_max, grid_step_size))
features = np.c_[grid_x.ravel(), grid_y.ravel()]
# change coordinate system
if coord_type == 'polar':
hoop_location = np.array([25., 0.])
features -= hoop_location
dists = np.sqrt(np.sum(features**2, axis=1))
angles = np.arctan2(features[:,1], features[:,0])
features = np.hstack([dists[np.newaxis].T, angles[np.newaxis].T])
grid_predictions = log_reg_model.predict(features)
grid_predictions = grid_predictions.reshape(grid_x.shape)
fig, ax = plt.subplots()
court_image = plt.imread(court_image)
ax.imshow(court_image, interpolation='bilinear', origin='lower',extent=[x_min,x_max,y_min,y_max])
ax.imshow(grid_predictions, cmap=two_class_cmap, interpolation = 'nearest',
alpha = 0.60, origin='lower',extent=[x_min,x_max,y_min,y_max])
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.title( "Make / Miss Prediction Boundaries" )
plt.show()
### function for shuffling the data and labels
def shuffle_in_unison(features, labels):
rng_state = np.random.get_state()
np.random.shuffle(features)
np.random.set_state(rng_state)
np.random.shuffle(labels)
### calculate classification errors
# return a percentage: (number misclassified)/(total number of datapoints)
def calc_classification_error(predictions, class_labels):
n = predictions.size
num_of_errors = 0.
for idx in range(n):
if (predictions[idx] >= 0.5 and class_labels[idx]==0) or (predictions[idx] < 0.5 and class_labels[idx]==1):
num_of_errors += 1
return num_of_errors/n
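# An equivalent vectorized version (a sketch, not used below; it assumes
# predictions and class_labels can be converted to same-length NumPy arrays):
def calc_classification_error_vectorized(predictions, class_labels):
    predicted_classes = (np.asarray(predictions) >= 0.5).astype(int)
    return np.mean(predicted_classes != np.asarray(class_labels))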
# load a dataset of recent movies and their ratings across several websites
movie_data = pd.read_csv('../data/movie_ratings.csv')
# reduce it to just the ratings categories
movie_data = movie_data[['FILM','RottenTomatoes','RottenTomatoes_User','Metacritic','Metacritic_User','Fandango_Ratingvalue', 'IMDB']]
movie_data.head()
movie_data.describe()
from sklearn.linear_model import LogisticRegression
# set the random number generator for reproducability
np.random.seed(123)
# let's try to predict the IMDB rating from the others
features = movie_data[['RottenTomatoes','RottenTomatoes_User','Metacritic','Metacritic_User','Fandango_Ratingvalue']].to_numpy()
# create classes: more or less that 7/10 rating
labels = (movie_data['IMDB'] >= 7.).astype('int').tolist()
shuffle_in_unison(features, labels)
### Your Code Goes Here ###
# initialize and train a logistic regression model
# compute error on training data
model_LogReg = LogisticRegression()
model_LogReg.fit(features, labels)
predicted_labels = model_LogReg.predict(features)
train_error_rate = calc_classification_error(predicted_labels, labels)
###########################
print("Classification error on training set: %.2f%%" % (train_error_rate*100))
# compute the baseline error since the classes are imbalanced
print("Baseline Error: %.2f%%" % ((sum(labels)*100.)/len(labels)))
# perform z-score scaling
features_mu = np.mean(features, axis=0)
features_sigma = np.std(features, axis=0)
std_features = (features - features_mu)/features_sigma
# re-train model
lm = LogisticRegression()
lm.fit(std_features, labels)
### compute error on training data
predictions = lm.predict(std_features)
print("Classification error on training set: %.3f%%" % (calc_classification_error(predictions, labels)*100))
# compute the baseline error since the classes are imbalanced
print("Baseline Error: %.2f%%" % ((sum(labels)*100.)/len(labels)))
nba_shot_data = pd.read_csv('../data/nba/NBA_xy_features.csv')
nba_shot_data.head()
# split data into train and test
train_set_size = int(.80*len(nba_shot_data))
# .iloc keeps the split disjoint; the deprecated .ix slice was end-inclusive,
# so row train_set_size used to land in both the train and test sets
train_features = nba_shot_data.iloc[:train_set_size][['x_Coordinate','y_Coordinate']].to_numpy()
test_features = nba_shot_data.iloc[train_set_size:][['x_Coordinate','y_Coordinate']].to_numpy()
train_class_labels = nba_shot_data.iloc[:train_set_size][['shot_outcome']].to_numpy()
test_class_labels = nba_shot_data.iloc[train_set_size:][['shot_outcome']].to_numpy()
#Train logistic regression model
start_time = time.time()
lm.fit(train_features, np.ravel(train_class_labels))
end_time = time.time()
print("Training ended after %.2f seconds." % (end_time-start_time))
# compute the classification error on test data
predictions = lm.predict(test_features)
print("Classification Error on the Test Set: %.2f%%" % (calc_classification_error(predictions, np.array(test_class_labels)) * 100))
# compute the baseline error since the classes are imbalanced
print("Baseline Error: %.2f%%" % (np.sum(test_class_labels)/len(test_class_labels)*100))
# visualize the boundary on the basketball court
visualize_court(lm)
### Transform coordinate system
# radius coordinate: calculate distance from point to hoop
hoop_location = np.array([25.5, 0.])
train_features -= hoop_location
test_features -= hoop_location
train_dists = np.sqrt(np.sum(train_features**2, axis=1))
test_dists = np.sqrt(np.sum(test_features**2, axis=1))
# angle coordinate: use arctan2 function
train_angles = np.arctan2(train_features[:,1], train_features[:,0])
test_angles = np.arctan2(test_features[:,1], test_features[:,0])
# combine vectors into polar coordinates
polar_train_features = np.hstack([train_dists[np.newaxis].T, train_angles[np.newaxis].T])
polar_test_features = np.hstack([test_dists[np.newaxis].T, test_angles[np.newaxis].T])
pd.DataFrame(polar_train_features, columns=["Radius","Angle"]).head()
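# Quick sanity check of the transform (a sketch): a shot 10 feet directly in
# front of the hoop should map to radius 10 and angle pi/2.
check = np.array([25.5, 10.]) - hoop_location
print("radius: %.1f, angle: %.2f rad" % (np.sqrt(np.sum(check**2)),
                                         np.arctan2(check[1], check[0])))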
#Train model
start_time = time.time()
lm.fit(polar_train_features, np.ravel(train_class_labels))
end_time = time.time()
print("Training ended after %.2f seconds." % (end_time-start_time))
# compute the classification error on test data
predictions = lm.predict(polar_test_features)
print("Classification Error on the Test Set: %.2f%%" % (calc_classification_error(predictions, np.array(test_class_labels)) * 100))
# compute the baseline error since the classes are imbalanced
print("Baseline Error: %.2f%%" % (np.sum(test_class_labels)/len(test_class_labels)*100))
# visualize the boundary on the basketball court
visualize_court(lm, coord_type='polar')
from sklearn.linear_model import LinearRegression
# load (x,y) where y is the mystery data
x = np.arange(0, 30, .2)[np.newaxis].T
y = np.load(open('../data/mystery_data.npy','rb'))
### transformation goes here ###
x = np.cos(x)
################################
# initialize regression model
lm = LinearRegression()
lm.fit(x,y)
y_hat = lm.predict(x)
squared_error = np.sum((y - y_hat)**2)
if not np.isclose(squared_error, 0):
    print("The squared error should be zero! Yours is %.8f." % squared_error)
else:
    print("You found the secret transformation! Your squared error is %.8f." % squared_error)
# un-zip the paintings file
import zipfile
zipper = zipfile.ZipFile('../data/bob_ross/bob_ross_paintings.npy.zip')
zipper.extractall('../data/bob_ross/')
# load the 403 x 360,000 matrix
br_paintings = np.load(open('../data/bob_ross/bob_ross_paintings.npy','rb'))
print("Dataset size: %d x %d" % br_paintings.shape)
# subplot containing first image
ax1 = plt.subplot(1,2,1)
br_painting = br_paintings[70,:]
ax1.imshow(np.reshape(br_painting, (300, 400, 3)))
# subplot containing second image
ax2 = plt.subplot(1,2,2)
br_painting = br_paintings[33,:]
ax2.imshow(np.reshape(br_painting, (300, 400, 3)))
plt.show()
from sklearn.decomposition import PCA
pca = PCA(n_components=400)
start_time = time.time()
reduced_paintings = pca.fit_transform(br_paintings)
end_time = time.time()
print("Training took a total of %.2f seconds." % (end_time-start_time))
print("Preserved percentage of original variance: %.2f%%" % (pca.explained_variance_ratio_.sum() * 100))
print("Dataset is now of size: %d x %d" % reduced_paintings.shape)
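# Optional (a sketch): plot the cumulative explained variance to see how many
# components are actually needed -- useful when choosing n_components.
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('Number of components')
plt.ylabel('Cumulative explained variance')
plt.show()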
img_idx = 70
reconstructed_img = pca.inverse_transform(reduced_paintings[img_idx,:])
original_img = br_paintings[70,:]
# subplot for original image
ax1 = plt.subplot(1,2,1)
ax1.imshow(np.reshape(original_img, (300, 400, 3)))
ax1.set_title("Original Painting")
# subplot for reconstruction
ax2 = plt.subplot(1,2,2)
ax2.imshow(np.reshape(reconstructed_img, (300, 400, 3)))
ax2.set_title("Reconstruction")
plt.show()
# get the transformation matrix
transformation_mat = pca.components_ # This is the W^T matrix
# two components to show
comp1 = 13
comp2 = 350
# subplot
ax1 = plt.subplot(1,2,1)
filter1 = transformation_mat[comp1-1,:]
ax1.imshow(np.reshape(filter1, (300, 400, 3)))
ax1.set_title("%dth Principal Component"%(comp1))
# subplot
ax2 = plt.subplot(1,2,2)
filter2 = transformation_mat[comp2-1,:]
ax2.imshow(np.reshape(filter2, (300, 400, 3)))
ax2.set_title("%dth Principal Component"%(comp2))
plt.show()
# get the movie features
movie_features = movie_data[['RottenTomatoes','RottenTomatoes_User','Metacritic','Metacritic_User','Fandango_Ratingvalue']].to_numpy()
# perform standard scaling again but via SciKit-Learn
from sklearn.preprocessing import StandardScaler
z_scaler = StandardScaler()
movie_features = z_scaler.fit_transform(movie_features)
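# Sanity check (a sketch): StandardScaler reproduces the manual z-score used
# earlier, i.e. (x - column mean) / column standard deviation.
manual = (z_scaler.inverse_transform(movie_features) - z_scaler.mean_) / np.sqrt(z_scaler.var_)
print("matches manual z-score:", np.allclose(movie_features, manual))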
pca = PCA(n_components=2)
start_time = time.time()
movie_2d_proj = pca.fit_transform(movie_features)
end_time = time.time()
print("Training took a total of %.4f seconds." % (end_time-start_time))
print("Preserved percentage of original variance: %.2f%%" % (pca.explained_variance_ratio_.sum() * 100))
print("Dataset is now of size: %d x %d" % movie_2d_proj.shape)
labels = movie_data['FILM'].tolist()
classes = movie_data['IMDB'].tolist()
# color the points by IMDB ranking
labels_to_show = []
colors = []
for idx, c in enumerate(classes):
if c > 7.25:
colors.append('g')
if c > 8.:
labels_to_show.append(labels[idx])
else:
colors.append('r')
if c < 4.75:
labels_to_show.append(labels[idx])
# plot data
plt.scatter(movie_2d_proj[:, 0], movie_2d_proj[:, 1], marker = 'o', c = colors, s = 150, alpha = .6)
# add movie title annotations
for label, x, y in zip(labels, movie_2d_proj[:, 0].tolist(), movie_2d_proj[:, 1].tolist()):
if label not in labels_to_show:
continue
if x < 0:
text_x = -20
else:
text_x = 150
plt.annotate(label, xy = (x, y), xytext = (text_x, 40),
textcoords = 'offset points', ha = 'right', va = 'bottom',
arrowprops = dict(arrowstyle = '->', connectionstyle = 'arc3, rad=0'),
bbox = dict(boxstyle = 'round,pad=0.5', fc = 'b', alpha = 0.2))
plt.title('PCA Projection of Movies')
plt.show()
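# To interpret the two PCA axes (a sketch): each row of pca.components_
# weights the original five rating features.
feature_names = ['RottenTomatoes', 'RottenTomatoes_User', 'Metacritic',
                 'Metacritic_User', 'Fandango_Ratingvalue']
for i, component in enumerate(pca.components_):
    print("PC%d:" % (i + 1), dict(zip(feature_names, np.round(component, 2))))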
from sklearn.datasets import fetch_olivetti_faces
faces_dataset = fetch_olivetti_faces(shuffle=True)
faces = faces_dataset.data # 400 flattened 64x64 images
person_ids = faces_dataset.target # denotes the identity of person (40 total)
print("Dataset size: %d x %d" % faces.shape)
print("And the images look like this...")
plt.imshow(np.reshape(faces[200,:], (64, 64)), cmap='Greys_r')
plt.show()
?PCA
### Your code goes here ###
# train PCA model on 'faces'
from sklearn.decomposition import PCA
pca = PCA(n_components=100)
start_time = time.time()
faces_reduced = pca.fit_transform(faces)
end_time = time.time()
###########################
print("Training took a total of %.2f seconds." % (end_time-start_time))
print("Preserved percentage of original variance: %.2f%%" % (pca.explained_variance_ratio_.sum() * 100))
print("Dataset is now of size: %d x %d" % faces_reduced.shape)
### Your code goes here ###
# Use learned transformation matrix to project back to the original 4096-dimensional space
# Remember you need to use np.reshape()
###########################
img_idx = 70
reconstructed_img = pca.inverse_transform(faces_reduced[img_idx,:])
original_img = faces[70,:]
# subplot for original image
ax1 = plt.subplot(1,2,1)
ax1.imshow(np.reshape(original_img, (64, 64)), cmap='Greys_r')
ax1.set_title("Original Image")
# subplot for reconstruction
ax2 = plt.subplot(1,2,2)
ax2.imshow(np.reshape(reconstructed_img, (64, 64)), cmap='Greys_r')
ax2.set_title("Reconstruction")
plt.show()
### Your code goes here ###
# Now visualize one of the principal components
# Again, remember you need to use np.reshape()
###########################
transformation_mat = pca.components_
# two components to show
comp1 = 5
comp2 = 90
# subplot
ax1 = plt.subplot(1,2,1)
filter1 = transformation_mat[comp1,:]
ax1.imshow(np.reshape(filter1, (64, 64)), cmap='Greys_r')
ax1.set_title("%dth Principal Component"%(comp1))
# subplot
ax2 = plt.subplot(1,2,2)
filter2 = transformation_mat[comp2,:]
ax2.imshow(np.reshape(filter2, (64, 64)), cmap='Greys_r')
ax2.set_title("%dth Principal Component"%(comp2))
plt.show()
### Your code goes here ###
# Train another PCA model to project the data into two dimensions
# Bonus: color the scatter plot according to the person_ids to see if any structure can be seen
# Run PCA for 2 components
# Generate plot
###########################
pca = PCA(n_components=2)
start_time = time.time()
faces_2d_proj = pca.fit_transform(faces)
end_time = time.time()
print("Training took a total of %.2f seconds." % (end_time-start_time))
print("Preserved percentage of original variance: %.2f%%" % (pca.explained_variance_ratio_.sum() * 100))
print("Dataset is now of size: %d x %d" % faces_2d_proj.shape)
# Generate plot
# color the points by the person ids
colors = [plt.cm.Set1((c+1)/40.) for c in person_ids]
# plot data
plt.scatter(faces_2d_proj[:, 0], faces_2d_proj[:, 1], marker = 'o', c = colors, s = 175, alpha = .6)
plt.title('2D Projection of Faces Dataset')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: I've created a function that we'll use later to create visualizations. It is a bit messy and not essential to the material so don't worry about understanding it. I'll be happy to explain it to anyone interested during a break or after the session.
Step2: We also need functions for shuffling the data and calculating classification errors.
Step3: 1. Warm-up
Step4: Logistic Regression Review
Step5: 2. Feature Transformations
Step6: Standard Normal scaling is a common and usually default first step, especially when you know the data in measured in different units.
Step7: And let's run logistic regression on the data just as we did before...
Step8: Now let's transform the Cartesian coordinates into polar coordinates
Step9: <span style="color
Step10: 3. Dimensionality Reduction
Step11: and then visualize two of the images...
Step12: 3.2 Principal Component Analysis
Step13: Let's visualize two of the paintings...
Step14: We can also visualize the transformation matrix $\mathbf{W}^{T}$. Its rows act as 'filters' or 'feature detectors'...
Step15: 3.3 PCA for Visualization
Step16: <span style="color
Step17: This dataset contains 400 64x64 pixel images of 40 people each exhibiting 10 facial expressions. The images are in gray-scale, not color, and therefore flattened vectors contain 4096 dimensions.
Step18: <span style="color
Step19: Your output should look something like what's below (although could be a different face)
Step20: Your output should look something like what's below (although could have differently ranked components)
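As a companion to the logistic-regression review above, here is a minimal sketch (the helper name is illustrative, and it assumes a fitted scikit-learn LogisticRegression) of what predict_proba computes under the hood:
import numpy as np
def predict_proba_manually(model, X):
    # sigmoid(X.w + b) reproduces model.predict_proba(X)[:, 1]
    scores = X @ model.coef_.T + model.intercept_
    return 1.0 / (1.0 + np.exp(-scores))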
|
8,799
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
import numpy as np
import tensorflow as tf
from rl_coach.architectures.tensorflow_components.heads.head import Head
from rl_coach.architectures.head_parameters import HeadParameters
from rl_coach.base_parameters import AgentParameters
from rl_coach.core_types import QActionStateValue
from rl_coach.spaces import SpacesDefinition
class CategoricalQHeadParameters(HeadParameters):
def __init__(self, activation_function: str ='relu', name: str='categorical_q_head_params'):
super().__init__(parameterized_class=CategoricalQHead, activation_function=activation_function, name=name)
class CategoricalQHead(Head):
def __init__(self, agent_parameters: AgentParameters, spaces: SpacesDefinition, network_name: str,
head_idx: int = 0, loss_weight: float = 1., is_local: bool = True, activation_function: str ='relu'):
super().__init__(agent_parameters, spaces, network_name, head_idx, loss_weight, is_local, activation_function)
self.name = 'categorical_dqn_head'
self.num_actions = len(self.spaces.action.actions)
self.num_atoms = agent_parameters.algorithm.atoms
self.return_type = QActionStateValue
def _build_module(self, input_layer):
self.actions = tf.placeholder(tf.int32, [None], name="actions")
self.input = [self.actions]
values_distribution = tf.layers.dense(input_layer, self.num_actions * self.num_atoms, name='output')
values_distribution = tf.reshape(values_distribution, (tf.shape(values_distribution)[0], self.num_actions,
self.num_atoms))
# softmax on atoms dimension
self.output = tf.nn.softmax(values_distribution)
# calculate cross entropy loss
self.distributions = tf.placeholder(tf.float32, shape=(None, self.num_actions, self.num_atoms),
name="distributions")
self.target = self.distributions
self.loss = tf.nn.softmax_cross_entropy_with_logits(labels=self.target, logits=values_distribution)
tf.losses.add_loss(self.loss)
from rl_coach.agents.dqn_agent import DQNNetworkParameters
class CategoricalDQNNetworkParameters(DQNNetworkParameters):
def __init__(self):
super().__init__()
self.heads_parameters = [CategoricalQHeadParameters()]
from rl_coach.agents.dqn_agent import DQNAlgorithmParameters
from rl_coach.exploration_policies.e_greedy import EGreedyParameters
from rl_coach.schedules import LinearSchedule
class CategoricalDQNAlgorithmParameters(DQNAlgorithmParameters):
def __init__(self):
super().__init__()
self.v_min = -10.0
self.v_max = 10.0
self.atoms = 51
class CategoricalDQNExplorationParameters(EGreedyParameters):
def __init__(self):
super().__init__()
self.epsilon_schedule = LinearSchedule(1, 0.01, 1000000)
self.evaluation_epsilon = 0.001
from rl_coach.agents.value_optimization_agent import ValueOptimizationAgent
from rl_coach.base_parameters import AgentParameters
from rl_coach.core_types import StateType
from rl_coach.memories.non_episodic.experience_replay import ExperienceReplayParameters
class CategoricalDQNAgentParameters(AgentParameters):
def __init__(self):
super().__init__(algorithm=CategoricalDQNAlgorithmParameters(),
exploration=CategoricalDQNExplorationParameters(),
memory=ExperienceReplayParameters(),
networks={"main": CategoricalDQNNetworkParameters()})
@property
def path(self):
return 'agents.categorical_dqn_agent:CategoricalDQNAgent'
from typing import Union
# Categorical Deep Q Network - https://arxiv.org/pdf/1707.06887.pdf
class CategoricalDQNAgent(ValueOptimizationAgent):
def __init__(self, agent_parameters, parent: Union['LevelManager', 'CompositeAgent']=None):
super().__init__(agent_parameters, parent)
self.z_values = np.linspace(self.ap.algorithm.v_min, self.ap.algorithm.v_max, self.ap.algorithm.atoms)
def distribution_prediction_to_q_values(self, prediction):
return np.dot(prediction, self.z_values)
# prediction's format is (batch,actions,atoms)
def get_all_q_values_for_states(self, states: StateType):
prediction = self.get_prediction(states)
return self.distribution_prediction_to_q_values(prediction)
def learn_from_batch(self, batch):
network_keys = self.ap.network_wrappers['main'].input_embedders_parameters.keys()
# for the action we actually took, the error is calculated by the atoms distribution
# for all other actions, the error is 0
distributed_q_st_plus_1, TD_targets = self.networks['main'].parallel_prediction([
(self.networks['main'].target_network, batch.next_states(network_keys)),
(self.networks['main'].online_network, batch.states(network_keys))
])
# only update the action that we have actually done in this transition
target_actions = np.argmax(self.distribution_prediction_to_q_values(distributed_q_st_plus_1), axis=1)
m = np.zeros((self.ap.network_wrappers['main'].batch_size, self.z_values.size))
batches = np.arange(self.ap.network_wrappers['main'].batch_size)
for j in range(self.z_values.size):
tzj = np.fmax(np.fmin(batch.rewards() +
(1.0 - batch.game_overs()) * self.ap.algorithm.discount * self.z_values[j],
self.z_values[self.z_values.size - 1]),
self.z_values[0])
bj = (tzj - self.z_values[0])/(self.z_values[1] - self.z_values[0])
u = (np.ceil(bj)).astype(int)
l = (np.floor(bj)).astype(int)
m[batches, l] = m[batches, l] + (distributed_q_st_plus_1[batches, target_actions, j] * (u - bj))
m[batches, u] = m[batches, u] + (distributed_q_st_plus_1[batches, target_actions, j] * (bj - l))
# total_loss = cross entropy between actual result above and predicted result for the given action
TD_targets[batches, batch.actions()] = m
result = self.networks['main'].train_and_sync_networks(batch.states(network_keys), TD_targets)
total_loss, losses, unclipped_grads = result[:3]
return total_loss, losses, unclipped_grads
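# Standalone sketch of the C51 projection used in learn_from_batch above,
# showing that the projected distribution conserves probability mass (assumes
# one transition and a uniform predicted next-state distribution):
import numpy as np
z = np.linspace(-10., 10., 51)
p_next = np.full(z.size, 1. / z.size)  # P(z_j | s', a*)
reward, done, gamma = 1.0, 0.0, 0.99
m = np.zeros(z.size)
for j in range(z.size):
    tzj = np.clip(reward + (1.0 - done) * gamma * z[j], z[0], z[-1])
    bj = (tzj - z[0]) / (z[1] - z[0])
    l, u = int(np.floor(bj)), int(np.ceil(bj))
    m[l] += p_next[j] * (u - bj)  # note: if bj lands exactly on an atom
    m[u] += p_next[j] * (bj - l)  # (u == l), both weights vanish -- the same
                                  # edge case exists in the class above
print("projected mass sums to", m.sum())  # ~1.0 up to float error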
from rl_coach.agents.categorical_dqn_agent import CategoricalDQNAgentParameters
agent_params = CategoricalDQNAgentParameters()
agent_params.network_wrappers['main'].learning_rate = 0.00025
from rl_coach.environments.gym_environment import Atari, atari_deterministic_v4
env_params = Atari(level='BreakoutDeterministic-v4')
from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager
from rl_coach.base_parameters import VisualizationParameters
from rl_coach.environments.gym_environment import atari_schedule
graph_manager = BasicRLGraphManager(agent_params=agent_params, env_params=env_params,
schedule_params=atari_schedule, vis_params=VisualizationParameters())
graph_manager.visualization_parameters.render = True
# let the adventure begin
graph_manager.improve()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's define a class - CategoricalQHead class. Each class in Coach has a complementary Parameters class which defines its constructor parameters. So we will additionally define the CategoricalQHeadParameters class. The network structure should be defined in the _build_module function, which gets the previous layer output as an argument. In this function there are several variables that should be defined
Step2: The Agent
Step3: Next we'll define the algorithm parameters, which are the same as the DQN algorithm parameters, with the addition of the Categorical DQN specific v_min, v_max and number of atoms (a small illustration of the resulting atom grid follows this list).
Step4: Now let's define the agent parameters class which contains all the parameters to be used by the agent - the network, algorithm and exploration parameters that we defined above, and also the parameters of the memory module to be used, which is the default experience replay buffer in this case.
Step5: The last step is to define the agent itself - CategoricalDQNAgent - which is a type of value optimization agent so it will inherit the ValueOptimizationAgent class. It could have also inherited DQNAgent, which would result in the same functionality. Our agent will implement the learn_from_batch function which updates the agent's networks according to an input batch of transitions.
Step6: Some important things to notice here
Step7: Now, let's define the environment parameters. We will use the default Atari parameters (frame skip of 4, taking the max over subsequent frames, etc.), and we will select the 'Breakout' game level.
Step8: Connecting all the dots together - we'll define a graph manager with the Categorical DQN agent parameters, the Atari environment parameters, and the scheduling and visualization parameters
Step9: Running the Preset
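To illustrate Step3's v_min/v_max/atoms parameters (a sketch; the values mirror the defaults set in the code above):
import numpy as np
z_values = np.linspace(-10.0, 10.0, 51)
print("atom spacing:", z_values[1] - z_values[0])  # 0.4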
|