| text_prompt (stringlengths 168–30.3k) | code_prompt (stringlengths 67–124k) |
|---|---|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get started with TensorBoard
Step2: Using the MNIST dataset as the example, normalize the data and write a function that creates a simple Keras model for classifying the images into 10 classes.
Step3: Using TensorBoard with Keras Model.fit()
Step4: Start TensorBoard through the command line or within a notebook experience. The two interfaces are generally the same. In notebooks, use the %tensorboard line magic. On the command line, run the same command without "%".
Step5: <!-- <img class="tfo-display-only-on-site" src="https
Step6: The training code follows the advanced quickstart tutorial, but shows how to log metrics to TensorBoard. Choose loss and optimizer
Step7: Create stateful metrics that can be used to accumulate values during training and logged at any point
Step8: Define the training and test functions
Step9: Set up summary writers to write the summaries to disk in a different logs directory
Step10: Start training. Use tf.summary.scalar() to log metrics (loss and accuracy) during training/testing within the scope of the summary writers to write the summaries to disk. You have control over which metrics to log and how often to do it. Other tf.summary functions enable logging other types of data.
Step11: Open TensorBoard again, this time pointing it at the new log directory. We could have also started TensorBoard to monitor training while it progresses.
Step12: <!-- <img class="tfo-display-only-on-site" src="https
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Load the TensorBoard notebook extension
%load_ext tensorboard
import tensorflow as tf
import datetime
# Clear any logs from previous runs
!rm -rf ./logs/
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model = create_model()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback])
%tensorboard --logdir logs/fit
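# Added note (sketch, not in the original notebook): outside a notebook, the equivalent of the
# %tensorboard magic above is the same command without the "%", e.g. run `tensorboard --logdir logs/fit`
# in a terminal and open the printed URL (by default http://localhost:6006, assuming that port is free).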
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
train_dataset = train_dataset.shuffle(60000).batch(64)
test_dataset = test_dataset.batch(64)
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()
# Define our metrics
train_loss = tf.keras.metrics.Mean('train_loss', dtype=tf.float32)
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy('train_accuracy')
test_loss = tf.keras.metrics.Mean('test_loss', dtype=tf.float32)
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy('test_accuracy')
def train_step(model, optimizer, x_train, y_train):
with tf.GradientTape() as tape:
predictions = model(x_train, training=True)
loss = loss_object(y_train, predictions)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
train_loss(loss)
train_accuracy(y_train, predictions)
def test_step(model, x_test, y_test):
predictions = model(x_test)
loss = loss_object(y_test, predictions)
test_loss(loss)
test_accuracy(y_test, predictions)
current_time = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
train_log_dir = 'logs/gradient_tape/' + current_time + '/train'
test_log_dir = 'logs/gradient_tape/' + current_time + '/test'
train_summary_writer = tf.summary.create_file_writer(train_log_dir)
test_summary_writer = tf.summary.create_file_writer(test_log_dir)
model = create_model() # reset our model
EPOCHS = 5
for epoch in range(EPOCHS):
for (x_train, y_train) in train_dataset:
train_step(model, optimizer, x_train, y_train)
with train_summary_writer.as_default():
tf.summary.scalar('loss', train_loss.result(), step=epoch)
tf.summary.scalar('accuracy', train_accuracy.result(), step=epoch)
for (x_test, y_test) in test_dataset:
test_step(model, x_test, y_test)
with test_summary_writer.as_default():
tf.summary.scalar('loss', test_loss.result(), step=epoch)
tf.summary.scalar('accuracy', test_accuracy.result(), step=epoch)
template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
print (template.format(epoch+1,
train_loss.result(),
train_accuracy.result()*100,
test_loss.result(),
test_accuracy.result()*100))
# Reset metrics every epoch
train_loss.reset_states()
test_loss.reset_states()
train_accuracy.reset_states()
test_accuracy.reset_states()
%tensorboard --logdir logs/gradient_tape
!tensorboard dev upload \
--logdir logs/fit \
--name "(optional) My latest experiment" \
--description "(optional) Simple comparison of several hyperparameters" \
--one_shot
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we'll load the text file and convert it into integers for our network to use.
Step3: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Step4: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size x num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window to the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will carry through from one batch to the next.
Step5: Hyperparameters
Step6: Write out the graph for TensorBoard
Step7: Training
Step8: Sampling
|
<ASSISTANT_TASK:>
Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
def split_data(chars, batch_size, num_steps, split_frac=0.9):
'''Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
batch_size: Number of examples in each batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
'''
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
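# Quick check (added sketch, not in the original notebook): the first batch from get_batch
# should be a sliding window of shape batch_size x num_steps, here 10 x 200.
demo_x, demo_y = next(get_batch([train_x, train_y], 200))
print(demo_x.shape, demo_y.shape)  # expected: (10, 200) (10, 200)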
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN outputs to a softmax layer and calculate the cost
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
preds = tf.nn.softmax(logits, name='predictions')
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 1
learning_rate = 0.001
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
file_writer = tf.summary.FileWriter('./logs/1', sess.graph)
!mkdir -p checkpoints/anna
epochs = 1
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
prime = "Far"
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sheet music consists of notes. A note has several properties including pitch, timbre, loudness, and duration.
Step2: A pitch class (Wikipedia) is the set of all notes that are an integer number of octaves apart. For example, the set of all Cs, {..., C1, C2, ...} is one pitch class, and the set of all Ds, {..., D1, D2, ...} is another pitch class. Here is the pitch class for C
|
<ASSISTANT_TASK:>
Python Code:
ipd.SVG("https://upload.wikimedia.org/wikipedia/commons/2/27/MozartExcerptK331.svg")
ipd.YouTubeVideo('dP9KWQ8hAYk')
ipd.Image("https://upload.wikimedia.org/wikipedia/commons/a/a5/Perfect_octave_on_C.png")
ipd.Image("https://upload.wikimedia.org/wikipedia/commons/thumb/9/98/Pitch_class_on_C.png/187px-Pitch_class_on_C.png")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Benchmark vs ePSproc_wignerD.m (Matlab)
Step2: > For conjugate version, differences on order of 1e-15. OK.
Step3: > Compare with Matlab code - massively faster...
Step4: Note slight difference in overall fn. evals (probably an extraneous extra loop in Matlab code), but won't make a significant difference here.
Step5: Benchmark vs. ePSproc_3j.m (Matlab, Zare defns.)
Step6: > Differences on order 1e-16. OK.
Step7: > Compare with Matlab code - massively faster...
|
<ASSISTANT_TASK:>
Python Code:
# Imports
import numpy as np
import spherical_functions as sf
import quaternion
import matplotlib.pyplot as plt
#%% Basic test with Euler angles
alpha, beta, gamma = 0.1, 0.2, 0.3
ell,mp,m = 3,2,1
wD_euler = sf.Wigner_D_element(alpha, beta, gamma, ell, mp, m)
print(wD_euler)
#%% With quaternion
R = np.quaternion(1,2,3,4).normalized()
wD_quat = sf.Wigner_D_element(R, ell, mp, m)
print(wD_quat)
#%% With quaternion defined by Euler angles
R_euler = quaternion.from_euler_angles(alpha, beta, gamma)
wD_eQuat = sf.Wigner_D_element(R_euler, ell, mp, m)
print(wD_eQuat)
# Check
print(wD_eQuat - wD_euler)
#%% WignerD bench
# Adapted directly from Matlab code
# Set QNs for calculation, (l,m,mp)
Lmax = 6
QNs = []
for l in np.arange(0, Lmax+1):
for m in np.arange(-l, l+1):
for mp in np.arange(-l, l+1):
QNs.append([l, m, mp])
QNs = np.array(QNs)
# Set a range of Euler angles for testing
Nangs = 1000
pRot = np.linspace(0,180,Nangs)
tRot = np.linspace(0,90,Nangs)
cRot = np.linspace(0,180,Nangs)
eAngs = np.array([pRot, tRot, cRot,])*np.pi/180
# Convert to quaternions
R = quaternion.from_euler_angles(pRot*np.pi/180, tRot*np.pi/180, cRot*np.pi/180)
#****** wignerD vectorised for QN OR angles
wD_QNs = []
for n in np.arange(0, QNs.shape[0]):
wD_QNs.append([QNs[n,:], R, sf.Wigner_D_element(R, QNs[n,0], QNs[n,1], QNs[n,2])])
#%% Compare with Matlab results
from scipy.io import loadmat
x = loadmat(r'wignerD_bench_090819.mat')
wD_QNsMatlab = x['wD_QNs']
def wD_sortf(wD_QNs, conjPy = True):
# Sort to array for comparison
wD_sort=[]
if conjPy:
# With conjugate on Python results
[wD_sort.extend(np.c_[np.tile(wD[0],(wD[2].shape[0],1)), wD[2].conj()]) for wD in wD_QNs]
else:
# For QN looped case
[wD_sort.extend(np.c_[np.tile(wD[0],(wD[2].shape[0],1)), wD[2]]) for wD in wD_QNs]
# For angle looped case
# [wD_sort.extend(np.c_[wD[0], wD[2]]) for wD in wD_QNs]
return np.asarray(wD_sort)
conjFlag = True
wD_sort = wD_sortf(wD_QNs, conjPy = conjFlag)
# Subtract and plot
wD_test = wD_sort - wD_QNsMatlab[:,0:4]
# Set plot params
pRange = np.arange(1,10000)
sPlots = 4
colPlot = 3 # Set as index into python results
fig, ax = plt.subplots(2, 1, sharex='col', dpi=180)
ax[0].plot(np.c_[np.real(wD_sort[pRange,colPlot]), np.imag(wD_sort[pRange,colPlot])])
ax[0].set_title('Moble (Wikipedia) Wigner D, conjFlag = ' + str(conjFlag))
ax[1].plot(np.c_[np.real(wD_QNsMatlab[pRange,colPlot]), np.imag(wD_QNsMatlab[pRange,colPlot])])
ax[1].set_title('ePSproc (Zare) Wigner D')
fig, ax = plt.subplots(2, 1, sharex='col', dpi=180)
ax[0].plot(np.c_[np.real(wD_test[pRange,colPlot]), np.imag(wD_test[pRange,colPlot])])
ax[0].set_title('Difference')
#plt.subplot(sPlots,1,4)
ax[1].plot(np.abs(wD_test[pRange,:]))
ax[1].set_title('Abs Difference')
plt.show()
# Check values
wD_test.max()
# Try timing as wall clock for more flexibility...
# https://www.techbeamers.com/python-time-functions-usage-examples/
# Get similar results for time.time(), time.clock() and time.perf_counter()
import time
tS = time.perf_counter()
tBench = []
fEvals = []
# Set QNs for calculation, (l,m,mp)
for Lmax in np.arange(0,11):
QNs = []
for l in np.arange(0, Lmax+1):
for m in np.arange(-l, l+1):
for mp in np.arange(-l, l+1):
QNs.append([l, m, mp])
QNs = np.array(QNs)
tQN = time.perf_counter()
#****** wignerD vectorised for QN OR angles
wD_QNs = []
for n in np.arange(0, QNs.shape[0]):
wD_QNs.append([QNs[n,:], R, sf.Wigner_D_element(R, QNs[n,0], QNs[n,1], QNs[n,2])])
tLoop = time.perf_counter()
wD_QNs = []
[wD_QNs.append([QN, R, sf.Wigner_D_element(R, QN[0], QN[1], QN[2])]) for QN in QNs]
tBench.append([tQN, tLoop, time.perf_counter()])
fEvals.append([Lmax, len(wD_QNs)*len(R)])
tBench = np.asarray(tBench)
tDeltas = np.c_[tBench[:,1]-tBench[:,0], tBench[:,2]-tBench[:,1]]
# print(tDeltas)
# Plot
fEvals = np.asarray(fEvals)
plt.figure(dpi = 180)
plt.plot(fEvals[:,1], tDeltas)
plt.xlabel('Array size (func evals)')
plt.ylabel('t/s')
plt.show()
# Compare with Matlab benchmarks (not even funny...!)
x = loadmat(r'wignerD_bench_t_090819.mat')
wDBenchMatlab = x['wDbench']
plt.figure(dpi = 180)
# plt.plot(fEvals[:,1],np.c_[tDeltas[:,0], wDBenchMatlab[1:,3]])
plt.plot(fEvals[:,1], tDeltas[:,0])
plt.plot(wDBenchMatlab[1:,2], wDBenchMatlab[1:,3])
plt.xlabel('Array size (func evals)')
plt.ylabel('t/s')
plt.show()
print('Check fEvals for consistency:')
print(wDBenchMatlab[1:,2] - fEvals[:,1])
#%% Basic tests
# See https://github.com/moble/spherical_functions/blob/master/Wigner3j.py
# Syntax Wigner3j(j_1, j_2, j_3, m_1, m_2, m_3)
# Integer values only, and single values only?
print(sf.Wigner3j(2, 6, 4, 0, 0, 0))
print(sf.Wigner3j(2, 6, 4, 0, 0, 1))
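# Added note (sketch): the 3j symbol vanishes unless m1 + m2 + m3 == 0 (and the triangle rule
# |j1 - j2| <= j3 <= j1 + j2 holds), which is why the second value printed above is zero.
print(sf.Wigner3j(2, 6, 4, 1, -1, 0))  # m's sum to zero, so this one is generally non-zero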
#%% Test set of values
Lmax = 6
QNs = []
for l in np.arange(0, Lmax+1):
for lp in np.arange(0, Lmax+1):
for m in np.arange(-l, l+1):
for mp in np.arange(-lp, lp+1):
for L in np.arange(0, l+lp+1):
M = -(m+mp)
QNs.append([l, lp, L, m, mp, M])
QNs = np.array(QNs)
# Test vector compatibility - NOPE... but pretty fast anyway
# test = sf.Wigner3j(QNs[:,0], QNs[:,1], QNs[:,2], QNs[:,3], QNs[:,4], QNs[:,5])
# LOOP OVER QNs and calculate
w3j_QNs = []
for n in np.arange(0, QNs.shape[0]):
w3j_QNs.append([QNs[n,:], sf.Wigner3j(QNs[n,0], QNs[n,1], QNs[n,2], QNs[n,3], QNs[n,4], QNs[n,5])])
#%% Test vs. Matlab reference
from scipy.io import loadmat
x = loadmat(r'wigner3j_bench_L6_130819.mat')
w3j_QNsMatlab = x['QNs_3j']
# Sort to array for comparison - WORKS, but UGLY. Should be a better way?
w3j_sort=[]
[w3j_sort.append(np.r_[wD[0], wD[1]]) for wD in w3j_QNs]
w3j_sort = np.array(w3j_sort)
# Subtract and plot
w3j_test = w3j_sort - w3j_QNsMatlab[:,0:7]
# Set plot params
pRange = np.arange(1,1000)
sPlots = 3
colPlot = 6 # Set as index into python results
fig, ax = plt.subplots(sPlots, 1, sharex='col', dpi=180)
ax[0].plot(w3j_sort[pRange,colPlot])
ax[0].set_title('Moble (Wikipedia) Wigner 3j')
ax[1].plot(w3j_QNsMatlab[pRange,colPlot])
ax[1].set_title('ePSproc (Zare) Wigner 3j')
ax[2].plot(w3j_test[pRange,colPlot])
ax[2].set_title('Difference')
# plt.subplot(sPlots,1,4)
# plt.plot(np.abs(w3j_test))
# plt.title('Abs Difference')
plt.show()
# Check values
w3j_test.max()
# Try timing as wall clock for more flexibility...
# https://www.techbeamers.com/python-time-functions-usage-examples/
# Get similar results for time.time(), time.clock() and time.perf_counter()
import time
tS = time.perf_counter()
tBench = []
fEvals = []
# Set QNs for calculation, (l,m,mp)
for Lmax in np.arange(0,11):
QNs = []
for l in np.arange(0, Lmax+1):
for lp in np.arange(0, Lmax+1):
for m in np.arange(-l, l+1):
for mp in np.arange(-lp, lp+1):
for L in np.arange(0, l+lp+1):
M = -(m+mp)
QNs.append([l, lp, L, m, mp, M])
QNs = np.array(QNs)
tQN = time.perf_counter()
#****** LOOP OVER QNs and calculate
w3j_QNs = []
for n in np.arange(0, QNs.shape[0]):
w3j_QNs.append([QNs[n,:], sf.Wigner3j(QNs[n,0], QNs[n,1], QNs[n,2], QNs[n,3], QNs[n,4], QNs[n,5])])
tLoop = time.perf_counter()
tBench.append([tQN, tLoop])
fEvals.append([Lmax, len(QNs)])
tBench = np.asarray(tBench)
# print(tBench)
tDeltas = np.c_[tBench[:,1]-tBench[:,0]]
# print(tDeltas)
# Plot
fEvals = np.asarray(fEvals)
plt.figure(dpi = 180)
plt.plot(fEvals[:,1], tDeltas)
plt.xlabel('Array size (func evals)')
plt.ylabel('t/s')
plt.show()
# Compare with Matlab benchmarks (not even funny...!)
x = loadmat(r'wigner3j_bench_t_090819.mat')
w3jBenchMatlab = x['w3jbench']
plt.figure(dpi = 180)
plt.plot(fEvals[:,1], tDeltas)
plt.plot(w3jBenchMatlab[1:,1], w3jBenchMatlab[1:,2])
plt.xlabel('Array size')
plt.ylabel('t/s')
plt.show()
print('Check fEvals for consistency:')
print(w3jBenchMatlab[1:,1] - fEvals[:,1])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If the slope is large, then that feature is more important. If the slope is zero, then the feature is not important and the prediction does not depend on it.
Step2: Boston dataset
|
<ASSISTANT_TASK:>
Python Code:
xs = np.array([1, 2, 3, 4, 5, 6])
ys = np.array([5, 4, 6, 5, 6, 7])
plt.scatter(xs, ys)
plt.show()
from sklearn.linear_model import LinearRegression
#we always assume that x is a 2d array of datapoints and features and y is a 1D array of outputs
xs = xs.reshape((6, 1)) # 6 datapoints and 1 feature
clf = LinearRegression() # classifier
clf.fit(xs, ys) # fit the input and output
xp = np.array([[0], [3.5], [10]])
print(xp.shape)
yp = clf.predict(xp) # predict the output for the input data
plt.plot(xp, yp , "b", marker = "o")
plt.plot(xs, ys, 'g^')
clf.coef_, clf.intercept_ # coefficient is m (an array because of multiple terms m1x1 + m2x2 + m3x3) and intercept is b
xs2 = np.array([1, 2, 3, 4, 5, 6, 3, 5, 6, 6, 1, 8])
xs2 = xs2.reshape((6, 2))
print(xs2.shape)
ys2 = np.array([5, 4, 6, 5, 6, 7])
clf.fit(xs2, ys2)
xp2 = np.array([0, 2])
xp2 = xp2.reshape((1, 2))
print(xp2)
yp2 = clf.predict(xp2)
yp2
from sklearn import datasets
import pandas as pd
boston = datasets.load_boston()
print(boston.data.shape, boston.target.shape)
boston.feature_names
df = pd.DataFrame(boston.data)
df.columns = boston.feature_names
df.head()
boston.DESCR
df.describe()
clf2 = LinearRegression(normalize=True)
clf2.fit(boston.data, boston.target)
clf2.coef_, clf2.intercept_
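# Added sketch (assumption, not in the original notebook): per Step 1, a larger |slope| means a more
# important feature. As a rough importance proxy, scale each coefficient by its feature's standard
# deviation so the magnitudes are comparable across features.
importance = pd.Series(np.abs(clf2.coef_) * boston.data.std(axis=0),
                       index=boston.feature_names).sort_values(ascending=False)
print(importance)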
df.describe()
clf2.score(boston.data, boston.target)
from sklearn.model_selection import train_test_split # Now we're splitting the data using probability (we can also shuffle the data and then split simply)
x_train, x_test, y_train, y_test = train_test_split(boston.data, boston.target, test_size = 0.20, random_state = 2) # random numbers will remain the same
x_train.shape, x_test.shape
clf2.fit(x_train, y_train)
clf2.score(x_test, y_test), clf2.score(x_train, y_train)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Inserts for Jupyter
Step3: Import data
Step4: Main
Step5: Make use of IPython stuff
Step6: For more nested json's or dictionaries, it's best to use something interactive like RenderJSON.
Step8: Use slides, and decouple declaration from run (if you're not using "Hide codecell inputs").
Step9: Use tqdm for every run that takes more than a couple of seconds and can be tracked by some iterator.
|
<ASSISTANT_TASK:>
Python Code:
# BASE ------------------------------------
from datetime import datetime as dt
nb_start = dt.now()
# Be mindful when you have this activated.
# import warnings
# warnings.filterwarnings('ignore')
import json
from pathlib import Path
from time import sleep
# Display libs
from IPython.display import display, HTML
from tqdm import tqdm, tqdm_notebook
tqdm.pandas()
SEED = 24
%%time
# ETL ------------------------------------
import numpy as np
import pandas as pd
# VIZ ------------------------------------
import matplotlib.cm as cm
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import plotly.graph_objs as go
import plotly.figure_factory as ff
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
import plotly.express as px
import plotly.io as pio
from plotly.tools import mpl_to_plotly
# NETWORK ANALYSIS ------------------------------------
import networkx as nx
import community as community_louvain
# https://stackoverflow.com/a/37124230
import uuid
from IPython.display import display_javascript, display_html, display
import json
class RenderJSON(object):
def __init__(self, json_data):
if isinstance(json_data, dict):
self.json_str = json.dumps(json_data)
else:
self.json_str = json_data
self.uuid = str(uuid.uuid4())
def _ipython_display_(self):
display_html('<div id="{}" style="height: 100%; width:100%;"></div>'.format(self.uuid), raw=True)
display_javascript("""
require(["https://rawgit.com/caldwell/renderjson/master/renderjson.js"], function() {
document.getElementById('%s').appendChild(renderjson(%s))
});
""" % (self.uuid, self.json_str), raw=True)
%%javascript
/*Increase timeout to load properly*/
var rto = 120;
console.log('[Custom]: Increase require() timeout to', rto, 'seconds.');
window.requirejs.config({waitSeconds: rto});
%%html
<style>
/* font for TODO */
@import url('https://fonts.googleapis.com/css?family=Oswald&display=swap');
.hl {
padding: 0.25rem 0.3rem;
border-radius: 5px;
}
/* used: https://www.color-hex.com/color-palette/87453 */
.hl.hl-yellow { background-color: rgba(204,246,43,0.5); /*#fdef41;*/ }
.hl.hl-orange { background-color: rgba(255,150,42,0.5); }
.hl.hl-magenta { background-color: rgba(244,73,211,0.5); }
.hl.hl-blue { background-color: rgba(80,127,255,0.5); }
.hl.hl-violet { background-color: rgba(149,47,255,0.5); }
.todo {
font-family: 'Oswald', sans-serif;
font-size: 2rem;
}
input.checkmark {
height: 1.5rem;
margin-right: 0.5rem;
}
kbd.cr {
padding: 2px 3px;
background-color: red;
color: #FFF;
border-radius: 5px;
}
kbd.xmltag {
background-color: #ff8c8c;
color: #FFF;
}
kbd.xmltag.xmltag--subnode {
background-color: #9f8cff;
color: #FFF;
}
kbd.xmltag.xmltag--subsubnode {
background-color: #de8cff;
color: #FFF;
}
</style>
<!-- ========================================== -->
<h3 style="margin-top:1rem; margin-bottom:2rem"> Examples: </h3>
<div>Highlighted text in:
<span class="hl hl-yellow">yellow</span>,
<span class="hl hl-orange">orange</span>,
<span class="hl hl-magenta">magenta</span>,
<span class="hl hl-blue">blue</span>,
<span class="hl hl-violet">violet</span>,
</div>
<br/>
<div class="todo">TODO</div>
<input class="checkmark" type="checkbox" checked="checked" disabled>Finished TODO text.
<input class="checkmark" type="checkbox" disabled>TODO text.
<br/><br/>
Tags: <kbd class="cr">CR</kbd> (CR for Camera-Ready, graphs/sections that are important)
%%bash
ls -l tests/
%%time
# df = pd.read_csv()
# df.info()
# df.head()
%%bash
ls -l tests/
%%bash
ls data/raw | wc -l | xargs printf '%s files'
du -h data/raw | cut -f1 | xargs printf ', total of %s'
ls data/raw/ | head -n 4 | xargs printf '\n\t%s'
ls data/raw/ | tail -n 4 | xargs printf '\n\t%s'
tm = (dt.now() - nb_start).total_seconds()
display(HTML(f'Started notebook <span class="hl hl-yellow">{tm:.0f}s</span> ago.'))
# If you use type specifiers, don't put space after the specifier
# display(HTML(f'{ tm:.0f}')) # works
# display(HTML(f'{ tm:.0f }')) # breaks
RenderJSON({
'a': {
'c': 0
},
'b': 1
})
slide_1 = HTML("""
<h3>Lex Fridman<br/><br/>
Deep Learning Basics: Introduction and Overview<br/> </h3>
<iframe width="560" height="315" src="https://www.youtube.com/embed/O5xeyoRL95U" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
""")
slide_1
# For most simple stuff, use tqdm
files = list(range(10))
for file in tqdm_notebook(files):
sleep(0.1)
# When you need more control over the progress bar,
# use decoupled tqdm
with tqdm_notebook(total=len(files)) as pbar:
for file in files:
sleep(0.1)
pbar.update(1)
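# Added sketch: with the decoupled tqdm handle you can also label the bar per item,
# e.g. via set_description() on the same progress-bar object.
with tqdm_notebook(total=len(files)) as pbar:
    for file in files:
        pbar.set_description('file %s' % file)
        sleep(0.05)
        pbar.update(1)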
nb_end = dt.now()
print('Time elapsed: %s' % (nb_end - nb_start))
'Time elapsed: %.2f minutes' % (
(nb_end - nb_start).total_seconds() / 60
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Task 1. Write a function that implements matrix multiplication. During the computation you may create objects of dimension three. You may not use functions that implement matrix multiplication (numpy.dot, the @ operator, or multiplication in the numpy.matrix class). The reference solution is a single line.
Step2: Check the correctness of your implementation on random matrices. The result should be zero.
Step3: And this is the style in which you would have submitted your work to us had you decided not to do this assignment.
Step4: Check how much faster your code runs compared to the inefficient implementation stupid_matrix_multiplication. Efficient code should run almost 200 times faster. For comparison, also see how much faster the built-in numpy functions are.
Step5: Below, as an example, is a complete implementation of the function. Of course, we will not require you to validate the input data, but you do need to write documentation for your functions.
Step6: Task 2. Write a function that, given an input sequence $X = (X_1, ..., X_n)$, builds the sequence $S = (S_1, ..., S_n)$, where $S_k = \frac{X_1 + ... + X_k}{k}$. The reference solution is a single line.
Step7: Plot $S_k$ as a function of $k$. The plot should be a polyline with reasonably large markers. Figure size 15 by 5, grid drawn as a dashed line.
Step8: Check the correctness of your implementation as well as its efficiency. Efficient code should run 50 times faster.
Step9: Task 3. Given a matrix $A = (a_{ij})$ of size $n \times m$, compute the quantity $$\frac{1}{m} \sum_{j=1}^m \min_{i=1, ..., n} a_{ij},$$ that is, the mean of the column minima. The reference solution is a single line.
Step10: Check the correctness of your implementation as well as its efficiency. Efficient code should run almost 200 times faster. Note that the difference of the numbers may not be exactly zero because of rounding errors, but it should be of small magnitude.
Step11: Task 4. Given an array $X$, build a new array in which all even elements of $X$ are replaced by the number $v$ (or by zero if it is not given). All odd elements of the original array must be squared and written in reverse order relative to the positions of those elements. The array $X$ itself must remain unchanged.
Step12: Check the correctness of your implementation as well as its efficiency. Efficient code should run 20 times faster.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy.stats as sps
import matplotlib.pyplot as plt
%matplotlib inline
def matrix_multiplication(A, B):
'''Returns the matrix that is the result of
the matrix multiplication of matrices A and B.
'''
# Remember the dimensions: we multiply an n x m matrix by an m x k matrix.
n, m, k = A.shape[0], A.shape[1], B.shape[1]
# Compute the outer product of the matrices. Note that the result is an (nm) x (mk) matrix, so
# it splits into n*k submatrices of size m x m.
C = np.array(np.outer(A, B.T))
# Let us split it into those submatrices. Unfortunately, a direct numpy.reshape does not give the desired result
# because of "row-major scanning". First split our matrix vertically into k columns.
D = np.array(np.hsplit(C, k))
# Now we can do the proper reshape and everything falls into place. We get an n x k matrix whose elements are
# matrices of size m x m. The only issue is that it came out transposed. For now this is even more convenient, since
# transposing it in this form would be awkward (some complications due to the extra dimensionality).
E = D.reshape(k, n, m, m)
# Now compute the trace of each such m x m matrix. It just so happens to be an element of the desired
# matrix product (this is exactly what was exploited when calling numpy.outer). So everything works out; we just
# must not forget to transpose the result.
# P. S. Note that each operation produces a matrix that is then used only once. So, if one really
# wants to, all of this can be crammed into one incomprehensible line:
# np.trace(np.array(np.hsplit(np.array(np.outer(A, B.T)), B.shape[1])).
# reshape(B.shape[1], A.shape[0], A.shape[1], A.shape[1]), axis1=2, axis2=3).T
# Perhaps that would be faster.
return np.trace(E, axis1=2, axis2=3).T
A = sps.uniform.rvs(size=(10, 20))
B = sps.uniform.rvs(size=(20, 30))
np.abs(matrix_multiplication(A, B) - A @ B).sum()
def stupid_matrix_multiplication(A, B):
C = [[0 for j in range(len(B[0]))] for i in range(len(A))]
for i in range(len(A)):
for j in range(len(B[0])):
for k in range(len(B)):
C[i][j] += A[i][k] * B[k][j]
return C
A = sps.uniform.rvs(size=(400, 200))
B = sps.uniform.rvs(size=(200, 300))
%time C1 = matrix_multiplication(A, B)
%time C2 = A @ B # python 3.5
%time C3 = np.matrix(A) * np.matrix(B)
%time C4 = stupid_matrix_multiplication(A, B)
def matrix_multiplication(A, B):
'''Returns the matrix that is the result of
the matrix multiplication of matrices A and B.
'''
# If A or B has a different type, perform a type conversion
A = np.array(A)
B = np.array(B)
# Check that the input data is valid
assert A.ndim == 2 and B.ndim == 2, 'Matrix dimension is not equal to 2'
assert A.shape[1] == B.shape[0], ('Matrices of shapes '
'{} and {} cannot be multiplied'.format(A.shape,
B.shape))
# Remember the dimensions: we multiply an n x m matrix by an m x k matrix.
n, m, k = A.shape[0], A.shape[1], B.shape[1]
# Compute the outer product of the matrices. Note that the result is an (nm) x (mk) matrix, so
# it splits into n*k submatrices of size m x m.
C = np.array(np.outer(A, B.T))
# Let us split it into those submatrices. Unfortunately, a direct numpy.reshape does not give the desired result
# because of "row-major scanning". First split our matrix vertically into k columns.
D = np.array(np.hsplit(C, k))
# Now we can do the proper reshape and everything falls into place. We get an n x k matrix whose elements are
# matrices of size m x m. The only issue is that it came out transposed. For now this is even more convenient, since
# transposing it in this form would be awkward (some complications due to the extra dimensionality).
E = D.reshape(k, n, m, m)
# Now compute the trace of each such m x m matrix. It just so happens to be an element of the desired
# matrix product (this is exactly what was exploited when calling numpy.outer). So everything works out; we just
# must not forget to transpose the result.
# P. S. Note that each operation produces a matrix that is then used only once. So, if one really
# wants to, all of this can be crammed into one incomprehensible line:
# np.trace(np.array(np.hsplit(np.array(np.outer(A, B.T)), B.shape[1])).
# reshape(B.shape[1], A.shape[0], A.shape[1], A.shape[1]), axis1=2, axis2=3).T
# Perhaps that would be faster.
return np.trace(E, axis1=2, axis2=3).T
def cumavg(X):
''' Given an input sequence $X = (X_1, ..., X_n)$, returns
the sequence $S = (S_1, ..., S_n)$, where $S_k = \frac{X_1 + ... + X_k}{k}$.
'''
# Compute the partial sums and then divide them elementwise by the numbers 1, 2, and so on, up to the length of the input sequence.
return X.cumsum() / np.linspace(1, X.size, X.size)
S = cumavg(sps.uniform.rvs(size=100))
plt.figure(figsize=(15, 5)) # Figure size 15 by 5.
plt.plot(S, '-o') # Put the array S on the plot, connect the points with a polyline, and use reasonably large markers.
plt.grid(linestyle='--') # Draw the grid with dashed lines.
plt.show() # Show the plot.
def stupid_cumavg(X):
S = [0 for i in range(len(X))]
for i in range(len(X)):
S[i] = X[i] + S[i - 1]
for i in range(len(X)):
S[i] /= i + 1
return S
X = sps.uniform.rvs(size=10 ** 7)
%time S1 = cumavg(X)
%time S2 = stupid_cumavg(X)
np.abs(S1 - S2).sum()
def avgmin(A):
'''
Returns the mean of the column minima of the matrix A.
'''
# Get the array of minima over all columns and then take its mean.
return A.min(axis=0).mean()
def stupid_avgmin(A):
N, M = len(A), len(A[0])
min_col = [min([A[i][j] for i in range(N)]) for j in range(M)]
return sum(min_col) / M
N, M = 5000, 10000
A = sps.uniform.rvs(size=(N, M))
%time S1 = avgmin(A)
%time S2 = stupid_avgmin(A)
print(np.abs(S1 - S2))
def func4(X, v=0):
'''
Returns a new array built from the array X by the rule: all even elements of $X$
are replaced by the number $v$ (or by zero if it is not given). All odd elements of the original array
are squared and written in reverse order relative to the positions of those elements.
'''
res = X.copy() # Copy the array X into the variable res.
# Mask of the odd numbers in the array.
mask = X % 2 != 0
# Take the subarray of odd numbers, square all of them, and then reverse it.
res[mask] = (X[mask] ** 2)[::-1]
# Keep the odd numbers as they are and turn the even ones into v; we use v * True == v and v * False == 0.
return res * mask + v * np.logical_not(mask)
def stupid_func4(X, v=0):
odd = [elem ** 2 for elem in X if elem % 2]
new_X = []
j = len(odd) - 1
for i in range(len(X)):
if X[i] % 2:
new_X.append(odd[j])
j -= 1
else:
new_X.append(v)
return new_X
X = sps.randint.rvs(size=10 ** 7, low=0, high=100)
%time A1 = func4(X)
%time A2 = stupid_func4(X)
np.abs(A1 - A2).sum()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Make a feature matrix, n x p, where n = number of samples, p = number of features
Step2: 3. Standardize
Step3: 4. Sklearn PCA
Step4: 5. Matrix decomposition (see A User's Guide to Principal Components by Jackson, 1991.)
Step5: 6. Matlab's PCA
Step6: Check that all three give similar results
Step7: PCA with the entire IR spectrum.
Step8: Standardize matrix
|
<ASSISTANT_TASK:>
Python Code:
averagespectrum = PCAsynthetic.get_hyper_peaks(spectralmatrix, threshold = 0.01)
plt.plot(averagespectrum)
featurematrix = PCAsynthetic.makefeaturematrix(spectralmatrix, averagespectrum)
featurematrix[10:13,:]
featurematrix_std = PCAsynthetic.stdfeature(featurematrix, axis = 0)
#along axis 0 = running vertically downwards, across rows; 1 = columns
mean = featurematrix_std.mean(axis=0)
variance = featurematrix_std.std(axis=0)
print(mean, variance)
#define number of principal components
sklearn_pca = sklearnPCA(n_components=9)
#matrix with each sample in terms of the PCs
SkPC = sklearn_pca.fit_transform(featurematrix_std)
#covariance matrix
Skcov = sklearn_pca.get_covariance()
#score matrix
#Skscore = sklearn_pca.score_samples(featurematrix_std)
#explained variance
Skvariance = sklearn_pca.explained_variance_
Skvarianceratio = sklearn_pca.explained_variance_ratio_
Skvarianceratio
Skvariance
mean_vec = np.mean(featurematrix_std, axis=0)
#need to take transpose, since rowvar = true by default
cov_mat = np.cov(featurematrix_std.T)
#solve for characteristic roots and vectors
eig_vals, eig_vecs = np.linalg.eig(cov_mat)
#check that the loadings squared sum to 1:
Lsquared = sum(eig_vecs**2)
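# Added check (sketch, not in the original notebook): the explained-variance ratios from the
# eigendecomposition should match sklearn's explained_variance_ratio_ for the leading components.
eig_ratio = np.sort(eig_vals)[::-1] / eig_vals.sum()
print(eig_ratio[:9])
print(Skvarianceratio)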
mlPCA = PCA(featurematrix_std)
#get projections of samples into PCA space
mltrans = mlPCA.Y
#reshape
mltransreshape = mltrans.reshape((256,256,9))
mlloadings = mlPCA.Wt
#mltrans[513,:] should be the same as mltransreshape[2,1,:]
mlloadings.shape
#projection of first sample, on to the first PC
P11 = np.dot(eig_vecs[:,0], featurematrix_std[0,:]-mean_vec)
mlP11 = mlPCA.Y[0,0]
SkP11 = SkPC[0,0]
P12 = np.dot(eig_vecs[:,1], featurematrix_std[0,:]-mean_vec)
mlP12 = mlPCA.Y[0,1]
SkP12 = SkPC[0,1]
P152 = np.dot(eig_vecs[:,1], featurematrix_std[15,:]-mean_vec)
mlP152 = mlPCA.Y[15,1]
SkP152 = SkPC[15,1]
print(P11, mlP11, SkP11)
print(P12, mlP12, SkP12)
print(P152, mlP152, SkP152)
print(mlloadings[0,7])
print(eig_vecs[0,7])
#Reshape spectral matrix
IRmatrix=spectralmatrix.reshape(65536,559)
print(IRmatrix[1,:].shape)
#make sure we've reshaped correctly
plt.plot(reshapespect[555,:])
IRmatrix=np.concatenate((IRmatrix[:,20:60], IRmatrix[:,230:270], IRmatrix[:,420:460], IRmatrix[:,100:140],IRmatrix[:,305:345], IRmatrix[:,470:510], IRmatrix[:,158:198], IRmatrix[:,354:394], IRmatrix[:,512:552] ), axis=1)
#IRmatrix=np.concatenate((IRmatrix[:,30:40], IRmatrix[:,240:260], IRmatrix[:,430:450], IRmatrix[:,90:130],IRmatrix[:,395:335], IRmatrix[:,460:500], IRmatrix[:,148:188], IRmatrix[:,364:384], IRmatrix[:,522:542] ), axis=1)
IRmatrix_std = PCAsynthetic.stdfeature(IRmatrix, axis = 0)
IRmean = IRmatrix_std.mean(axis=0)
IRvariance = IRmatrix_std.std(axis=0)
print(IRvariance)
IRmlPCA = PCA(IRmatrix_std)
#get projections of samples into PCA space
IRmltrans = IRmlPCA.Y
#reshape
IRmlloadings = IRmlPCA.Wt
IRmltrans.shape
IRmltransreshape=IRmltrans.reshape(256,256,360)
score1image = IRmltransreshape[:,:,0]
score2image = IRmltransreshape[:,:,1]
score3image = IRmltransreshape[:,:,2]
score4image = IRmltransreshape[:,:,3]
score5image = IRmltransreshape[:,:,4]
score6image = IRmltransreshape[:,:,5]
score7image = IRmltransreshape[:,:,6]
score8image = IRmltransreshape[:,:,7]
score9image = IRmltransreshape[:,:,8]
plt.imshow(syntheticspectra.Cmatrix)
plt.imshow(score1image)
plt.imshow(score2image)
plt.imshow(score3image)
plt.imshow(score4image)
plt.imshow(score5image)
plt.imshow(score6image)
plt.imshow(score7image)
plt.imshow(score8image)
plt.imshow(score9image)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Repeat
Step2: You might argue that 5 samples is too small...
Step3: We have added a "reproducible" mode, whereby we don't sample at random points, but rather on a regular sub-grid
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import open_cp.retrohotspot
import open_cp.data
import open_cp.predictors
import open_cp.evaluation
n = 1000
times = np.random.random(n) * 365
times = times * (np.timedelta64(1, "D") / np.timedelta64(1, "s")) * np.timedelta64(1, "s")
times = np.sort(np.datetime64("2016-01-01") + times)
points = open_cp.data.TimedPoints(times, np.random.random((2,n)) * 1000)
mask = [[False]*100]*100
grid = open_cp.data.MaskedGrid(10, 10, 0, 0, mask)
predictor = open_cp.retrohotspot.RetroHotSpot()
predictor.data = points
predictor.weight = open_cp.retrohotspot.TruncatedGaussian(100, 10)
cts_pred = predictor.predict(end_time=np.datetime64("2016-10-01"))
cts_pred.samples = 5
pred = open_cp.predictors.GridPredictionArray.from_continuous_prediction_grid(cts_pred, grid)
fig, ax = plt.subplots(ncols=2, figsize=(14,8))
ax[0].pcolor(*pred.mesh_data(), pred.intensity_matrix, cmap="Greys")
p = open_cp.evaluation.top_slice_prediction(pred, 0.1)
ax[1].pcolor(*p.mesh_data(), p.intensity_matrix, cmap="Greys")
cts_pred = predictor.predict(end_time=np.datetime64("2016-10-01"))
cts_pred.samples = 5
pred = open_cp.predictors.GridPredictionArray.from_continuous_prediction_grid(cts_pred, grid)
p1 = open_cp.evaluation.top_slice_prediction(pred, 0.1)
np.sum(p.intensity_matrix.mask ^ p.intensity_matrix.mask)
np.sum(p1.intensity_matrix.mask ^ p1.intensity_matrix.mask)
np.sum(p.intensity_matrix.mask ^ p1.intensity_matrix.mask)
cts_pred = predictor.predict(end_time=np.datetime64("2016-10-01"))
cts_pred.samples = 25
pred = open_cp.predictors.GridPredictionArray.from_continuous_prediction_grid(cts_pred, grid)
p = open_cp.evaluation.top_slice_prediction(pred, 0.1)
cts_pred = predictor.predict(end_time=np.datetime64("2016-10-01"))
cts_pred.samples = 25
pred = open_cp.predictors.GridPredictionArray.from_continuous_prediction_grid(cts_pred, grid)
p1 = open_cp.evaluation.top_slice_prediction(pred, 0.1)
np.sum(p.intensity_matrix.mask ^ p1.intensity_matrix.mask)
cts_pred = predictor.predict(end_time=np.datetime64("2016-10-01"))
cts_pred.samples = -5
pred = open_cp.predictors.GridPredictionArray.from_continuous_prediction_grid(cts_pred, grid)
p = open_cp.evaluation.top_slice_prediction(pred, 0.1)
cts_pred = predictor.predict(end_time=np.datetime64("2016-10-01"))
cts_pred.samples = -5
pred = open_cp.predictors.GridPredictionArray.from_continuous_prediction_grid(cts_pred, grid)
p1 = open_cp.evaluation.top_slice_prediction(pred, 0.1)
np.sum(p.intensity_matrix.mask ^ p1.intensity_matrix.mask)
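# Added check (sketch, not in the original notebook): in "reproducible" mode the sampling is
# deterministic, so the two predictions should agree exactly, not only in their top-10% masks.
np.abs(p.intensity_matrix - p1.intensity_matrix).max()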
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a sample 2D Image
Step2: Find the center pixel of each peak
Step3: Use Gaussian fitting for sub-pixel fitting
Step4: Plot to compare the known and fitted coordinates
Step5: Find the error in the fitting
Step6: Fit a lattice to the peaks
Step7: Unit cell calculation
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
# Import these from ncempy.algo
from ncempy.algo import gaussND
from ncempy.algo import peak_find
# Create coordinates with a random offset
coords = peak_find.lattice2D_2((1, 0), (0, 1), 2, 2, (0, 0), (5, 5))
coords += np.random.rand(coords.shape[0], coords.shape[1]) / 10.0
coords = np.array(coords)*30 + (100, 100)
print('Coords shape = {}'.format(coords.shape))
# Create an image with the coordinates as gaussians
kernel_shape = (11, 11)
simIm = peak_find.peaksToImage(coords, (512, 512), (1.75, 2.75), kernel_shape)
fg, ax = plt.subplots(1, 2, sharex=True,sharey=True)
ax[0].imshow(simIm)
ax[1].imshow(simIm)
ax[1].scatter(coords[:,1], coords[:,0],c='r',marker='.')
fg.tight_layout()
coords_found = peak_find.peakFind2D(simIm, 0.5)
fg, ax = plt.subplots(1,1)
ax.imshow(simIm)
_ = ax.scatter(coords_found[:,1],coords_found[:,0],c='r',marker='x')
optPeaks, optI, fittingValues = peak_find.fit_peaks_gauss2d(simIm, coords_found, 5,
(1.5, 2.5), ((-1.5, -1.5,0,0),(1.5,1.5,3,3)))
# Plot the gaussian widths
f2, ax2 = plt.subplots(1, 2)
ax2[0].plot(optPeaks[:, 2],'go')
ax2[0].plot(optPeaks[:, 3],'ro')
ax2[0].set(title='Gaussian fit sigmas',xlabel='index sorted by peak intensity')
ax2[0].legend(labels=['width 0', 'width 1'])
stdMeans = np.mean(optPeaks[:, 2:4], axis=0)
# Print out the average of the fitted sigmas
print('Sigma means [s_0, s_1]: {}'.format(stdMeans))
# Plot the fitted center (relative from the intensity peak)
ax2[1].plot(fittingValues[:, 0], 'o')
ax2[1].plot(fittingValues[:, 1], 'o')
ax2[1].set(title="Gaussian fit relative centers", xlabel='index sorted by peak intensity')
_ = ax2[1].legend(labels=['center 0', 'center 1'])
ax2[1].set(ylim=(-0.5, 0.5))
ax2[1].set(yticks=(-0.5, -0.25, 0, 0.25, 0.5))
fg.tight_layout()
fg, ax = plt.subplots(1, 2)
ax[0].imshow(simIm)
ax[0].scatter(coords_found[:,1], coords_found[:,0],c='b',marker='o')
ax[0].scatter(optPeaks[:,1], optPeaks[:,0],c='r',marker='x')
ax[0].scatter(coords[:,1], coords[:,0],c='k',marker='+')
ax[0].legend(['integer', 'optimized', 'expected'])
ax[0].set(title='All peaks')
# Zoom in on one peak
ax[1].imshow(simIm)
ax[1].scatter(coords_found[:,1], coords_found[:,0],c='b',marker='o')
ax[1].scatter(optPeaks[:,1], optPeaks[:,0],c='r',marker='x')
ax[1].scatter(coords[:,1], coords[:,0],c='k',marker='+')
ax[1].axis((100,115,110,95))
ax[0].legend(['integer', 'optimized', 'expected'])
ax[1].set(title='One peak');
# Plot the RMS error for each fitted peak
# First sort each set of coordinates to match them
err = []
for a, b in zip(coords[np.argsort(coords[:,0]),:], optPeaks[np.argsort(optPeaks[:,0]),0:2]):
err.append(np.sqrt(np.sum(a - b)**2))
fg, ax = plt.subplots(1, 1)
ax.plot(err)
_ = ax.set(xlabel='coordinate', ylabel='RMS error');
# Find a lattice from the found Pb peak sites
p0 = optPeaks[:, 0:2].copy()
# Find the top left atom
corner = (0,0)
modelRR = np.sqrt(np.sum((p0 - corner)**2, axis=1)) #Distances from middle of particle
centerAtom = modelRR.argmin() #Minimum distance from NP middle
# Input starting guess at vectors
origin0 = p0[centerAtom,:].copy()
u0 = [60, 0] # verical in imshow()
v0 = [0, 60] # horizontal in imshow()
fraction = (1,1)
origin,u,v,ab = peak_find.refineLattice2D(origin0, u0, v0, p0,
refine_locally=True,
fraction=fraction)
# Normalize the vectors
uN = u/np.linalg.norm(u)
vN = v/np.linalg.norm(v)
# Find the angles between vectors
angleUV = np.arccos(np.dot(uN, vN))*180./np.pi
# Print out the results
print('origin: {}'.format(origin))
print('u: {}\nv: {}'.format(uN, vN))
print('|u|: {0[0]}, |v|: {0[1]} (pixels)'.format(np.linalg.norm([u, v],axis=1)))
print('angles: UV: {}'.format(angleUV))
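# Added sketch (not in the original notebook): the area of the 2D unit cell follows directly
# from the cross product of the refined lattice vectors u and v.
cell_area = np.abs(u[0] * v[1] - u[1] * v[0])
print('unit cell area: {:.1f} square pixels'.format(cell_area))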
# Plot all positions and the vectors
f124, ax124 = plt.subplots(1,1,clear=True)
ax124.imshow(simIm)
ax124.scatter(origin[1],origin[0],c='k',s=50,marker='x') #add 0,0,0 as a black X
ax124.scatter(p0[:,1], p0[:,0], s=25, label='input peaks',c='k')
#Plot original vectors guesses
ax124.arrow(origin0[1],origin0[0],u0[1],u0[0],color='r',ls='--')
ax124.arrow(origin0[1],origin0[0],v0[1],v0[0],color='b',ls='--')
#Plot a perfect verion of the {u,v,w} lattice
uvLattice = peak_find.lattice2D(u,v,1,1,origin,[5, 5])
ax124.scatter(uvLattice[:,1],uvLattice[:,0],s=15,color='r',label='peaks from fit')
#Plot optimized vectors
ax124.arrow(origin[1],origin[0],u[1],u[0],color='r')
ax124.arrow(origin[1],origin[0],v[1],v[0],color='b')
# Clean up the plot
ax124.set(xlabel='X',ylabel='Y',title='Fit details');
ax124.legend()
ax124.axis('equal');
peak_find.calculate_unit_cell
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-1', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction
Step2: Shape signature for comparison
Step3: Autoencoder
Step4: Testing in new datasets
Step5: Pixel-based test
|
<ASSISTANT_TASK:>
Python Code:
## Functions
import sys,os
import copy
path = os.path.abspath('../dev/')
if path not in sys.path:
sys.path.append(path)
import bib_mri as FW
import numpy as np
import scipy as scipy
import scipy.misc as misc
import matplotlib as mpl
import matplotlib.pyplot as plt
from numpy import genfromtxt
import platform
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
%matplotlib inline
def sign_extract(seg, resols): #Function for shape signature extraction
splines = FW.get_spline(seg,smoothness)
sign_vect = np.array([]).reshape(0,points) #Initializing temporal signature vector
for resol in resols:
sign_vect = np.vstack((sign_vect, FW.get_profile(splines, n_samples=points, radius=resol)))
return sign_vect
def sign_fit(sig_ref, sig_fit): #Function for signature fitting
dif_curv = []
for shift in range(points):
dif_curv.append(np.abs(np.sum((sig_ref - np.roll(sig_fit[0],shift))**2)))
return np.apply_along_axis(np.roll, 1, sig_fit, np.argmin(dif_curv))
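# Note on sign_fit: for every circular shift of the new signature it measures the squared
# difference to the reference at the fitting resolution, then rolls all resolutions of the
# new signature by the best shift -- this removes the arbitrary starting point of the
# spline parameterisation before signatures are compared.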
print "Python version: ", platform.python_version()
print "Numpy version: ", np.version.version
print "Scipy version: ", scipy.__version__
print "Matplotlib version: ", mpl.__version__
#Loading labeled segmentations
seg_label = genfromtxt('../../dataset/Seg_Watershed/watershed_label.csv', delimiter=',').astype('uint8')
list_masks = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 0] #Extracting segmentations
list_labels = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 1] #Extracting labels
ind_ex_err = list_masks[np.where(list_labels)[0]]
ind_ex_cor = list_masks[np.where(np.logical_not(list_labels))[0]]
print "Mask List", list_masks
print "Label List", list_labels
print "Correct List", ind_ex_cor
print "Erroneous List", ind_ex_err
mask_correct = np.load('../../dataset/Seg_Watershed/mask_wate_{}.npy'.format(ind_ex_cor[10]))
mask_error = np.load('../../dataset/Seg_Watershed/mask_wate_{}.npy'.format(ind_ex_err[10]))
plt.figure()
plt.axis('off')
plt.imshow(mask_correct,'gray',interpolation='none')
plt.title("Correct segmentation example")
plt.show()
plt.figure()
plt.axis('off')
plt.imshow(mask_error,'gray',interpolation='none')
plt.title("Erroneous segmentation example")
plt.show()
smoothness = 700 #Smoothness
degree = 5 #Spline degree
fit_res = 0.35
resols = np.arange(0.01,0.5,0.01) #Signature resolutions
resols = np.insert(resols,0,fit_res) #Insert resolution for signature fitting
points = 500 #Points of Spline reconstruction
prof_vec = np.empty((len(list_masks),resols.shape[0],points)) #Initializing correct signature vector
for ind, mask in enumerate(list_masks):
#Loading correct mask
mask_pn = np.load('../../dataset/Seg_Watershed/mask_wate_{}.npy'.format(mask))
refer_temp = sign_extract(mask_pn, resols) #Function for shape signature extraction
prof_vec[ind] = refer_temp
if ind > 0: #Fitting curves using the first one as basis (skip the first signature, which serves as the reference)
prof_ref = prof_vec[0]
prof_vec[ind] = sign_fit(prof_ref[0], refer_temp) #Function for signature fitting
ind_rel_cor = np.where(np.logical_not(list_labels))[0]
ind_rel_err = np.where(list_labels)[0]
print "Correct segmentations' vector: ", prof_vec[ind_rel_cor].shape
print "Erroneous segmentations' vector: ", prof_vec[ind_rel_err].shape
print(ind_rel_cor.shape)
print(ind_ex_cor.shape)
res_ex = 15
#for ind_ex, ind_rel in zip(ind_ex_cor, ind_rel_cor):
# plt.figure()
# f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
# ax1.plot(prof_vec[ind_rel,res_ex,:].T)
# ax1.set_title("Signature %i at res: %f"%(ind_ex, resols[res_ex]))
#
# mask_correct = np.load('../../dataset/Seg_Watershed/mask_wate_{}.npy'.format(ind_ex))
# ax2.axis('off')
# ax2.imshow(mask_correct,'gray',interpolation='none')
#
# plt.show()
plt.figure()
plt.plot(prof_vec[ind_rel_cor,res_ex,:].T)
plt.title("Correct signatures for res: %f"%(resols[res_ex]))
plt.show()
plt.figure()
plt.plot(prof_vec[ind_rel_err,res_ex,:].T)
plt.title("Erroneous signatures for res: %f"%(resols[res_ex]))
plt.show()
def train(model,train_loader,loss_fn,optimizer,epochs=100,patience=5,criteria_stop="loss"):
hist_train_loss = hist_val_loss = hist_train_acc = hist_val_acc = np.array([])
best_epoch = patience_count = 0
print("Training starts along %i epoch"%epochs)
for e in range(epochs):
correct_train = correct_val = total_train = total_val = 0
cont_i = loss_t_e = loss_v_e = 0
for data_train in train_loader:
var_inputs = Variable(data_train)
predict, encode = model(var_inputs)
loss = loss_fn(predict, var_inputs.view(-1, 500))
loss_t_e += loss.data[0]
optimizer.zero_grad()
loss.backward()
optimizer.step()
cont_i += 1
#Stacking historical
hist_train_loss = np.hstack((hist_train_loss, loss_t_e/(cont_i*1.0)))
print('Epoch: ', e, 'train loss: ', hist_train_loss[-1])
if(e == epochs-1):
best_epoch = e
best_model = copy.deepcopy(model)
print("Training stopped")
patience_count += 1
return(best_model, hist_train_loss, hist_val_loss)
class autoencoder(nn.Module):
def __init__(self):
super(autoencoder, self).__init__()
self.fc1 = nn.Linear(500, 200)
self.fc21 = nn.Linear(200, 2)
self.fc3 = nn.Linear(2, 200)
self.fc4 = nn.Linear(200, 500)
self.relu = nn.ReLU()
self.sigmoid = nn.Sigmoid()
def encode(self, x):
h1 = self.relu(self.fc1(x))
return self.fc21(h1)
def decode(self, z):
h3 = self.relu(self.fc3(z))
return self.sigmoid(self.fc4(h3))
def forward(self, x):
z = self.encode(x.view(-1, 500))
return self.decode(z), z
class decoder(nn.Module):
def __init__(self):
super(decoder, self).__init__()
self.fc3 = nn.Linear(2, 200)
self.fc4 = nn.Linear(200, 500)
self.relu = nn.ReLU()
self.sigmoid = nn.Sigmoid()
def decode(self, z):
h3 = self.relu(self.fc3(z))
return self.sigmoid(self.fc4(h3))
def forward(self, x):
return self.decode(x.view(-1, 2))
net = autoencoder()
print(net)
res_chs = res_ex
trainloader = prof_vec[:,res_chs,:]
val_norm = np.amax(trainloader).astype(float)
print val_norm
trainloader = trainloader / val_norm
trainloader = torch.FloatTensor(trainloader)
print trainloader.size()
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.Adam(net.parameters())
epochs = 20
patience = 5
max_batch = 64
criteria = "loss"
best_model, loss, loss_test = train(net, trainloader, loss_fn, optimizer, epochs = epochs,
patience = patience, criteria_stop = criteria)
plt.title('Loss')
plt.xlabel('epochs')
plt.ylabel('loss')
plt.plot(loss, label='Train')
plt.legend()
plt.show()
decode, encode = net(Variable(trainloader))
out_decod = decode.data.numpy()
out_encod = encode.data.numpy()
print(out_decod.shape, out_encod.shape, list_labels.shape)
plt.figure(figsize=(7, 6))
plt.scatter(out_encod[:,0], out_encod[:,1], c=list_labels)
plt.show()
#Loading labeled segmentations
seg_label = genfromtxt('../../dataset/Seg_ROQS/roqs_label.csv', delimiter=',').astype('uint8')
list_masks = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 0] #Extracting segmentations
list_labels = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 1] #Extracting labels
ind_ex_err = list_masks[np.where(list_labels)[0]]
ind_ex_cor = list_masks[np.where(np.logical_not(list_labels))[0]]
prof_vec_roqs = np.empty((len(list_masks),resols.shape[0],points)) #Initializing correct signature vector
for ind, mask in enumerate(list_masks):
mask_pn = np.load('../../dataset/Seg_ROQS/mask_roqs_{}.npy'.format(mask)) #Loading mask
refer_temp = sign_extract(mask_pn, resols) #Function for shape signature extraction
prof_vec_roqs[ind] = sign_fit(prof_ref[0], refer_temp) #Function for signature fitting using Watershed as basis
ind_rel_cor = np.where(np.logical_not(list_labels))[0]
ind_rel_err = np.where(list_labels)[0]
print "Correct segmentations' vector: ", prof_vec_roqs[ind_rel_cor].shape
print "Erroneous segmentations' vector: ", prof_vec_roqs[ind_rel_err].shape
#for ind_ex, ind_rel in zip(ind_ex_err, ind_rel_err):
# plt.figure()
# f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
# ax1.plot(prof_vec_roqs[ind_rel,res_ex,:].T)
# ax1.set_title("Signature %i at res: %f"%(ind_ex, resols[res_ex]))
#
# mask_correct = np.load('../../dataset/Seg_ROQS/mask_roqs_{}.npy'.format(ind_ex))
# ax2.axis('off')
# ax2.imshow(mask_correct,'gray',interpolation='none')
#
# plt.show()
plt.figure()
plt.plot(prof_vec_roqs[ind_rel_cor,res_ex,:].T)
plt.title("Correct signatures for res: %f"%(resols[res_ex]))
plt.show()
plt.figure()
plt.plot(prof_vec_roqs[ind_rel_err,res_ex,:].T)
plt.title("Erroneous signatures for res: %f"%(resols[res_ex]))
plt.show()
trainloader = prof_vec_roqs[:,res_chs,:]
trainloader = trainloader / val_norm
trainloader = torch.FloatTensor(trainloader)
print trainloader.size()
decode, encode = net(Variable(trainloader))
out_decod = decode.data.numpy()
out_encod = encode.data.numpy()
print(out_decod.shape, out_encod.shape, list_labels.shape)
plt.figure(figsize=(7, 6))
plt.scatter(out_encod[:,0], out_encod[:,1], c=list_labels)
plt.show()
#Loading labeled segmentations
seg_label = genfromtxt('../../dataset/Seg_pixel/pixel_label.csv', delimiter=',').astype('uint8')
list_masks = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 0] #Extracting segmentations
list_labels = seg_label[np.logical_or(seg_label[:,1] == 0, seg_label[:,1] == 1), 1] #Extracting labels
ind_ex_err = list_masks[np.where(list_labels)[0]]
ind_ex_cor = list_masks[np.where(np.logical_not(list_labels))[0]]
prof_vec_pixe = np.empty((len(list_masks),resols.shape[0],points)) #Initializing correct signature vector
for ind, mask in enumerate(list_masks):
mask_pn = np.load('../../dataset/Seg_pixel/mask_pixe_{}.npy'.format(mask)) #Loading mask
refer_temp = sign_extract(mask_pn, resols) #Function for shape signature extraction
prof_vec_pixe[ind] = sign_fit(prof_ref[0], refer_temp) #Function for signature fitting using Watershed as basis
ind_rel_cor = np.where(np.logical_not(list_labels))[0]
ind_rel_err = np.where(list_labels)[0]
print "Correct segmentations' vector: ", prof_vec_pixe[ind_rel_cor].shape
print "Erroneous segmentations' vector: ", prof_vec_pixe[ind_rel_err].shape
#for ind_ex, ind_rel in zip(ind_ex_cor, ind_rel_cor):
# plt.figure()
# f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
# ax1.plot(prof_vec_pixe[ind_rel,res_ex,:].T)
# ax1.set_title("Signature %i at res: %f"%(ind_ex, resols[res_ex]))
#
# mask_correct = np.load('../../dataset/Seg_pixel/mask_pixe_{}.npy'.format(ind_ex))
# ax2.axis('off')
# ax2.imshow(mask_correct,'gray',interpolation='none')
#
# plt.show()
plt.figure()
plt.plot(prof_vec_pixe[ind_rel_cor,res_ex,:].T)
plt.title("Correct signatures for res: %f"%(resols[res_ex]))
plt.show()
plt.figure()
plt.plot(prof_vec_pixe[ind_rel_err,res_ex,:].T)
plt.title("Erroneous signatures for res: %f"%(resols[res_ex]))
plt.show()
trainloader = prof_vec_pixe[:,res_chs,:]
trainloader = trainloader / val_norm
trainloader = torch.FloatTensor(trainloader)
print trainloader.size()
decode, encode = net(Variable(trainloader))
out_decod = decode.data.numpy()
out_encod = encode.data.numpy()
print(out_decod.shape, out_encod.shape, list_labels.shape)
plt.figure(figsize=(7, 6))
plt.scatter(out_encod[:,0], out_encod[:,1], c=list_labels)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So there's one catch
Step2: Another way around this is to invoke the skip rows option when reading the CSV. If you look at the file we are importing, you see that the first 49 rows are comments, then comes our header row, and then the field type row that we don't want. So we want to skip rows 1 thru 49 and also line 51. If we create a list of these row numbers, we can pass that to the skip_rows parameter...
Step3: Now that we have this as a pandas data frame, we can analyze it here, or we can simply save a copy to our local machine. For the latter, pandas' to_csv() function works quite easily.
|
<ASSISTANT_TASK:>
Python Code:
# Import the pandas module
import pandas as pd
# Set the url as a variable; this is the URL we generated above
theURL = 'https://waterdata.usgs.gov/nc/nwis/water_use?format=rdb&rdb_compression=value&wu_area=County&wu_year=ALL&wu_county=ALL&wu_category=IN&wu_county_nms=--ALL%2BCounties--&wu_category_nms=Industrial'
# Read in the data as a pandas data frame and display the first 5 rows
# -Note we need to specify that it's a tab delimited file and uses '#' to indicate comments
dfNWIS = pd.read_csv(theURL, delimiter='\t', comment='#')
dfNWIS.head()
#Drop the first row, and again show the first 5 rows of data...
dfNWIS.drop(0,axis='rows',inplace=True)
dfNWIS.head()
#Create a list of the row indices 0 thru 48 (the 49 comment rows), recalling Python indexing is zero-based...
rowsToSkip = list(range(49))
#Append row index 50 (line 51 in the file, the field-type row) to the list
rowsToSkip.append(50)
#Use the read_csv function as before, but skip the rows we want to skip
dfNWIS = pd.read_csv(theURL, delimiter='\t', skiprows=rowsToSkip)
dfNWIS.head()
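#(Alternative sketch) read_csv also accepts a callable for skiprows; assuming the same file
#layout (49 comment rows, header on line 50, field-type row on line 51), this should load
#the same table as the explicit list above.
dfCheck = pd.read_csv(theURL, delimiter='\t', skiprows=lambda i: i < 49 or i == 50)
dfCheck.head()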
dfNWIS.to_csv("NCWaterData.csv")
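# to_csv writes the DataFrame index as a leading column by default; pass index=False
# if only the data columns are wanted in the saved file (illustrative output name).
dfNWIS.to_csv("NCWaterData_noindex.csv", index=False)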
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the data
Step2: The aclImdb folder contains a train and test subfolder
Step3: The aclImdb/train/pos and aclImdb/train/neg folders contain text files, each of which represents one review
Step4: We are only interested in the pos and neg subfolders, so let's delete the rest
Step5: You can use the utility tf.keras.preprocessing.text_dataset_from_directory to generate a labeled tf.data.Dataset from a directory of text files
Step6: Let's preview a few samples
Step7: Prepare the data
Step8: Two options to vectorize the data
Step9: Build a model
Step10: Train the model
Step11: Evaluate the model on the test set
Step12: Make an end-to-end model
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import numpy as np
!curl -O https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -xf aclImdb_v1.tar.gz
!ls aclImdb
!ls aclImdb/test
!ls aclImdb/train
!cat aclImdb/train/pos/6248_7.txt
!rm -r aclImdb/train/unsup
batch_size = 32
raw_train_ds = tf.keras.preprocessing.text_dataset_from_directory(
"aclImdb/train",
batch_size=batch_size,
validation_split=0.2,
subset="training",
seed=1337,
)
raw_val_ds = tf.keras.preprocessing.text_dataset_from_directory(
"aclImdb/train",
batch_size=batch_size,
validation_split=0.2,
subset="validation",
seed=1337,
)
raw_test_ds = tf.keras.preprocessing.text_dataset_from_directory(
"aclImdb/test", batch_size=batch_size
)
print(f"Number of batches in raw_train_ds: {raw_train_ds.cardinality()}")
print(f"Number of batches in raw_val_ds: {raw_val_ds.cardinality()}")
print(f"Number of batches in raw_test_ds: {raw_test_ds.cardinality()}")
# It's important to take a look at your raw data to ensure your normalization
# and tokenization will work as expected. We can do that by taking a few
# examples from the training set and looking at them.
# This is one of the places where eager execution shines:
# we can just evaluate these tensors using .numpy()
# instead of needing to evaluate them in a Session/Graph context.
for text_batch, label_batch in raw_train_ds.take(1):
for i in range(5):
print(text_batch.numpy()[i])
print(label_batch.numpy()[i])
from tensorflow.keras.layers import TextVectorization
import string
import re
# Having looked at our data above, we see that the raw text contains HTML break
# tags of the form '<br />'. These tags will not be removed by the default
# standardizer (which doesn't strip HTML). Because of this, we will need to
# create a custom standardization function.
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
stripped_html = tf.strings.regex_replace(lowercase, "<br />", " ")
return tf.strings.regex_replace(
stripped_html, f"[{re.escape(string.punctuation)}]", ""
)
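# Illustrative check of the custom standardization on a made-up snippet: lowercasing,
# "<br />" removal and punctuation stripping.
print(custom_standardization(tf.constant(["A great film!<br />Loved it."])))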
# Model constants.
max_features = 20000
embedding_dim = 128
sequence_length = 500
# Now that we have our custom standardization, we can instantiate our text
# vectorization layer. We are using this layer to normalize, split, and map
# strings to integers, so we set our 'output_mode' to 'int'.
# Note that we're using the default split function,
# and the custom standardization defined above.
# We also set an explicit maximum sequence length, since the CNNs later in our
# model won't support ragged sequences.
vectorize_layer = TextVectorization(
standardize=custom_standardization,
max_tokens=max_features,
output_mode="int",
output_sequence_length=sequence_length,
)
# Now that the vocab layer has been created, call `adapt` on a text-only
# dataset to create the vocabulary. You don't have to batch, but for very large
# datasets this means you're not keeping spare copies of the dataset in memory.
# Let's make a text-only dataset (no labels):
text_ds = raw_train_ds.map(lambda x, y: x)
# Let's call `adapt`:
vectorize_layer.adapt(text_ds)
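# A quick, illustrative peek at the vocabulary `adapt` just built: the first entries are
# the padding/OOV tokens followed by the most frequent words in the training reviews.
print(vectorize_layer.get_vocabulary()[:10])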
def vectorize_text(text, label):
text = tf.expand_dims(text, -1)
return vectorize_layer(text), label
# Vectorize the data.
train_ds = raw_train_ds.map(vectorize_text)
val_ds = raw_val_ds.map(vectorize_text)
test_ds = raw_test_ds.map(vectorize_text)
# Do async prefetching / buffering of the data for best performance on GPU.
train_ds = train_ds.cache().prefetch(buffer_size=10)
val_ds = val_ds.cache().prefetch(buffer_size=10)
test_ds = test_ds.cache().prefetch(buffer_size=10)
from tensorflow.keras import layers
# An integer input for vocab indices.
inputs = tf.keras.Input(shape=(None,), dtype="int64")
# Next, we add a layer to map those vocab indices into a space of dimensionality
# 'embedding_dim'.
x = layers.Embedding(max_features, embedding_dim)(inputs)
x = layers.Dropout(0.5)(x)
# Conv1D + global max pooling
x = layers.Conv1D(128, 7, padding="valid", activation="relu", strides=3)(x)
x = layers.Conv1D(128, 7, padding="valid", activation="relu", strides=3)(x)
x = layers.GlobalMaxPooling1D()(x)
# We add a vanilla hidden layer:
x = layers.Dense(128, activation="relu")(x)
x = layers.Dropout(0.5)(x)
# We project onto a single unit output layer, and squash it with a sigmoid:
predictions = layers.Dense(1, activation="sigmoid", name="predictions")(x)
model = tf.keras.Model(inputs, predictions)
# Compile the model with binary crossentropy loss and an adam optimizer.
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
epochs = 3
# Fit the model using the train and test datasets.
model.fit(train_ds, validation_data=val_ds, epochs=epochs)
model.evaluate(test_ds)
# A string input
inputs = tf.keras.Input(shape=(1,), dtype="string")
# Turn strings into vocab indices
indices = vectorize_layer(inputs)
# Turn vocab indices into predictions
outputs = model(indices)
# Our end to end model
end_to_end_model = tf.keras.Model(inputs, outputs)
end_to_end_model.compile(
loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"]
)
# Test it with `raw_test_ds`, which yields raw strings
end_to_end_model.evaluate(raw_test_ds)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This example features TensorFlow Hub for sentence embeddings, Annoy for nearest-neighbor lookup, and Verta's Python client for experiment logging and deployment
Step2: Imports
Step3: Run Workflow
Step4: Instantiate Client
Step5: Build Nearest Neighbor Embedding Index
Step7: Define Model Class
Step8: Earlier we logged an artifact with the key "nn_index".
Step9: Log Model
Step10: We also have to make sure we provide every package involved in the model.
Step11: Make Live Predictions
Step12: Prepare Data
Step13: Load Deployed Model
Step14: Query Deployed Model
|
<ASSISTANT_TASK:>
Python Code:
try:
import verta
except ImportError:
!pip install verta
HOST = "app.verta.ai"
PROJECT_NAME = "Film Review Embeddings"
EXPERIMENT_NAME = "TF Hub and Annoy"
# import os
# os.environ['VERTA_EMAIL'] =
# os.environ['VERTA_DEV_KEY'] =
from __future__ import print_function
import os
import time
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
import annoy
try:
import wget
except ImportError:
!pip install wget # you may need pip3
import wget
train_data_url = "http://s3.amazonaws.com/verta-starter/imdb_master.csv"
train_data_filename = wget.detect_filename(train_data_url)
if not os.path.isfile(train_data_filename):
wget.download(train_data_url)
all_reviews = pd.read_csv(train_data_filename, encoding='latin')['review'].values.tolist()
reviews = all_reviews[:2000] # just a subset for this example
reviews[0]
from verta import Client
from verta.utils import ModelAPI
client = Client(HOST)
proj = client.set_project(PROJECT_NAME)
expt = client.set_experiment(EXPERIMENT_NAME)
run = client.set_experiment_run()
EMBEDDING_LENGTH = 512
NN_INDEX_FILENAME = "reviews.ann"
os.environ["TFHUB_CACHE_DIR"] = "tf_cache_dir"
# define graph
g = tf.Graph()
with g.as_default():
text_input = tf.placeholder(dtype=tf.string, shape=[None])
encoder = hub.Module("https://tfhub.dev/google/universal-sentence-encoder-large/3")
embed = encoder(text_input)
init_op = tf.group([tf.global_variables_initializer(), tf.tables_initializer()])
g.finalize()
# initialize session
sess = tf.Session(graph=g)
sess.run(init_op)
# build and save embedding index
t = annoy.AnnoyIndex(EMBEDDING_LENGTH, 'angular') # Length of item vector that will be indexed
for i, review in enumerate(reviews):
# produce embedding with TF
embedding = sess.run(embed, feed_dict={text_input: [review]})
t.add_item(i, embedding[0])
t.build(10) # 10 trees
t.save(NN_INDEX_FILENAME)
run.log_artifact("nn_index", open(NN_INDEX_FILENAME, 'rb'))
class EmbeddingAndLookupModel:
def __init__(self, artifacts):
"""
Parameters
----------
artifacts
    Mapping of Experiment Run artifact keys to filepaths.
    This is provided by ``run.fetch_artifacts(artifact_keys)``.
"""
# get artifact filepath from `artifacts` mapping
annoy_index_filepath = artifacts['nn_index']
# load embedding index
self.index = annoy.AnnoyIndex(EMBEDDING_LENGTH, "angular")
self.index.load(annoy_index_filepath)
os.environ["TFHUB_CACHE_DIR"] = "tf_cache_dir"
# define graph
g = tf.Graph()
with g.as_default():
self.text_input = tf.placeholder(dtype=tf.string, shape=[None])
self.encoder = hub.Module("https://tfhub.dev/google/universal-sentence-encoder-large/3")
self.embed = self.encoder(self.text_input)
init_op = tf.group([tf.global_variables_initializer(), tf.tables_initializer()])
g.finalize()
self.graph = g
# initialize session
self.session = tf.Session(graph=self.graph)
self.session.run(init_op)
def predict(self, data):
predictions = []
for review in data:
# embed sentence
embedding = self.session.run(self.embed, feed_dict={self.text_input: [review]})
# find closest
predictions.append({
review: self.index.get_nns_by_vector(embedding[0], 10)
})
return predictions
artifacts = run.fetch_artifacts(["nn_index"])
model = EmbeddingAndLookupModel(artifacts=artifacts)
model.predict(["Good film.", "Bad film!"])
run.log_model(
model=EmbeddingAndLookupModel,
artifacts=['nn_index'],
)
run.log_requirements([
"annoy==1.16.2",
"tensorflow",
"tensorflow_hub",
])
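# Note: the deployment environment is built from this requirements list, so every package
# the model class imports (annoy, tensorflow, tensorflow_hub here) has to appear; pinning
# a version, as done for annoy, keeps the serving environment reproducible.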
run
reviews = all_reviews[-2000:]
from verta.deployment import DeployedModel
deployed_model = DeployedModel(HOST, run.id)
for review in reviews:
print(deployed_model.predict([review]))
time.sleep(.5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's define a generator xx which selects random elements from items, and two other generators yy and zz which extract individual attributes from these elements.
Step2: When xx is reset, internally this also automatically resets yy and zz with the same seed (because they are "dependent generators" whose parent is xx).
Step3: This also works if xx, yy, zz are defined inside a CustomGenerator.
Step4: Just for illustration, let's repeat the last example but with a different set of items (produced by another custom generator, although this doesn't really matter).
Step5:
Step6: Using ExtractAttribute we can produce \"derived\" generators which extract the attributes aaa, bbb from the elements produced by g.
Step7: Class Lookup
Step8: Create generator g which selects a random letter and generator h which looks up each letter in the lowercase->uppercase mapping.
|
<ASSISTANT_TASK:>
Python Code:
from collections import namedtuple
letters = 'abcdefghijklmnopqrstuvwxyz'
Foobar = namedtuple('Foobar', ('foo', 'bar'))
items = [Foobar(c+c, c+c+c) for c in letters]
items[:3]
xx = SelectOne(items)
yy = xx.foo
zz = xx.bar
xx.reset(seed=12345)
print_generated_sequence(xx, num=10, sep='\n')
print_generated_sequence(yy, num=10)
print_generated_sequence(zz, num=10)
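# Resetting the parent with a different seed re-synchronises all three generators, so the
# foo/bar sequences below should again line up element-by-element with the items from xx.
xx.reset(seed=99999)
print_generated_sequence(xx, num=5, sep='\n')
print_generated_sequence(yy, num=5)
print_generated_sequence(zz, num=5)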
class QuuxGenerator(CustomGenerator):
xx = SelectOne(items)
yy = xx.foo
zz = xx.bar
ww = yy # alias
g = QuuxGenerator()
list(g.generate(10, seed=12345))
class FoobarGenerator(CustomGenerator):
foo = Integer(0, 100)
bar = HashDigest(length=8)
fg = FoobarGenerator()
items2 = list(fg.generate(10, seed=12345))
items2
class QuuxGenerator(CustomGenerator):
xx = SelectOne(items2)
yy = xx.foo
zz = xx.bar
g = QuuxGenerator()
g.reset(seed=99999); print_generated_sequence(g, num=10, sep='\n')
class QuuxGenerator(CustomGenerator):
aaa = Integer(0, 100)
bbb = HashDigest(length=6)
g = QuuxGenerator()
h1 = ExtractAttribute(g, 'aaa')
h2 = ExtractAttribute(g, 'bbb')
g.reset(seed=99999)
print_generated_sequence(g, num=10, sep='\n')
print_generated_sequence(h1, num=10)
print_generated_sequence(h2, num=10)
letters = 'abcdefghijklmnopqrstuvwxyz'
mapping = dict([(c, c.upper()) for c in letters])
g = SelectOne(letters)
h = Lookup(g, mapping)
g.reset(seed=12345)
print_generated_sequence(g, num=20)
print_generated_sequence(h, num=20)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1:
Step2:
Step3: Initial Results
|
<ASSISTANT_TASK:>
Python Code:
### Task 1: Select what features you'll use.
### features_list is a list of strings, each of which is a feature name.
### The first feature must be "poi".
names = np.array(my_dataset.keys())
print names.shape, names[:5], "\n"
features_list = my_dataset.itervalues().next().keys()
features_list.sort()
features_list.remove('poi')
features_list.insert(0, 'poi')
features_list.remove('email_address')
print features_list
### convert dictionary to pandas dataframe
df = pd.DataFrame([entry for entry in my_dataset.itervalues()])
df = df.drop('email_address', axis=1)
df = df[features_list]
#df.dtypes
#df.describe()
#df.count()
df.poi = df.poi.astype('int')
df = df.convert_objects(convert_numeric=True)
for col in list(df.columns):
df[col] = df[col].round(decimals=3)
print "POI Count:\n", df.poi.value_counts()
df.head()
# create labels
y = df.poi.values
print y.shape
print y[:5]
# create initial features
X = df.drop('poi', axis=1).values
print X.shape
# imputation for 'NaN' values
imp = Imputer(missing_values='NaN', strategy='mean', axis=0)
imp.fit(X)
X = imp.transform(X)
print X[:5]
### Task 2: Remove outliers
num_rows = X.shape[0]
num_cols = X.shape[1]
rows_to_remove = set()
for i in xrange(num_cols):
point_five_percentile = np.percentile(X[:,i], 0.5)
ninety_nine_point_five_percentile = np.percentile(X[:,i], 99.5)
for j in xrange(num_rows):
if X[j,i] < point_five_percentile:
#print "\tlow outlier: ", "row: ", j, "col: ", i, " -> ", X[j,i]
rows_to_remove.add(j)
elif X[j,i] > ninety_nine_point_five_percentile:
#print "\thigh outlier: ", "row: ", j, "col: ", i, " -> ", X[j,i]
rows_to_remove.add(j)
X = np.delete(X, list(rows_to_remove), axis=0)
y = np.delete(y, list(rows_to_remove))
print "names associated with outlier-containing rows to remove:"
for i in rows_to_remove:
print "\t",names[i]
names = np.delete(names, list(rows_to_remove))
print "\nnew X shape: ", X.shape
print "\ntotal rows removed: ", len(rows_to_remove), "({})".format(round(len(rows_to_remove)/float(num_rows), 2))
# split into training and testing data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
print X_train.shape, X_test.shape, y_train.shape, y_test.shape
### Task 3: Create new feature(s)
# scale
scaler = MinMaxScaler()
scaler = scaler.fit(X_train)
X_train = scaler.transform(X_train)
print X_train.shape
X_test = scaler.transform(X_test)
print X_test.shape
X_train
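# Note: the scaler is fit on the training split only and merely applied to the test split,
# which avoids leaking information about the test distribution into the features.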
### Task 4: Try a variety of classifiers
### Please name your classifier clf for easy export below.
### Note that if you want to do PCA or other multi-stage operations,
### you'll need to use Pipelines. For more info:
### http://scikit-learn.org/stable/modules/pipeline.html
classifiers = dict()
def grid_searcher(clf):
# PUT IN FUNCTION FOR MINMAXSCALER
t0 = time()
even_range = range(2,X.shape[1],2)
random_state = [42]
t_or_f = [True, False]
#powers_of_ten = [10**x for x in range(-5,5)]
logspace = np.logspace(-5, 5, 10)
#kernels = ['linear', 'poly', 'rbf', 'sigmoid'] # takes too long, unfortunately
kernels = ['rbf']
criteria = ['gini', 'entropy']
splitters = ['best', 'random']
max_features = ['auto', 'sqrt', 'log2', None]
inits = ['k-means++', 'random']
# pca and select K best
pipeline = make_pipeline(RandomizedPCA(), SelectKBest(), clf)
params = dict(randomizedpca__n_components = even_range,
randomizedpca__whiten = t_or_f,
randomizedpca__random_state = random_state,
selectkbest__k = ['all'])
if pipeline.steps[2][0] == 'decisiontreeclassifier':
params['decisiontreeclassifier__criterion'] = criteria
params['decisiontreeclassifier__splitter'] = splitters
params['decisiontreeclassifier__max_features'] = max_features
params['decisiontreeclassifier__random_state'] = random_state
if pipeline.steps[2][0] == 'svc':
params['svc__C'] = logspace
params['svc__kernel'] = kernels
#params['svc__degree'] = [1,2,3,4,5] # for use with 'poly'
params['svc__gamma'] = logspace
params['svc__random_state'] = random_state
if pipeline.steps[2][0] == 'kmeans':
params['kmeans__n_clusters'] = [2]
params['kmeans__init'] = inits
params['kmeans__random_state'] = random_state
grid_search = GridSearchCV(pipeline, param_grid=params, n_jobs=4)
grid_search = grid_search.fit(X_train, y_train)
print "*"*15, pipeline.steps[2][0].upper(), "*"*15
#print "\nbest estimator: ", grid_search.best_estimator_, "\n"
print "\nBEST SCORE: ", grid_search.best_score_, "\n"
#print "\nbest params: ", grid_search.best_params_, "\n"
#print "#"*50
print "\nBEST ESTIMATOR:"
clf = grid_search.best_estimator_.fit(X_train, y_train)
#classifiers[pipeline.steps[2][0]] = clf
X_test_pca = clf.steps[0][1].transform(X_test)
X_test_skb = clf.steps[1][1].transform(X_test_pca)
print "new X_test shape: ", X_test_skb.shape
#print "#"*50
print "\nPREDICTIONS:"
#test_classifier(clf, my_dataset, features_list)
print "\nground truth:\n", y_test
y_pred = clf.steps[2][1].predict(X_test_skb)
print "\npredictions:\n", y_pred
#print "#"*50
print "\nEVALUATIONS:"
print "\nconfusion matrix:\n", confusion_matrix(y_test, y_pred)
print "\nclassification report:\n", classification_report(y_test, y_pred, target_names=["non-poi", "poi"])
print "ELAPSED TIME: ", round(time()-t0,3), "s"
grid_searcher(GaussianNB())
grid_searcher(DecisionTreeClassifier())
grid_searcher(SVC())
grid_searcher(KMeans())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare the data
Step2: Checking out the data
Step3: Dummy variables
Step4: Scaling target variables
Step5: Splitting the data into training, testing, and validation sets
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Step8: Unit tests
Step9: Training the network
Step10: Check out your predictions
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides[:24*10].plot(x='dteday', y='cnt')
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
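# The stored mean/std make it easy to undo the scaling later, e.g. to express ride counts
# in original units again (the prediction plot at the end of the notebook does this).
mean, std = scaled_features['cnt']
print((data['cnt'][:5]*std + mean).values)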
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
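# A quick look at the chronological split sizes (train / validation / held-out test);
# the exact numbers depend on the dataset length, so treat them as illustrative.
print(train_features.shape, val_features.shape, test_features.shape)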
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
#self.activation_function = lambda x : (1 / float(1 + np.exp(-x))) # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
def sigmoid(x):
return 1 / (1 + np.exp(-x)) # Replace 0 with your sigmoid calculation here
self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden)# signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)# signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
# TODO: Calculate the hidden layer's contribution to the error
# TODO: Backpropagated error terms - Replace these values with your calculations.
error = y - final_outputs# Output layer error is the difference between desired target and actual output.
output_error_term = error
hidden_error = np.dot(self.weights_hidden_to_output, output_error_term)
hidden_error_term = hidden_error * hidden_outputs * ( 1 - hidden_outputs )
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:, None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:, None]
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records# update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records# update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden)# signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)# signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
import sys
### Set the hyperparameters here ###
iterations = 100
learning_rate = 0.1
hidden_nodes = 2
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Constant
Step2: Boolean
Step3: Integer
Step4: Float
Step5: HashDigest
Step6: HashDigest hex strings (lowercase)
Step7: HashDigest byte strings
Step8: FakerGenerator
Step9: Example
|
<ASSISTANT_TASK:>
Python Code:
import tohu
from tohu.v5.primitive_generators import *
from tohu.v5.utils import print_generated_sequence
print(f'Tohu version: {tohu.__version__}')
g = Constant('quux')
print_generated_sequence(g, num=10, seed=12345)
g1 = Boolean()
g2 = Boolean(p=0.8)
print_generated_sequence(g1, num=20, seed=12345)
print_generated_sequence(g2, num=20, seed=99999)
g = Integer(low=100, high=200)
print_generated_sequence(g, num=20, seed=12345)
g = Float(low=2.3, high=4.2)
print_generated_sequence(g, num=10, sep='\n', fmt='.12f', seed=12345)
g = HashDigest(length=6)
print_generated_sequence(g, num=10, seed=12345)
g = HashDigest(length=6, uppercase=False)
print_generated_sequence(g, num=10, seed=12345)
g = HashDigest(length=10, as_bytes=True)
print_generated_sequence(g, num=5, seed=12345, sep='\n')
g = FakerGenerator(method='name')
print_generated_sequence(g, num=8, seed=12345)
g = FakerGenerator(method='address')
print_generated_sequence(g, num=8, seed=12345, sep='\n---\n')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: Detached Systems
Step3: We can see that the default system is well within this critical value by printing all radii and critical radii.
Step4: If we increase 'requiv' past the critical point, we'll receive a warning from the logger and would get an error if attempting to call b.run_compute().
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.2,<2.3"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b['requiv_max@component@primary']
b['requiv_max@constraint@primary']
print(b.filter(qualifier='requiv*', context='component'))
b['requiv@primary'] = 2.2
print(b.run_checks())
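# Setting requiv back below the critical value clears the overflow warning; the checks
# should then pass again (1.0 is assumed here as the default value for the primary).
b['requiv@primary'] = 1.0
print(b.run_checks())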
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We load the boston house-prices dataset and X are our features and y is the target variable medv (Median value of owner-occupied homes in $1000s).
Step2: Let's split the data in a test and training set.
Step3: Fitting models - the standard way
Step4: Full model with an intercept
Step5: Fitting models using R-style formulas
Step6: Full model with an intercept
Step7: Model with a polynomial and the target variable log transformed
Step8: Let's plot the QQ-Plot for the residuals
|
<ASSISTANT_TASK:>
Python Code:
import statsmodels.api as sm
import statsmodels.formula.api as smf
import statsmodels.tools.eval_measures as eval_measures
import seaborn as sns
import scipy.stats as stats
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
from sklearn import datasets, model_selection, metrics
boston = datasets.load_boston()
print(boston.DESCR)
X = pd.DataFrame(boston.data, columns=boston.feature_names)
y = boston.target
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, train_size=0.7)
df_train = pd.DataFrame(y_train, columns=['target'])
df_train['type'] = 'train'
df_test = pd.DataFrame(y_test, columns=['target'])
df_test['type'] = 'test'
df_set = df_train.append(df_test)
_ = sns.displot(df_set, x="target" ,hue="type", kind="kde", log_scale=False)
model = sm.OLS(y_train, X_train)
result = model.fit()
print(result.summary())
_ = sm.qqplot(result.resid, fit=True, line="s")
result.pvalues < 0.05
predicted = result.predict(X_test)
print("r2 score: {}".format(metrics.r2_score(y_test, predicted)))
print("mse: {}".format(metrics.mean_squared_error(y_test, predicted)))
print("rmse: {}".format(np.sqrt(metrics.mean_squared_error(y_test, predicted))))
print("mae: {}".format(metrics.mean_absolute_error(y_test, predicted)))
model = sm.OLS(y_train, sm.add_constant(X_train))
result = model.fit()
print(result.summary())
_ = sm.qqplot(result.resid, fit=True, line="s")
predicted = result.predict(sm.add_constant(X_test))
print("r2 score: {}".format(metrics.r2_score(y_test, predicted)))
print("mse: {}".format(metrics.mean_squared_error(y_test, predicted)))
print("rmse: {}".format(np.sqrt(metrics.mean_squared_error(y_test, predicted))))
print("mae: {}".format(metrics.mean_absolute_error(y_test, predicted)))
dat = X_train.copy()
dat['MEDV'] = y_train
dat.head()
result = smf.ols('MEDV ~ CRIM + ZN + INDUS + CHAS + NOX + RM + AGE + DIS + RAD + TAX + PTRATIO + B', data=dat).fit()
print(result.summary())
_ = sm.qqplot(result.resid, fit=True, line="s")
predicted = result.predict(X_test)
print("r2 score: {}".format(metrics.r2_score(y_test, predicted)))
print("mse: {}".format(metrics.mean_squared_error(y_test, predicted)))
print("rmse: {}".format(np.sqrt(metrics.mean_squared_error(y_test, predicted))))
print("mae: {}".format(metrics.mean_absolute_error(y_test, predicted)))
result = smf.ols('np.log(MEDV) ~ CRIM + CHAS + NOX + RM + DIS + RAD + TAX + PTRATIO + B + pow(AGE, 2)', data=dat).fit()
print(result.summary())
result.pvalues < 0.05
predicted = np.exp(result.predict(X_test))
print("r2 score: {}".format(metrics.r2_score(y_test, predicted)))
print("mse: {}".format(metrics.mean_squared_error(y_test, predicted)))
print("rmse: {}".format(np.sqrt(metrics.mean_squared_error(y_test, predicted))))
print("mae: {}".format(metrics.mean_absolute_error(y_test, predicted)))
_ = sm.qqplot(result.resid, fit=True, line="q")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Start TensorBoard
Step2: Build Synthetic Data
Step3: Build Datasets
Step4: Generate a plot from an Estimator
Step5: Using numeric_column with DNNRegressor
Step6: Using bucketized_column
Step7: Using crossed_column on its own.
Step8: Using raw categories with crossed_column
Step9: Open TensorBoard
|
<ASSISTANT_TASK:>
Python Code:
import os
import subprocess
import tempfile
import tensorflow as tf
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
assert tf.VERSION.split('.') >= ['1','4']
%matplotlib inline
mpl.rcParams['figure.figsize'] = 12, 6
mpl.rcParams['image.cmap'] = 'viridis'
logdir = tempfile.mkdtemp()
logdir
subprocess.Popen(['pkill','-f','tensorboard'])
subprocess.Popen(['tensorboard', '--logdir', logdir])
# Define the grid
min_latitude = 33.641336
max_latitude = 33.887157
delta_latitude = max_latitude-min_latitude
min_longitude = -84.558798
max_longitude = -84.287259
delta_longitude = max_longitude-min_longitude
resolution = 100
# Use RandomState so the behavior is repeatable.
R = np.random.RandomState(1)
# The price data will be a sum of Gaussians, at random locations.
n_centers = 20
centers = R.rand(n_centers, 2) # shape: (centers, dimensions)
# Each Gaussian has a maximum price contribution, at the center.
# price_delta holds that per-center maximum contribution, drawn uniformly from [0.5, 2.5).
price_delta = 0.5+2*R.rand(n_centers)
# Each Gaussian also has a standard-deviation and variance.
std = 0.2*R.rand(n_centers) # shape: (centers)
var = std**2
def price(latitude, longitude):
# Convert latitude, longitude to x,y in [0,1]
x = (longitude - min_longitude)/delta_longitude
y = (latitude - min_latitude)/delta_latitude
# Cache the shape, and flatten the inputs.
shape = x.shape
assert y.shape == x.shape
x = x.flatten()
y = y.flatten()
# Convert x, y examples into an array with shape (examples, dimensions)
xy = np.array([x,y]).T
# Calculate the square distance from each example to each center.
components2 = (xy[:,None,:] - centers[None,:,:])**2 # shape: (examples, centers, dimensions)
r2 = components2.sum(axis=2) # shape: (examples, centers)
# Calculate the z**2 for each example from each center.
z2 = r2/var[None,:]
price = (np.exp(-z2)*price_delta).sum(1) # shape: (examples,)
# Restore the original shape.
return price.reshape(shape)
# Build the grid. We want `resolution` cells between `min` and `max` on each dimension
# so we need `resolution+1` evenly spaced edges. The centers are at the average of the
# upper and lower edge.
latitude_edges = np.linspace(min_latitude, max_latitude, resolution+1)
latitude_centers = (latitude_edges[:-1] + latitude_edges[1:])/2
longitude_edges = np.linspace(min_longitude, max_longitude, resolution+1)
longitude_centers = (longitude_edges[:-1] + longitude_edges[1:])/2
latitude_grid, longitude_grid = np.meshgrid(
latitude_centers,
longitude_centers)
# Evaluate the price at each center-point
actual_price_grid = price(latitude_grid, longitude_grid)
price_min = actual_price_grid.min()
price_max = actual_price_grid.max()
price_mean = actual_price_grid.mean()
price_mean
def show_price(price):
plt.imshow(
price,
# The color axis goes from `price_min` to `price_max`.
vmin=price_min, vmax=price_max,
# Put the image at the correct latitude and longitude.
extent=(min_longitude, max_longitude, min_latitude, max_latitude),
# Make the image square.
aspect = 1.0*delta_longitude/delta_latitude)
show_price(actual_price_grid)
# For test data we will use the grid centers.
test_features = {'latitude':latitude_grid.flatten(), 'longitude':longitude_grid.flatten()}
test_ds = tf.data.Dataset.from_tensor_slices((test_features,
actual_price_grid.flatten()))
test_ds = test_ds.cache().batch(512).prefetch(1)
# For training data we will use a set of random points.
train_latitude = min_latitude + np.random.rand(50000)*delta_latitude
train_longitude = min_longitude + np.random.rand(50000)*delta_longitude
train_price = price(train_latitude, train_longitude)
train_features = {'latitude':train_latitude, 'longitude':train_longitude}
train_ds = tf.data.Dataset.from_tensor_slices((train_features, train_price))
train_ds = train_ds.cache().repeat().shuffle(100000).batch(512).prefetch(1)
# A shortcut to build an `input_fn` from a `Dataset`
def in_fn(ds):
return lambda : ds.make_one_shot_iterator().get_next()
def plot_est(est, ds = test_ds):
# Create two plot axes
actual, predicted = plt.subplot(1,2,1), plt.subplot(1,2,2)
# Plot the actual price.
plt.sca(actual)
show_price(actual_price_grid.reshape(resolution, resolution))
# Generate predictions over the grid from the estimator.
pred = est.predict(in_fn(ds))
# Convert them to a numpy array.
pred = np.fromiter((item['predictions'] for item in pred), np.float32)
# Plot the predictions on the second axis.
plt.sca(predicted)
show_price(pred.reshape(resolution, resolution))
# Use `normalizer_fn` so that the model only sees values in [0, 1]
norm_latitude = lambda latitude:(latitude-min_latitude)/delta_latitude - 0.5
norm_longitude = lambda longitude:(longitude-min_longitude)/delta_longitude - 0.5
fc = [tf.feature_column.numeric_column('latitude', normalizer_fn = norm_latitude),
tf.feature_column.numeric_column('longitude', normalizer_fn = norm_longitude)]
# Build and train the Estimator
est = tf.estimator.DNNRegressor(
hidden_units=[100,100],
feature_columns=fc,
model_dir = os.path.join(logdir,'DNN'))
est.train(in_fn(train_ds), steps = 5000)
est.evaluate(in_fn(test_ds))
plot_est(est)
# Bucketize the latitude and longitude using the `edges`
latitude_bucket_fc = tf.feature_column.bucketized_column(
tf.feature_column.numeric_column('latitude'),
list(latitude_edges))
longitude_bucket_fc = tf.feature_column.bucketized_column(
tf.feature_column.numeric_column('longitude'),
list(longitude_edges))
fc = [
latitude_bucket_fc,
longitude_bucket_fc]
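# Each bucketized column one-hot encodes its input into len(edges)+1 buckets, so this
# linear model learns roughly one weight per latitude band plus one per longitude band --
# it can only express prices that decompose into a row effect plus a column effect.
print(len(latitude_edges) + 1, len(longitude_edges) + 1)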
# Build and train the Estimator.
est = tf.estimator.LinearRegressor(fc, model_dir = os.path.join(logdir,'separable'))
est.train(in_fn(train_ds), steps = 5000)
est.evaluate(in_fn(test_ds))
plot_est(est)
# Cross the bucketized columns, using 5000 hash bins (for an average weight sharing of 2).
crossed_lat_lon_fc = tf.feature_column.crossed_column(
[latitude_bucket_fc, longitude_bucket_fc], int(5e3))
fc = [crossed_lat_lon_fc]
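# With a 100x100 bucket grid there are ~10,000 (latitude, longitude) pairs hashed into
# 5,000 bins, so distant cells occasionally share a weight; raising the hash bucket count
# trades memory for fewer such collisions.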
# Build and train the Estimator.
est = tf.estimator.LinearRegressor(fc, model_dir=os.path.join(logdir, 'crossed'))
est.train(in_fn(train_ds), steps = 5000)
est.evaluate(in_fn(test_ds))
plot_est(est)
fc = [
latitude_bucket_fc,
longitude_bucket_fc,
crossed_lat_lon_fc]
# Build and train the Estimator.
est = tf.estimator.LinearRegressor(fc, model_dir=os.path.join(logdir, 'both'))
est.train(in_fn(train_ds), steps = 5000)
est.evaluate(in_fn(test_ds))
plot_est(est)
%%html
<iframe width="900" height="800" src="http://0.0.0.0:6006#scalars&_smoothingWeight=0.85" frameborder="0"></iframe>
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: The Python SDK is organized into layers
Step3: We now have a fully authenticated SDK client!
Step4: Now let's look at some different objects, like File and Folder.
Step5: The SDK makes it very easy to download and upload files.
Step6: The SDK provides information when an API call couldn't be completed.
Step7: The Pre-flight check API will verify that a file will be accepted by Box before you send all the bytes over the wire. It can be used for both first-time uploads, and uploading new versions of an existing File (on /files/[id]/content). If the call returns a 200, then you can proceed with a standard upload call. Preflight checks verify all permissions as if the file was actually uploaded including
|
<ASSISTANT_TASK:>
Python Code:
# Import two classes from the boxsdk module - Client and OAuth2
from boxsdk import Client, OAuth2
# Define client ID, client secret, and developer token.
CLIENT_ID = None
CLIENT_SECRET = None
ACCESS_TOKEN = None
# Read app info from text file
with open('app.cfg', 'r') as app_cfg:
CLIENT_ID = app_cfg.readline()
CLIENT_SECRET = app_cfg.readline()
ACCESS_TOKEN = app_cfg.readline()
from boxsdk.network.default_network import DefaultNetwork
from pprint import pformat
class LoggingNetwork(DefaultNetwork):
def request(self, method, url, access_token, **kwargs):
Base class override. Pretty-prints outgoing requests and incoming responses.
print '\x1b[36m{} {} {}\x1b[0m'.format(method, url, pformat(kwargs))
response = super(LoggingNetwork, self).request(
method, url, access_token, **kwargs
)
if response.ok:
print '\x1b[32m{}\x1b[0m'.format(response.content)
else:
print '\x1b[31m{}\n{}\n{}\x1b[0m'.format(
response.status_code,
response.headers,
pformat(response.content),
)
return response
# Create OAuth2 object. It's already authenticated, thanks to the developer token.
oauth2 = OAuth2(CLIENT_ID, CLIENT_SECRET, access_token=ACCESS_TOKEN)
# Create the authenticated client
client = Client(oauth2, LoggingNetwork())
# Get information about the logged in user (that's whoever owns the developer token)
my = client.user(user_id='me').get()
print my.name
print my.login
print my.avatar_url
root_folder = client.folder('0')
root_folder_with_info = root_folder.get()
# Save time and bandwidth by only asking for the folder owner
root_folder_with_limited_info = root_folder.get(fields=['owned_by'])
# Upload a file to Box!
from StringIO import StringIO
stream = StringIO()
stream.write('Box Python SDK test!')
stream.seek(0)
box_file = client.folder('0').upload_stream(stream, 'box-python-sdk-test.txt')
print box_file.name
# Download the file's contents from Box
print box_file.content()
print box_file.id
stream.seek(0)
box_file = client.folder('0').upload_stream(stream, 'box-python-sdk-test.txt')
stream.seek(0)
from boxsdk.exception import BoxAPIException
try:
box_file = client.folder('0').upload_stream(stream, 'box-python-sdk-test.txt', preflight_check=True)
except BoxAPIException:
pass
# See if we can find the file on Box using search (may need to wait for Box to index the file)
results = client.search('Box Python SDK test', 2, 0)
matching_results = (r for r in results if r.id == box_file.id)
for m in matching_results:
print m.name
print m.created_at
break
else:
print 'No match found'
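# The pre-flight step in the description also covers new versions of an existing
# file. A minimal sketch of that flow (an assumption: it relies on boxsdk's
# update_contents_with_stream method for replacing a file's contents):
stream.seek(0)
box_file.update_contents_with_stream(stream)
print box_file.content()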
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2 Use a class decorator
Step3: Use a GetInstance method (not thread-safe)
|
<ASSISTANT_TASK:>
Python Code:
class Singleton(object):
def __new__(cls, *args, **kwargs):
if not hasattr(cls, '_instance'):
cls._instance = super(Singleton, cls).__new__(cls, *args, **kwargs)
return cls._instance
class MyClass(object):
pass
single1 = Singleton()
single2 = Singleton()
myclass1 = MyClass()
myclass2 = MyClass()
print id(single1) == id(single2)
print id(myclass1) == id(myclass2)
from functools import wraps
def singleton(cls):
instances = {}
@wraps(cls)
def wrapper(*args, **kwargs):
if cls not in instances:
instances[cls] = cls(*args, **kwargs)
return instances[cls]
return wrapper
@singleton
class MyClass(object):
pass
myclass1 = MyClass()
myclass2 = MyClass()
print id(myclass1) == id(myclass2)
class MySingleton(object):
@classmethod
def getInstance(cls):
if not hasattr(cls, '_instance'):
cls._instance = cls()
return cls._instance
mysingleton1 = MySingleton.getInstance()
mysingleton2 = MySingleton.getInstance()
print id(mysingleton1) == id(mysingleton2)
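# The GetInstance approach above is not thread-safe: two threads can both see the
# missing attribute and each build an instance. A minimal thread-safe sketch (an
# addition, not part of the original examples) guards the check with a lock.
import threading

class ThreadSafeSingleton(object):
    _lock = threading.Lock()

    @classmethod
    def getInstance(cls):
        with cls._lock:
            if not hasattr(cls, '_instance'):
                cls._instance = cls()
        return cls._instance

threadsafe1 = ThreadSafeSingleton.getInstance()
threadsafe2 = ThreadSafeSingleton.getInstance()
print id(threadsafe1) == id(threadsafe2)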
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part II
Step2: Now, let's inspect the images SFrame. The 'extracted_features' column contains vector representations of the data, as we expected it to.
Step3: Part III
Step4: similar_images is an SFrame which contains a query label and its neighbor, the reference label
Step5: We do some cleaning to remove the instances where the query equals the reference. This happened because the query set was identical to the reference set
Step6: Now we can explore similar images. For instance, the closest image to image 9 is image 1710. We can view and see both are starfish
Step7: Similarly, images 0 and 1535 are two similar photos of the same person
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
images = graphlab.SFrame('https://static.turi.com/datasets/caltech_101/caltech_101_images')
# Only do this if you have a GPU
#pretrained_model = graphlab.load_model('https://static.turi.com/models/imagenet_model_iter45')
#images['extracted_features'] = pretrained_model.extract_features(images)
# If you do not have a GPU, do this instead.
images['extracted_features'] = graphlab.SArray('https://static.turi.com/models/pre_extracted_features.gl')
images
nearest_neighbor_model = graphlab.nearest_neighbors.create(images, features=['extracted_features'])
similar_images = nearest_neighbor_model.query(images, k = 2)
similar_images
similar_images = similar_images[similar_images['query_label'] != similar_images['reference_label']]
similar_images
graphlab.canvas.set_target('ipynb')
graphlab.SArray([images['image'][9]]).show()
graphlab.SArray([images['image'][1710]]).show()
graphlab.SArray([images['image'][0]]).show()
graphlab.SArray([images['image'][1535]]).show()
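# Convenience sketch (not part of the original notebook): look up and display the
# closest match for any image by its row index, reusing nearest_neighbor_model.
def show_most_similar(row_id):
    neighbors = nearest_neighbor_model.query(images[row_id:row_id + 1], k=2)
    neighbors = neighbors[neighbors['reference_label'] != row_id]
    reference_id = neighbors['reference_label'][0]
    graphlab.SArray([images['image'][row_id]]).show()
    graphlab.SArray([images['image'][reference_id]]).show()

show_most_similar(9)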
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next we define the function which returns the data iterators
Step2: We then download a pretrained 50-layer ResNet model and load it into memory. Note that if load_checkpoint reports an error, we can remove the downloaded files and try get_model again.
Step4: Train
Step5: Now we create a module. We pass the argument parameters of the pre-trained model to replace all parameters except for the last fully-connected layer. For the last fully-connected layer, we use an initializer to initialize.
Step6: Then we can start training. We use AWS EC2 g2.8xlarge, which has 8 GPUs.
Step7: As you can see, after only 8 epochs, we can get 78% validation accuracy. This matches the state-of-the-art results training on caltech-256 alone, e.g. VGG.
|
<ASSISTANT_TASK:>
Python Code:
import os, urllib
def download(url):
filename = url.split("/")[-1]
if not os.path.exists(filename):
urllib.urlretrieve(url, filename)
download('http://data.mxnet.io/data/caltech-256/caltech-256-60-train.rec')
download('http://data.mxnet.io/data/caltech-256/caltech-256-60-val.rec')
import mxnet as mx
def get_iterators(batch_size, data_shape=(3, 224, 224)):
train = mx.io.ImageRecordIter(
path_imgrec = './caltech-256-60-train.rec',
data_name = 'data',
label_name = 'softmax_label',
batch_size = batch_size,
data_shape = data_shape,
shuffle = True,
rand_crop = True,
rand_mirror = True)
val = mx.io.ImageRecordIter(
path_imgrec = './caltech-256-60-val.rec',
data_name = 'data',
label_name = 'softmax_label',
batch_size = batch_size,
data_shape = data_shape,
rand_crop = False,
rand_mirror = False)
return (train, val)
def get_model(prefix, epoch):
download(prefix+'-symbol.json')
download(prefix+'-%04d.params' % (epoch,))
get_model('http://data.mxnet.io/models/imagenet/resnet/50-layers/resnet-50', 0)
sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-50', 0)
def get_fine_tune_model(symbol, arg_params, num_classes, layer_name='flatten0'):
symbol: the pre-trained network symbol
arg_params: the argument parameters of the pre-trained model
num_classes: the number of classes for the fine-tune datasets
layer_name: the layer name before the last fully-connected layer
all_layers = sym.get_internals()
net = all_layers[layer_name+'_output']
net = mx.symbol.FullyConnected(data=net, num_hidden=num_classes, name='fc1')
net = mx.symbol.SoftmaxOutput(data=net, name='softmax')
new_args = dict({k:arg_params[k] for k in arg_params if 'fc1' not in k})
return (net, new_args)
import logging
head = '%(asctime)-15s %(message)s'
logging.basicConfig(level=logging.DEBUG, format=head)
def fit(symbol, arg_params, aux_params, train, val, batch_size, num_gpus):
devs = [mx.gpu(i) for i in range(num_gpus)]
    mod = mx.mod.Module(symbol=symbol, context=devs)  # use the symbol argument rather than the global new_sym
mod.fit(train, val,
num_epoch=8,
arg_params=arg_params,
aux_params=aux_params,
allow_missing=True,
batch_end_callback = mx.callback.Speedometer(batch_size, 10),
kvstore='device',
optimizer='sgd',
optimizer_params={'learning_rate':0.01},
initializer=mx.init.Xavier(rnd_type='gaussian', factor_type="in", magnitude=2),
eval_metric='acc')
metric = mx.metric.Accuracy()
return mod.score(val, metric)
# @@@ AUTOTEST_OUTPUT_IGNORED_CELL
num_classes = 256
batch_per_gpu = 16
num_gpus = 8
(new_sym, new_args) = get_fine_tune_model(sym, arg_params, num_classes)
batch_size = batch_per_gpu * num_gpus
(train, val) = get_iterators(batch_size)
mod_score = fit(new_sym, new_args, aux_params, train, val, batch_size, num_gpus)
assert mod_score > 0.77, "Low training accuracy."
# @@@ AUTOTEST_OUTPUT_IGNORED_CELL
get_model('http://data.mxnet.io/models/imagenet-11k/resnet-152/resnet-152', 0)
sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-152', 0)
(new_sym, new_args) = get_fine_tune_model(sym, arg_params, num_classes)
mod_score = fit(new_sym, new_args, aux_params, train, val, batch_size, num_gpus)
assert mod_score > 0.86, "Low training accuracy."
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use a StarCatalog to organize data
Step2: Fit models
Step3: Analyze samples
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from isochrones import get_ichrone
bands = ['J', 'H', 'K', 'G', 'BP', 'RP']
mist = get_ichrone('mist', bands=bands)
from itertools import product
primary_masses = [0.8, 1.0]
mass_ratios = [0.5, 0.9]
feh_grid = [-0.25, 0.0]
age = 9.7
distance = 500
AV = 0.
m1, m2, feh, name = zip(*[(m, q*m, f, f'{m:.2f}_{q*m:0.2f}_{f:0.2f}')
for m, q, f
in product(primary_masses, mass_ratios, feh_grid)])
df = mist.generate_binary(m1, m2, age, feh, distance=distance, AV=AV, accurate=True)
# add uncertainties for each band
uncs = {'J': 0.02, 'H': 0.02, 'K':0.02, 'G': 0.002, 'BP': 0.002, 'RP':0.002}
for b in bands:
df[f'{b}_mag_unc'] = uncs[b]
# Add parallax & uncertainty
df['parallax'] = 1000/distance
df['parallax_unc'] = 0.02
df.index = name
from isochrones.catalog import StarCatalog
from isochrones.priors import FlatPrior
catalog = StarCatalog(df, bands=bands, props=['parallax'])
catalog.set_prior(AV=FlatPrior((0, 0.0001)), age=FlatPrior((8.5, 10)))
from multiprocessing import Pool
def fit_model(mod):
print(mod.mnest_basename)
mod.fit(verbose=True)
return mod.derived_samples
pool = Pool(processes=8) # e.g.
samples = pool.map(fit_model, catalog.iter_models(N=2))
cols = ['mass_0', 'mass_1', 'age', 'feh', 'AV']
qs = [0.05, 0.16, 0.5, 0.84, 0.95]
for name, samps in zip(catalog.df.index, samples):
print(name)
print(samps[cols].quantile(qs))
from corner import corner
corner(samples[-1][['mass_0', 'mass_1', 'age', 'feh', 'distance']]);
corner(samples[-1][['J_mag', 'K_mag', 'G_mag', 'BP_mag', 'RP_mag']]);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Session 3
Step2: Ex 3.1
Step3: cvs module
Step4: Ex 3.2
Step5: Writing your own module
Step6: Ex 3.3
Step7: Ex 3.4
|
<ASSISTANT_TASK:>
Python Code:
# built-in functions
seq = 'ATCCTGCTAAA'
print(len(seq))
# your own function
def gc_content(seq):
gc = 0
for base in seq:
if (base == 'C') or (base == 'G'):
gc += 1
return gc
print(gc_content('ATCCTGCTAAA'))
print(gc_content('GGGCCCCTTTA'))
import math
print(math.pi)
import os.path
seq_filename = os.path.join('data', 'seq.txt')
print(os.path.exists(seq_filename))
print(os.path.dirname(seq_filename))
print(os.path.basename(seq_filename))
data_filename = os.path.join('data', 'genes.txt')
if os.path.exists(data_filename):
with open(data_filename) as data:
header = data.readline()
with open('results.txt', 'w') as out:
for line in data:
#gene, chrom, start, end = line.strip().split()
row = line.strip().split()
print(int(row[3])-int(row[2]))
#out.write('{}\t{}\n'.format(gene, int(end)-int(start)+1))
else:
print('{} file does not exist'.format(data_filename))
def gc_content(seq):
gc = 0
for base in seq:
if (base == 'C') or (base == 'G'):
gc += 1
return gc
seq_filename = os.path.join('data', 'seq.txt')
if os.path.exists(seq_filename):
with open (seq_filename) as data:
with open('gc_content_data.csv', 'w') as out:
for line in data:
seq = line.strip()
out.write('{},{}\n'.format(seq, gc_content(seq)))
import csv
data_filename = 'gc_content_data.csv'
if os.path.exists(data_filename):
with open(data_filename) as data:
reader = csv.reader(data, delimiter=',')
for row in reader:
print(row)
import csv
# add column header to data file called seq,gc
data_filename = 'gc_content_data.csv'
results = []
if os.path.exists(data_filename):
with open(data_filename) as data:
reader = csv.DictReader(data, delimiter=',')
for row in reader:
results.append(row)
for r in results:
print('{}\t{}'.format(r['seq'], r['gc']))
import csv
with open('output.txt', 'w') as out:
writer = csv.DictWriter(out, fieldnames=['seq', 'gc'], delimiter='\t')
for r in results:
writer.writerow(r)
data_filename = os.path.join('data', 'genes.txt')
results = []
if os.path.exists(data_filename):
with open(data_filename) as data:
reader = csv.DictReader(data, delimiter='\t')
for row in reader:
results.append({'gene': row['gene'], 'len': int(row['end'])-int(row['start'])+1})
else:
print('{} file does not exist'.format(data_filename))
with open('results_with_csv.txt', 'w') as out:
writer = csv.DictWriter(out, fieldnames=['gene', 'len'], delimiter='\t')
for r in results:
writer.writerow(r)
#print(results)
import pandas
data = pandas.read_csv('results_with_csv.txt', sep='\t')
print(data)
def gc_content(seq):
gc = 0
for base in seq:
if (base == 'C') or (base == 'G'):
gc += 1
return gc
import tools
print(tools.gc_content('CCCTTCGCTT'))
from tools import gc_content
print(gc_content('AAAAA'))
def extract_seq(seq, window_size):
results = []
nb_windows = len(seq) - window_size + 1
for i in range(nb_windows):
results.append(seq[i:i+window_size])
return results
seq = 'ATTCCGGGCCTTAAAA'
print(extract_seq(seq, 5))
import tools
seq = 'ATTCCGGGCCTTAAAA'
for sub_seq in tools.extract_seq(seq, 5):
print(tools.gc_content(sub_seq))
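# Putting the module functions together (an illustrative sketch, not a required
# exercise): GC content for every 5-base window, written out with csv.DictWriter
# as in the earlier examples.
import csv

rows = [{'window': w, 'gc': tools.gc_content(w)}
        for w in tools.extract_seq('ATTCCGGGCCTTAAAA', 5)]
with open('window_gc.txt', 'w') as out:
    writer = csv.DictWriter(out, fieldnames=['window', 'gc'], delimiter='\t')
    writer.writeheader()
    for row in rows:
        writer.writerow(row)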
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Chapter 12
Step2: Note the syntax used
Step3: In this case we wouldn't have to specify where our machine should find the randint() function
Step4: If we like the randint() method, we might also be interested in which other methods are offered in the random package. We can use dir() for this
Step5: You can also (temporarily) change the names of the functions you import
Step6: If you want to import the entire module, it works the same way (using the syntax we have already seen above). Changing the name can be useful if we want to shorten it (as shown below).
Step7: 3. Other useful modules
Step8: We can decipher the output ourselves
Step9: 3.2 Requests
Step10: Don't worry too much about the response
Step11: Exercise 2
|
<ASSISTANT_TASK:>
Python Code:
%%capture
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Data.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/images.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Extra_Material.zip
!unzip Data.zip -d ../
!unzip images.zip -d ./
!unzip Extra_Material.zip -d ../
!rm Data.zip
!rm Extra_Material.zip
!rm images.zip
import random
print(random.randint(0, 100))
from random import randint
print(randint(0, 100))
help(randint)
dir(random)
from random import randint as random_number
print(random_number(0, 100))
import random as rn
print(rn.randint(0,4))
import datetime
print(datetime.datetime.now())
help(datetime.datetime.now)
import requests
a=requests.get("https://tae898.github.io")
print(a.content)
# your code here
# play around here
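# One possible starting point for the exercise (a sketch, not the only answer):
# inspect a few attributes of the Response object returned by requests.get().
import requests

response = requests.get("https://tae898.github.io")
print(response.status_code)                   # 200 means the request succeeded
print(response.headers.get('Content-Type'))   # what kind of content came back
print(response.text[:200])                    # the first 200 characters of the page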
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Flower power
Step2: ConvNet Codes
Step3: Below I'm running images through the VGG network in batches.
Step4: Building the Classifier
Step5: Data prep
Step6: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
Step7: If you did it right, you should see these sizes for the training sets
Step9: Batches!
Step10: Training
Step11: Testing
Step12: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
|
<ASSISTANT_TASK:>
Python Code:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
# Set the batch size higher if you can fit in in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
images = np.concatenate(batch)
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
lb.fit(labels)
labels_vecs = lb.transform(labels)
from sklearn.model_selection import StratifiedShuffleSplit
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_idx, val_idx = next(ss.split(codes, labels))
half_val_len = int(len(val_idx)/2)
val_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:]
train_x, train_y = codes[train_idx], labels_vecs[train_idx]
val_x, val_y = codes[val_idx], labels_vecs[val_idx]
test_x, test_y = codes[test_idx], labels_vecs[test_idx]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
fc = tf.contrib.layers.fully_connected(inputs_, 256)
logits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer().minimize(cost)
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, n_batches=10):
Return a generator that yields batches from arrays x and y.
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
epochs = 10
iteration = 0
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in get_batches(train_x, train_y):
feed = {inputs_: x,
labels_: y}
loss, _ = sess.run([cost, optimizer], feed_dict=feed)
print("Epoch: {}/{}".format(e+1, epochs),
"Iteration: {}".format(iteration),
"Training loss: {:.5f}".format(loss))
iteration += 1
if iteration % 5 == 0:
feed = {inputs_: val_x,
labels_: val_y}
val_acc = sess.run(accuracy, feed_dict=feed)
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Validation Acc: {:.4f}".format(val_acc))
saver.save(sess, "checkpoints/flowers.ckpt")
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
test_img_path = 'flower_photos/sunflowers/1008566138_6927679c8a.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
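# Small addition (not in the original notebook): report the single most likely
# class name alongside the bar chart above.
predicted_class = lb.classes_[np.argmax(prediction)]
print("Most likely class: {} ({:.1f}%)".format(predicted_class, 100 * prediction.max()))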
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
s = pd.Series([1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0.98,0.93],
index=['146tf150p','havent','home','okie','thanx','er','anything','lei','nite','yup','thank','ok','where','beerage','anytime','too','done','645','tick','blank'])
import numpy as np
def g(s):
return s.iloc[np.lexsort([s.index, s.values])]
result = g(s.copy())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem 1b
Step2: Problem 1c
Step3: Problem 1d
Step4: Problem 2) A Brief Review of Fourier Analysis
Step5: The common Fourier pairs are especially useful in light of the convolution theorem. Fourier transforms convert convolutions into point-wise products. We define a convolution as $[f \ast g](t) = \int_{-\infty}^{\infty} f(\tau)\,g(t - \tau)\,d\tau$.
Step6: Sampling a signal directly at the Nyquist frequency results in a lack of any variability. But does this just mean that $f_\mathrm{Ny}$ is special? What happens at $f > f_\mathrm{Ny}$?
Step7: From the plot the signal is clearly variable (unlike when $f = f_\mathrm{Ny}$). However, there are fewer than 2 observations per cycle.
Step8: The observations are identical! Here is what you need to remember about the Nyquist frequency
Step9: Problem 3b
Step10: Problem 3c
Step11: Problem 3d
Step12: Problem 3e
Step13: Problem 3f
Step14: Congratulations
Step15: Problem 4) Other Considerations and Faster Implementations
Step16: Unlike ls_periodogram, LombScargle effectively takes no time to run on the simulated data.
Step17: There are many choices regarding the calculation of the periodogram, so read the docs.
Step18: We can see that the best fit model doesn't match the signal in the case where we do not allow a floating mean.
Step19: Window Functions
Step20: Problem 4e
Step21: Interestingly, there are very strong peaks in the data at $P \approx 3\,\mathrm{d} \;\&\; 365\,\mathrm{d}$.
Step22: Uncertainty on the best-fit period
Step23: Problem 5b
Step24: Problem 5c
Step25: Herein lies a fundamental issue regarding the LS periodogram
Step26: One way to combat this very specific issue is to include more Fourier terms at the harmonic of the best fit period. This is easy to implement in LombScargle with the nterms keyword. [Though always be wary of adding degrees of freedom to a model, especially at the large pipeline level of analysis.]
Step27: Interestingly, for the $n=2, 3, 4$ harmonics, it appears as though we get the period that we have visually confirmed. However, by $n=5$ harmonics we no longer get a reasonable answer. Again - be very careful about adding harmonics, especially in large analysis pipelines.
Step28: This example also shows why it is somewhat strange to provide an uncertainty with a LS best-fit period. Errors tend to be catastrophic, and not some small fractional percentage, with the LS periodogram (we'll explore this a little further later).
|
<ASSISTANT_TASK:>
Python Code:
def gen_periodic_data( # complete
y = # complete
return y
x = # complete
y = gen_periodic_data( # complete
fig, ax = plt.subplots()
ax.scatter( # complete
# complete
# complete
fig.tight_layout()
def phase_plot( # complete
phases = # complete
# complete
# complete
fig, ax = plt.subplots()
ax.errorbar( # complete
# complete
# complete
fig.tight_layout()
phase_plot( # complete
fourier_pairs_plot()
x = # complete
y = # complete
x_signal = # complete
y_signal = # complete
fig, ax = plt.subplots()
ax.scatter( # complete
ax.plot( # complete
fig.tight_layout()
x = # complete
y = # complete
x_signal = # complete
y_signal = # complete
fig, ax = plt.subplots()
ax.scatter( # complete
ax.plot( # complete
fig.tight_layout()
x = # complete
y = # complete
x_signal = # complete
y_signal = # complete
# complete
# complete
# complete
# complete
def chi2( # complete
# complete
# complete
# complete
def min_chi2( # complete
# complete
# complete
def ls_periodogram( # complete
psd = np.empty_like(f_grid)
chi2_0 = # complete
for f_num, f in enumerate(f_grid):
psd[f_num] = # complete
return psd
np.random.seed(23)
# calculate the periodogram
x = # complete
y = # complete
y_unc = # complete
f_grid = # complete
psd_ls = # complete
# plot the periodogram
fig, ax = plt.subplots()
ax.plot( # complete
# complete
# complete
fig.tight_layout()
# calculate the periodogram
f_grid = # complete
psd_ls = # complete
# plot the periodogram
fig,ax = plt.subplots()
ax.plot(# complete
# complete
# complete
fig.tight_layout()
print("The best fit period is: {:.4f}".format( # complete
phase_plot( # complete
f_min = # complete
f_max = # complete
delta_f = # complete
f_grid = np.arange( # complete
print("{:d} grid points are needed to sample the periodogram".format( # complete
from astropy.stats import LombScargle
frequency, power = LombScargle(x, y, y_unc).autopower()
fig, ax = plt.subplots()
ax.plot( # complete
# complete
# complete
# complete
fig.tight_layout()
# complete
freq_no_mean, power_no_mean = LombScargle( # complete
freq_fit_mean, power_fit_mean = LombScargle( # complete
fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot( # complete
ax2.plot( # complete
ax1.set_xlim(0,15)
fig.tight_layout()
fit_mean_model = LombScargle(x[bright], y[bright], y_unc[bright],
fit_mean=True).model(np.linspace(0,10,1000),
freq_fit_mean[np.argmax(power_fit_mean)])
no_mean_model = LombScargle(x[bright], y[bright], y_unc[bright],
fit_mean=False).model(np.linspace(0,10,1000),
freq_no_mean[np.argmax(power_no_mean)])
fig, ax = plt.subplots()
ax.errorbar(x[bright], y[bright], y_unc[bright], fmt='o', label='data')
ax.plot(np.linspace(0,10,1000), fit_mean_model, label='fit mean')
ax.plot(np.linspace(0,10,1000), no_mean_model, label='no mean')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.legend()
fig.tight_layout()
# set up simulated observations
t_obs = np.arange(0, 10*365, 3) # 3d cadence
# complete
# complete
# complete
y = gen_periodic_data( # complete
# complete
fig, ax = plt.subplots()
ax.errorbar( # complete
ax.set_xlabel("Time (d)")
ax.set_ylabel("Flux (arbitrary units)")
ls = LombScargle( # complete
freq_window, power_window = # complete
fig, ax = plt.subplots()
ax.plot( # complete
ax.set_ylabel("Power")
ax.set_xlabel("Period (d)")
ax.set_xlim(0,500)
axins = plt.axes([.2, .65, .5, .2])
axins.plot( # complete
axins.set_xlim(0,5)
ls = LombScargle( # complete
frequency, power = # complete
fig, (ax,ax2) = plt.subplots(2,1, sharex=True)
ax.plot( # complete
ax.set_ylabel("Power")
ax.set_ylim(0,1)
ax2.plot( # complete
ax2.set_ylabel("Power")
ax2.set_xlabel("Period (d)")
ax2.set_xlim(0,10)
fig.tight_layout()
data = # complete
fig, ax = plt.subplots()
ax.errorbar( # complete
ax.set_xlabel('HJD (d)')
ax.set_ylabel('V (mag)')
ax.set_ylim(ax.get_ylim()[::-1])
fig.tight_layout()
frequency, power = # complete
# complete
fig,ax = plt.subplots()
ax.plot(# complete
ax.set_ylabel("Power")
ax.set_xlabel("Period (d)")
ax.set_xlim(0, 800)
axins = plt.axes([.25, .55, .6, .3])
axins.plot( # complete
axins.set_xlim(0,5)
fig.tight_layout()
# plot the phase folded light curve
phase_plot( # complete
phase_plot( # complete
fig, ax = plt.subplots()
ax.errorbar(data['hjd']/ls_period % 1, data['mag'], data['mag_unc'], fmt='o', zorder=-1)
ax.plot(np.linspace(0,1,1000),
LombScargle(data['hjd'],data['mag'], data['mag_unc']).model(np.linspace(0,1,1000)*ls_period, 1/ls_period) )
ax.set_xlabel('Phase')
ax.set_ylabel('V (mag)')
ax.set_ylim(ax.get_ylim()[::-1])
fig.tight_layout()
for i in np.arange(1,6):
frequency, power = # complete
# complete
print('For {:d} harmonics, P_LS = {:.8f}'.format( # complete
best_period = 0.73508568
fig, ax = plt.subplots()
ax.errorbar((data['hjd'])/best_period % 1, data['mag'], data['mag_unc'], fmt='o',zorder=-1)
ax.plot(np.linspace(0,1,1000),
LombScargle(data['hjd'],data['mag'], data['mag_unc'],
nterms=4).model(np.linspace(0,1,1000)*best_period, 1/best_period) )
ax.set_xlabel('Phase')
ax.set_ylabel('V (mag)')
ax.set_ylim(ax.get_ylim()[::-1])
fig.tight_layout()
def fourier_pairs_plot():
fig, ax = plt.subplots(4, 2, figsize=(10, 6))
fig.subplots_adjust(left=0.04, right=0.98, bottom=0.02, top=0.95,
hspace=0.3, wspace=0.2)
x = np.linspace(-5, 5, 1000)
for axi in ax.flat:
axi.xaxis.set_major_formatter(plt.NullFormatter())
axi.yaxis.set_major_formatter(plt.NullFormatter())
# draw center line
axi.axvline(0, linestyle='dotted', color='gray')
axi.axhline(0, linestyle='dotted', color='gray')
style_re = dict(linestyle='solid', color='k', linewidth=2)
style_im = dict(linestyle='solid', color='gray', linewidth=2)
text_style = dict(size=14, color='gray')
# sine -> delta
ax[0, 0].plot(x, np.cos(x),**style_re)
ax[0, 0].set(xlim=(-5, 5), ylim=(-1.2, 1.2))
ax[0, 0].annotate('', (-np.pi, 0), (np.pi, 0),
arrowprops=dict(arrowstyle='|-|', color='gray'))
ax[0, 0].text(0, 0, '$1/f_0$', ha='center', va='bottom', **text_style)
ax[0, 0].set_title('Sinusoid')
ax[0, 1].plot([-5, 2, 2, 2, 5], [0, 0, 1, 0, 0], **style_re)
ax[0, 1].plot([-5, -2, -2, -2, 5], [0, 0, 1, 0, 0], **style_re)
ax[0, 1].set(xlim=(-5, 5), ylim=(-0.2, 1.2))
ax[0, 1].annotate('', (0, 0.4), (2, 0.4), arrowprops=dict(arrowstyle='<-', color='gray'))
ax[0, 1].annotate('', (0, 0.4), (-2, 0.4), arrowprops=dict(arrowstyle='<-', color='gray'))
ax[0, 1].text(1, 0.45, '$+f_0$', ha='center', va='bottom', **text_style)
ax[0, 1].text(-1, 0.45, '$-f_0$', ha='center', va='bottom', **text_style)
ax[0, 1].set_title('Delta Functions')
# gaussian -> gaussian
ax[1, 0].plot(x, np.exp(-(2 * x) ** 2), **style_re)
ax[1, 0].set(xlim=(-5, 5), ylim=(-0.2, 1.2))
ax[1, 0].annotate('', (0, 0.35), (0.6, 0.35), arrowprops=dict(arrowstyle='<-', color='gray'))
ax[1, 0].text(0, 0.4, '$\sigma$', ha='center', va='bottom', **text_style)
ax[1, 0].set_title('Gaussian')
ax[1, 1].plot(x, np.exp(-(x / 2) ** 2), **style_re)
ax[1, 1].set(xlim=(-5, 5), ylim=(-0.2, 1.2))
ax[1, 1].annotate('', (0, 0.35), (2, 0.35), arrowprops=dict(arrowstyle='<-', color='gray'))
ax[1, 1].text(0, 0.4, '$(2\pi\sigma)^{-1}$', ha='center', va='bottom', **text_style)
ax[1, 1].set_title('Gaussian')
# top hat -> sinc
ax[2, 0].plot([-2, -1, -1, 1, 1, 2], [0, 0, 1, 1, 0, 0], **style_re)
ax[2, 0].set(xlim=(-2, 2), ylim=(-0.3, 1.2))
ax[2, 0].annotate('', (-1, 0.5), (1, 0.5), arrowprops=dict(arrowstyle='<->', color='gray'))
ax[2, 0].text(0.0, 0.5, '$T$', ha='center', va='bottom', **text_style)
ax[2, 0].set_title('Top Hat')
ax[2, 1].plot(x, np.sinc(x), **style_re)
ax[2, 1].set(xlim=(-5, 5), ylim=(-0.3, 1.2))
ax[2, 1].annotate('', (-1, 0), (1, 0), arrowprops=dict(arrowstyle='<->', color='gray'))
ax[2, 1].text(0.0, 0.0, '$2/T$', ha='center', va='bottom', **text_style)
ax[2, 1].set_title('Sinc')
# comb -> comb
ax[3, 0].plot([-5.5] + sum((3 * [i] for i in range(-5, 6)), []) + [5.5],
[0] + 11 * [0, 1, 0] + [0], **style_re)
ax[3, 0].set(xlim=(-5.5, 5.5), ylim=(-0.2, 1.2))
ax[3, 0].annotate('', (0, 0.5), (1, 0.5), arrowprops=dict(arrowstyle='<->', color='gray'))
ax[3, 0].text(0.5, 0.6, '$T$', ha='center', va='bottom', **text_style)
ax[3, 0].set_title('Dirac Comb')
ax[3, 1].plot([-5.5] + sum((3 * [i] for i in range(-5, 6)), []) + [5.5],
[0] + 11 * [0, 1, 0] + [0], **style_re)
ax[3, 1].set(xlim=(-2.5, 2.5), ylim=(-0.2, 1.2));
ax[3, 1].annotate('', (0, 0.5), (1, 0.5), arrowprops=dict(arrowstyle='<->', color='gray'))
ax[3, 1].text(0.5, 0.6, '$1/T$', ha='center', va='bottom', **text_style)
ax[3, 1].set_title('Dirac Comb')
for i, letter in enumerate('abcd'):
ax[i, 0].set_ylabel('({0})'.format(letter), rotation=0)
# Draw arrows between pairs of axes
for i in range(4):
left = ax[i, 0].bbox.inverse_transformed(fig.transFigure).bounds
right = ax[i, 1].bbox.inverse_transformed(fig.transFigure).bounds
x = 0.5 * (left[0] + left[2] + right[0])
y = left[1] + 0.5 * left[3]
fig.text(x, y, r'$\Longleftrightarrow$',
ha='center', va='center', size=30)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data generation
Step2: Model definition
Step3: Inference
Step4: After sampling, let's see how well our model fits the data. We compute the mean and standard deviation of the sampled counts and plot them against the original data.
Step5: But where do these values (and uncertainty) come from? Let's find out!
Step6: ...and do reparametrization (again please note that we get it from predictive!).
Step7: Now we plot reparametrized rate
Step8: We see that the probability of success rises with x. This means that it will take more and more trials before we observe those 28 failures imposed by the concentration parameter.
Step9: It indeed does. The red lines show that 28 successes and a rate of 0.5 occur at the same value of x.
Step10: Manually defined guide
Step11: Pyro's parameter store holds the learned parameters that will be used at the Predictive stage. Instead of providing samples, we pass the guide to construct the predictive distribution.
Step12: AutoGuide
Step13: As we check PARAM_STORE we see that each sample site is approximated with a normal distribution.
Step14: Finally we again construct a predictive distribution and plot counts. For all three methods we managed to get similar results for our parameters.
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.datasets import make_regression
import pyro.distributions as dist
from pyro.infer import MCMC, NUTS, Predictive
from pyro.infer.mcmc.util import summary
from pyro.distributions import constraints
import pyro
import torch
pyro.set_rng_seed(101)
%matplotlib inline
%config InlineBackend.figure_format='retina'
X, y = make_regression(n_features=1, bias=150., noise=5., random_state=108)
X_ = torch.tensor(X, dtype=torch.float)
y_ = torch.tensor((y**3)/100000. + 10., dtype=torch.float)
y_.round_().clamp_(min=0);
plt.scatter(X_, y_)
plt.ylabel('y')
plt.xlabel('x');
def model(features, counts):
N, P = features.shape
scale = pyro.sample("scale", dist.LogNormal(0, 1))
coef = pyro.sample("coef", dist.Normal(0, scale).expand([P]).to_event(1))
rate = pyro.deterministic("rate", torch.nn.functional.softplus(coef @ features.T))
concentration = pyro.sample("concentration", dist.LogNormal(0, 1))
with pyro.plate("bins", N):
return pyro.sample("counts", dist.GammaPoisson(concentration, rate), obs=counts)
nuts_kernel = NUTS(model)
mcmc = MCMC(nuts_kernel, num_samples=500)
%%time
mcmc.run(X_, y_);
samples = mcmc.get_samples()
for k, v in samples.items():
print(f"{k}: {tuple(v.shape)}")
predictive = Predictive(model, samples)(X_, None)
for k, v in predictive.items():
print(f"{k}: {tuple(v.shape)}")
def prepare_counts_df(predictive):
counts = predictive['counts'].numpy()
counts_mean = counts.mean(axis=0)
counts_std = counts.std(axis=0)
counts_df = pd.DataFrame({
"feat": X_.squeeze(),
"mean": counts_mean,
"high": counts_mean + counts_std,
"low": counts_mean - counts_std,
})
return counts_df.sort_values(by=['feat'])
counts_df = prepare_counts_df(predictive)
plt.scatter(X_, y_, c='r')
plt.ylabel('y')
plt.xlabel('x')
plt.plot(counts_df['feat'], counts_df['mean'])
plt.fill_between(counts_df['feat'], counts_df['high'], counts_df['low'], alpha=0.5);
print('Concentration mean: ', samples['concentration'].mean().item())
print('Concentration std: ', samples['concentration'].std().item())
print('Coef mean: ', samples['coef'].mean().item())
print('Coef std: ', samples['coef'].std().item())
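# For a fuller picture than the two moments above, the MCMC object can print
# per-site posterior summaries and convergence diagnostics (a sketch; the exact
# columns depend on the Pyro version).
mcmc.summary()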
rates = predictive['rate'].squeeze()
rates_reparam = 1. / (rates + 1.) # here's reparametrization
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.set_size_inches(13, 4)
ax1.scatter(X_, rates_reparam.mean(axis=0))
ax1.set_ylabel('mean')
ax1.set_xlabel('x')
ax1.set_title('rate means')
ax2.scatter(X_, rates_reparam.std(axis=0))
ax2.set_ylabel('std')
ax2.set_xlabel('x')
ax2.set_title('rate stds');
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.set_size_inches(13, 4)
ax1.scatter(X_, y_, c='r')
ax1.plot(counts_df['feat'], counts_df['mean'])
ax1.fill_between(counts_df['feat'], counts_df['high'], counts_df['low'], alpha=0.5)
ax1.axhline(samples['concentration'].mean().item(), c='g', linestyle='dashed')
ax1.axvline(-0.46, c='g', linestyle='dashed')
ax1.set_ylabel('y')
ax1.set_xlabel('x')
ax1.set_title('fitted model')
ax2.scatter(X_, rates_reparam.mean(axis=0))
ax2.axhline(0.5, c='g', linestyle='dashed')
ax2.axvline(-0.46, c='g', linestyle='dashed')
ax2.set_ylabel('mean')
ax2.set_xlabel('x')
ax2.set_title('rate means');
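# Where does the fitted mean curve come from? For GammaPoisson(concentration, rate)
# the expected count is concentration / rate, so as a rough sanity check (ignoring
# correlations between the two posteriors) this ratio should track the data.
expected_counts = samples['concentration'].mean() / rates.mean(axis=0)
plt.figure()
plt.scatter(X_, y_, c='r', label='data')
plt.scatter(X_, expected_counts, s=12, label='concentration / rate')
plt.xlabel('x')
plt.ylabel('y')
plt.legend();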
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam
from pyro.infer.autoguide import AutoNormal
def guide(features, counts):
N, P = features.shape
scale_param = pyro.param("scale_param", torch.tensor(0.1), constraint=constraints.positive)
loc_param = pyro.param("loc_param", torch.tensor(0.0))
scale = pyro.sample("scale", dist.Delta(scale_param))
coef = pyro.sample("coef", dist.Normal(loc_param, scale).expand([P]).to_event(1))
concentration_param = pyro.param("concentration_param", torch.tensor(0.1), constraint=constraints.positive)
concentration = pyro.sample("concentration", dist.Delta(concentration_param))
pyro.clear_param_store()
adam_params = {"lr": 0.005, "betas": (0.90, 0.999)}
optimizer = Adam(adam_params)
svi = SVI(model, guide, optimizer, loss=Trace_ELBO())
%%time
n_steps = 5001
for step in range(n_steps):
loss = svi.step(X_, y_)
if step % 1000 == 0:
print('Loss: ', loss)
list(pyro.get_param_store().items())
predictive_svi = Predictive(model, guide=guide, num_samples=500)(X_, None)
for k, v in predictive_svi.items():
print(f"{k}: {tuple(v.shape)}")
counts_df = prepare_counts_df(predictive_svi)
plt.scatter(X_, y_, c='r')
plt.ylabel('y')
plt.xlabel('x')
plt.plot(counts_df['feat'], counts_df['mean'])
plt.fill_between(counts_df['feat'], counts_df['high'], counts_df['low'], alpha=0.5);
pyro.clear_param_store()
adam_params = {"lr": 0.005, "betas": (0.90, 0.999)}
optimizer = Adam(adam_params)
auto_guide = AutoNormal(model)
svi = SVI(model, auto_guide, optimizer, loss=Trace_ELBO())
%%time
n_steps = 3001
for step in range(n_steps):
loss = svi.step(X_, y_)
if step % 1000 == 0:
print('Loss: ', loss)
auto_guide(X_, y_)
list(pyro.get_param_store().items())
predictive_svi = Predictive(model, guide=auto_guide, num_samples=500)(X_, None)
for k, v in predictive_svi.items():
print(f"{k}: {tuple(v.shape)}")
counts_df = prepare_counts_df(predictive_svi)
plt.scatter(X_, y_, c='r')
plt.ylabel('y')
plt.xlabel('x')
plt.plot(counts_df['feat'], counts_df['mean'])
plt.fill_between(counts_df['feat'], counts_df['high'], counts_df['low'], alpha=0.5);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Predict with your deployed model
Step2: It should output 6 which is the label index for the frog class.
Step3: Run server
|
<ASSISTANT_TASK:>
Python Code:
%%bash
cd cifar10
MODEL_NAME="cifar10"
VERSION_NAME="v1"
JOB_DIR="gs://dost_deeplearning_cifar10/cifar10_train_1499931245" # Change this to your own
gcloud ml-engine models create $MODEL_NAME
gcloud ml-engine versions create \
$VERSION_NAME \
--model $MODEL_NAME \
--origin $JOB_DIR/model
%%bash
cd cifar10
MODEL_NAME="cifar10"
VERSION_NAME="v1"
gcloud ml-engine predict \
--model $MODEL_NAME \
--version $VERSION_NAME \
--json-instances predict_test.json
!pip install -r emojify/requirements.txt
import os
import subprocess
import IPython
from google.datalab.utils import pick_unused_port
port = pick_unused_port()
# Config is reckoned from env vars
env = {
'PROJECT_ID': 'dost-deeplearning', # Change this to your project id
'MODEL_NAME': 'cifar10',
'PORT': str(port),
}
args = ['python', 'emojify/emojify.py']
subprocess.Popen(args, env=env)
url = '/_proxy/%d/' % port
html = 'Running emojify! Click <a href="%s" target="_blank">here</a> to access it.' % url
IPython.display.display_html(html, raw=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step32: Batches
Step34: Neural Network Training
Step36: Build the Graph
Step38: Train
Step40: Save Parameters
Step42: Checkpoint
Step45: Implement Generate Functions
Step48: Choose Word
Step50: Generate TV Script
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
len(sentences)
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# Create a set for the vocabulary
vocabulary = set()
# Add word tokens from text to the vocabulary set
for word in text:
vocabulary.add(word)
# Convert to a list to be able to access by index
vocab = list(vocabulary)
# Populate dictionary of words in the vocabulary mapped to index positions and vice versa
vocab_to_int = {}
int_to_vocab = {}
for i, word in enumerate(vocab):
vocab_to_int[word] = i
int_to_vocab[i] = word
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
# Instantiate punctuation dict
punctuation_dict = {}
# Populate the dictionary
punctuation_dict['.'] = 'Period'
punctuation_dict[','] = 'Comma'
punctuation_dict['"'] = 'Quotation_Mark'
punctuation_dict[';'] = 'Semicolon'
punctuation_dict['!'] = 'Exclamation_Mark'
punctuation_dict['?'] = 'Question_Mark'
punctuation_dict['('] = 'Left_Parenthesis'
punctuation_dict[')'] = 'Right_Parenthesis'
punctuation_dict['--'] = 'Dash'
punctuation_dict['\n'] = 'Return'
return punctuation_dict
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='target')
learning_rate = tf.placeholder(tf.float32, shape=None, name='lr')
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
num_layers=1
keep_prob = .8
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name='initial_state')
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
# Embed the words for training
embed = tf.Variable(tf.random_uniform((vocab_size, embed_dim),
-1, 1))
embedded = tf.nn.embedding_lookup(embed, input_data)
return embedded
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
#tf.reset_default_graph()
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(state, name='final_state')
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
embedding = get_embed(input_data, vocab_size, rnn_size)
output, final_state = build_rnn(cell, embedding)
logits = tf.contrib.layers.fully_connected(output, vocab_size,
activation_fn=None,
weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
biases_initializer=tf.zeros_initializer())
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
n_batches = len(int_text)//(batch_size*seq_length)
inputs = np.array(int_text[:n_batches*(batch_size*seq_length)])
#targets = np.array(int_text[1:n_batches*(batch_size*seq_length)+1])
targets = np.roll(inputs, -1)
input_batches = np.split(inputs.reshape(batch_size,-1),n_batches,1)
target_batches = np.split(targets.reshape(batch_size,-1),n_batches,1)
output = np.array(list(zip(input_batches,target_batches)))
return output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
# Number of Epochs
num_epochs = 20
# Batch Size
batch_size = 50
# RNN Size
rnn_size = 300
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 20
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 1
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
input_tensor = loaded_graph.get_tensor_by_name("input:0")
initial_state_tensor = loaded_graph.get_tensor_by_name("initial_state:0")
final_state_tensor = loaded_graph.get_tensor_by_name("final_state:0")
probs_tensor = loaded_graph.get_tensor_by_name("probs:0")
return input_tensor, initial_state_tensor, final_state_tensor, probs_tensor
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
p = np.squeeze(probabilities)
p[np.argsort(p)[:-1]] = 0
p = p / np.sum(p)
c = np.random.choice(len(int_to_vocab), 1, p=p)[0]
return int_to_vocab[c]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
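# An optional variant (not required by the project): keep the top 5 candidate
# words instead of only the single most likely one, so the generated text is less
# repetitive. It can be used as a drop-in replacement for pick_word.
def pick_word_top_k(probabilities, int_to_vocab, top_k=5):
    p = np.squeeze(probabilities)
    p[np.argsort(p)[:-top_k]] = 0  # zero out everything outside the top_k words
    p = p / np.sum(p)              # renormalize the remaining probabilities
    c = np.random.choice(len(int_to_vocab), 1, p=p)[0]
    return int_to_vocab[c]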
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Nearest-neighbors are both pretty simple and pretty powerful. But you can imagine that they could also be really slow if you have either a lot of points or want to consider a lot of neighbors as you have to compute all of the pairwise distances! You can certainly do this "brute force" computation, but the use of trees speeds things up considerably.
Step2: Note that, while I said that there is no built in function for this, there are any number of ways to accomplish this! For example, using map, np.mgrid(), transposes, etc.
|
<ASSISTANT_TASK:>
Python Code:
# Based on Ivezic, Figure 6.5
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from scipy import stats
from astroML.density_estimation import KNeighborsDensity
from astroML.plotting import hist
from sklearn.neighbors import KernelDensity
#------------------------------------------------------------
# Generate our data: a mix of several Cauchy distributions
# this is the same data used in the Bayesian Blocks figure
np.random.seed(0)
N = 10000
mu_gamma_f = [(5, 1.0, 0.1),
(7, 0.5, 0.5),
(9, 0.1, 0.1),
(12, 0.5, 0.2),
(14, 1.0, 0.1)]
true_pdf = lambda x: sum([f * stats.cauchy(mu, gamma).pdf(x) for (mu, gamma, f) in mu_gamma_f])
x = np.concatenate([stats.cauchy(mu, gamma).rvs(int(f * N)) for (mu, gamma, f) in mu_gamma_f])
np.random.shuffle(x)
x = x[x > -10]
x = x[x < 30]
#------------------------------------------------------------
# plot the results
fig = plt.figure(figsize=(10, 10))
N = 5000
k = 10
xN = x[:N]
t = np.linspace(-10, 30, 1000)
# Compute density with KDE
kde = KernelDensity(0.1, kernel='gaussian')
kde.fit(xN[:, None])
dens_kde = np.exp(kde.score_samples(t[:, None]))
# Compute density with Bayesian nearest neighbors
nbrs = KNeighborsDensity('bayesian', n_neighbors=k)
nbrs.fit(xN[:, None])
dens_nbrs = nbrs.eval(t[:, None]) / N
# plot the results
plt.plot(t, true_pdf(t), ':', color='black', zorder=3, label="Generating Distribution")
plt.plot(xN, -0.005 * np.ones(len(xN)), '|k')
plt.plot(t, dens_nbrs, '-', lw=1.5, color='gray', zorder=2, label="Nearest Neighbors (k=%i)" % k)
plt.plot(t, dens_kde, '-', color='black', zorder=3, label="Kernel Density (h=0.1)")
# label the plot
#plt.text(0.02, 0.95, "%i points" % N, ha='left', va='top', transform=ax.transAxes)
plt.ylabel('$p(x)$')
plt.legend(loc='upper right')
plt.xlim(0, 20)
plt.ylim(-0.01, 0.4001)
plt.show()
x = np.linspace(0,10,11)
y = np.linspace(0,10,11)
print(x, y)
xv,yv = np.meshgrid(x,y)
print(xv)
print(yv)
print(xv.ravel())
print(yv.ravel())
# Equivalent to flatten(), except for making a copy (or not) of the array
xystack = np.vstack([xv.ravel(),yv.ravel()])
print(xystack)
Xgrid = xystack.T
print(Xgrid)
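# The same set of grid points can be built more compactly, e.g. with np.mgrid
# (one of the alternatives mentioned above); this is just an illustrative sketch.
Xgrid_alt = np.mgrid[0:11, 0:11].reshape(2, -1).T
print(Xgrid_alt.shape)  # (121, 2): same points as Xgrid, possibly in a different row order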
# Comparison of KDE and K-Nearest Neighbors "smoothing"
# Based on Ivezic, Figure 6.4
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.colors import LogNorm
from scipy.spatial import cKDTree
from astroML.datasets import fetch_great_wall
from astroML.density_estimation import KDE, KNeighborsDensity
#------------------------------------------------------------
# Fetch the great wall data
X = fetch_great_wall()
#------------------------------------------------------------
# Create the grid on which to evaluate the results
Nx = 50
Ny = 125
xmin, xmax = (-375, -175)
ymin, ymax = (-300, 200)
#------------------------------------------------------------
# Evaluate for several models
Xgrid = np.vstack(map(np.ravel, np.meshgrid(np.linspace(xmin, xmax, Nx), np.linspace(ymin, ymax, Ny)))).T
#print Xgrid
kde = KDE(metric='gaussian', h=5)
dens_KDE = kde.fit(X).eval(Xgrid).reshape((Ny, Nx))
knn5 = KNeighborsDensity('bayesian', 5)
dens_k5 = knn5.fit(X).eval(Xgrid).reshape((Ny, Nx))
knn40 = KNeighborsDensity('bayesian', 40)
dens_k40 = knn40.fit(X).eval(Xgrid).reshape((Ny, Nx))
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(10, 5))
fig.subplots_adjust(left=0.12, right=0.95, bottom=0.2, top=0.9,
hspace=0.01, wspace=0.01)
# First plot: scatter the points
ax1 = plt.subplot(221, aspect='equal')
ax1.scatter(X[:, 1], X[:, 0], s=1, lw=0, c='k')
ax1.text(0.95, 0.9, "input", ha='right', va='top',
transform=ax1.transAxes,
bbox=dict(boxstyle='round', ec='k', fc='w'))
# Second plot: KDE
ax2 = plt.subplot(222, aspect='equal')
ax2.imshow(dens_KDE.T, origin='lower', norm=LogNorm(),
extent=(ymin, ymax, xmin, xmax), cmap=plt.cm.binary)
ax2.text(0.95, 0.9, "KDE: Gaussian $(h=5)$", ha='right', va='top',
transform=ax2.transAxes,
bbox=dict(boxstyle='round', ec='k', fc='w'))
# Third plot: KNN, k=5
ax3 = plt.subplot(223, aspect='equal')
ax3.imshow(dens_k5.T, origin='lower', norm=LogNorm(),
extent=(ymin, ymax, xmin, xmax), cmap=plt.cm.binary)
ax3.text(0.95, 0.9, "$k$-neighbors $(k=5)$", ha='right', va='top',
transform=ax3.transAxes,
bbox=dict(boxstyle='round', ec='k', fc='w'))
# Fourth plot: KNN, k=40
ax4 = plt.subplot(224, aspect='equal')
ax4.imshow(dens_k40.T, origin='lower', norm=LogNorm(),
extent=(ymin, ymax, xmin, xmax), cmap=plt.cm.binary)
ax4.text(0.95, 0.9, "$k$-neighbors $(k=40)$", ha='right', va='top',
transform=ax4.transAxes,
bbox=dict(boxstyle='round', ec='k', fc='w'))
for ax in [ax1, ax2, ax3, ax4]:
ax.set_xlim(ymin, ymax - 0.01)
ax.set_ylim(xmin, xmax)
for ax in [ax1, ax2]:
ax.xaxis.set_major_formatter(plt.NullFormatter())
for ax in [ax3, ax4]:
ax.set_xlabel('$y$ (Mpc)')
for ax in [ax2, ax4]:
ax.yaxis.set_major_formatter(plt.NullFormatter())
for ax in [ax1, ax3]:
ax.set_ylabel('$x$ (Mpc)')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Prepare the data
Step2: Transform skewed continuous features
Step3: For highly skewed feature distributions such as 'capital-gain' and 'capital-loss', it is common practice to apply a <a href="https
Step4: Normalize numerical features
Step5: Exercise: data preprocessing
Step6: Shuffle and split the data
Step7: Evaluate model performance
Step8: Question 1 - Naive predictor performance
Step9: Exercise: initial model evaluation
Step10: Improve results
Step11: Question 5 - Final model evaluation
Step12: Question 7 - Extract feature importances
Step13: Question 8 - Effect of feature selection
|
<ASSISTANT_TASK:>
Python Code:
# TODO: total number of records
n_records = len(data)
# TODO: number of individuals with income greater than $50,000
n_greater_50k = len(data[data.income.str.contains('>50K')])
# TODO: number of individuals with income at most $50,000
n_at_most_50k = len(data[data.income.str.contains('<=50K')])
# TODO: fraction of individuals with income greater than $50,000
greater_percent = (n_greater_50k / n_records) * 100
# Print the results
print ("Total number of records: {}".format(n_records))
print ("Individuals making more than $50,000: {}".format(n_greater_50k))
print ("Individuals making at most $50,000: {}".format(n_at_most_50k))
print ("Percentage of individuals making more than $50,000: {:.2f}%".format(greater_percent))
# Import libraries needed for this project
import numpy as np
import pandas as pd
from time import time
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualization code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the census dataset
data = pd.read_csv("census.csv")
# Success - display the first record
display(data.head(n=1))
# Split the data into features and the target label
income_raw = data['income']
features_raw = data.drop('income', axis = 1)
# Visualize the 'capital-gain' and 'capital-loss' features
vs.distribution(features_raw)
# Log-transform the skewed features
skewed = ['capital-gain', 'capital-loss']
features_raw[skewed] = data[skewed].apply(lambda x: np.log(x + 1))
# Visualize 'capital-gain' and 'capital-loss' after the log transformation
vs.distribution(features_raw, transformed = True)
from sklearn.preprocessing import MinMaxScaler
# Initialize a scaler, then apply it to the features
scaler = MinMaxScaler()
numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
features_raw[numerical] = scaler.fit_transform(data[numerical])
# Show an example of a record with scaling applied
display(features_raw.head(n = 1))
# TODO: One-hot encode the 'features_raw' data using pandas.get_dummies()
features = pd.get_dummies(features_raw)
# TODO: Encode the 'income_raw' data to numerical values
income = income_raw.replace(['>50K', '<=50K'], [1, 0])
# Print the number of features after one-hot encoding
encoded = list(features.columns)
print ("{} total features after one-hot encoding.".format(len(encoded)))
# Uncomment the following line to see the encoded feature names
#print(encoded)
# Import train_test_split
from sklearn.model_selection import train_test_split
# Split the 'features' and 'income' data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(features, income, test_size = 0.2, random_state = 0,
stratify = income)
# Further split 'X_train' and 'y_train' into training and validation sets
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=0,
stratify = y_train)
# Show the results of the split
print ("Training set has {} samples.".format(X_train.shape[0]))
print ("Validation set has {} samples.".format(X_val.shape[0]))
print ("Testing set has {} samples.".format(X_test.shape[0]))
# Do not use scikit-learn here; implement these calculations yourself from the formulas.
# TODO: Calculate accuracy
accuracy = np.divide(n_greater_50k, float(n_records))
# TODO: Calculate precision
precision = np.divide(n_greater_50k, float(n_records))
# TODO: Calculate recall
recall = np.divide(n_greater_50k, n_greater_50k)
# TODO: Using the formula above with beta=0.5, calculate the F-score
fscore = (1 + np.power(0.5, 2)) * np.multiply(precision, recall) / (np.power(0.5, 2) * precision + recall)
# Print the results
print ("Naive Predictor on validation data: \n \
Accuracy score: {:.4f} \n \
Precision: {:.4f} \n \
Recall: {:.4f} \n \
F-score: {:.4f}".format(accuracy, precision, recall, fscore))
# TODO: Import two evaluation metrics from sklearn - fbeta_score and accuracy_score
from sklearn.metrics import fbeta_score, accuracy_score
def train_predict(learner, sample_size, X_train, y_train, X_val, y_val):
'''
inputs:
- learner: the learning algorithm to be trained and predicted on
- sample_size: the size of samples (number) to be drawn from training set
- X_train: features training set
- y_train: income training set
- X_val: features validation set
- y_val: income validation set
'''
results = {}
# TODO: Fit the learner to the training data using slicing with 'sample_size'
start = time() # Get the start time
learner = learner.fit(X_train[:sample_size],y_train[:sample_size])
end = time() # Get the end time
# TODO: Calculate the training time
results['train_time'] = end - start
print(results['train_time'])
# TODO: Get predictions on the validation set,
# then get predictions on the first 300 training samples
start = time() # Get the start time
predictions_val = learner.predict(X_val)
predictions_train = learner.predict(X_train[:300])
end = time() # Get the end time
# TODO: Calculate the prediction time
results['pred_time'] = end - start
# TODO: Compute accuracy on the first 300 training samples
results['acc_train'] = accuracy_score(y_train[:300],predictions_train)
# TODO: Compute accuracy on the validation set
results['acc_test'] = accuracy_score( y_val, predictions_val)
# TODO: Compute F-score on the first 300 training samples
results['f_train'] = fbeta_score(y_train[:300], predictions_train, 0.5)
# TODO: Compute F-score on the validation set
results['f_test'] = fbeta_score(y_val,predictions_val,0.5)
# Success
print ("{} trained on {} samples.".format(learner.__class__.__name__, sample_size))
# Return the results
return results
# TODO: Import three supervised learning models from sklearn
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier
# TODO: Initialize the three models
clf_A = DecisionTreeClassifier()
clf_B = GaussianNB()
clf_C = AdaBoostClassifier()
# TODO: Calculate the number of samples for 1%, 10%, and 100% of the training data
samples_1 = int(len(X_train)*0.01)
samples_10 = int(len(X_train)*0.1)
samples_100 = int(len(X_train))
# Collect results from the learners
results = {}
for clf in [clf_A, clf_B, clf_C]:
clf_name = clf.__class__.__name__
results[clf_name] = {}
for i, samples in enumerate([samples_1, samples_10, samples_100]):
results[clf_name][i] = train_predict(clf, samples, X_train, y_train, X_val, y_val)
# Visualize the evaluation results for the three chosen models
vs.evaluate(results, accuracy, fscore)
# TODO: Import 'GridSearchCV', 'make_scorer', and any other necessary libraries
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import fbeta_score,make_scorer
# TODO: Initialize the classifier
clf = AdaBoostClassifier(random_state=0)
# TODO: Create the list of parameters you wish to tune
parameters = {'n_estimators': [50, 100, 200]}
# TODO: Create an fbeta_score scoring object
scorer = make_scorer(fbeta_score, beta=0.5)
# TODO: Perform grid search on the classifier using 'scorer' as the scoring method
grid_obj = GridSearchCV(clf, parameters,scorer)
# TODO: Fit the grid search object to the training data and find the optimal parameters
grid_obj = grid_obj.fit(X_train, y_train)
# Get the best estimator
best_clf = grid_obj.best_estimator_
# Make predictions with the un-optimized model
predictions = (clf.fit(X_train, y_train)).predict(X_val)
best_predictions = best_clf.predict(X_val)
# Report the tuned model
print ("best_clf\n------")
print (best_clf)
# Report scores before and after tuning
print ("\nUnoptimized model\n------")
print ("Accuracy score on validation data: {:.4f}".format(accuracy_score(y_val, predictions)))
print ("F-score on validation data: {:.4f}".format(fbeta_score(y_val, predictions, beta = 0.5)))
print ("\nOptimized Model\n------")
print ("Final accuracy score on the validation data: {:.4f}".format(accuracy_score(y_val, best_predictions)))
print ("Final F-score on the validation data: {:.4f}".format(fbeta_score(y_val, best_predictions, beta = 0.5)))
# TODO: Import a supervised learning model that has 'feature_importances_'
# TODO: Train the supervised model on the training set
model = None
# TODO: Extract the feature importances
importances = None
# Plot
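# One possible completion (illustrative, not the only valid choice): AdaBoost was the
# tuned model above and exposes feature_importances_ after fitting.
model = AdaBoostClassifier(random_state=0).fit(X_train, y_train)
importances = model.feature_importances_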
vs.feature_plot(importances, X_train, y_train)
# Import functionality for cloning a model
from sklearn.base import clone
# Reduce the feature space
X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]]
X_val_reduced = X_val[X_val.columns.values[(np.argsort(importances)[::-1])[:5]]]
# Train the "best" model found from the earlier grid search
clf_on_reduced = (clone(best_clf)).fit(X_train_reduced, y_train)
# Make new predictions
reduced_predictions = clf_on_reduced.predict(X_val_reduced)
# Report the final model's scores for each version of the data
print ("Final Model trained on full data\n------")
print ("Accuracy on validation data: {:.4f}".format(accuracy_score(y_val, best_predictions)))
print ("F-score on validation data: {:.4f}".format(fbeta_score(y_val, best_predictions, beta = 0.5)))
print ("\nFinal Model trained on reduced data\n------")
print ("Accuracy on validation data: {:.4f}".format(accuracy_score(y_val, reduced_predictions)))
print ("F-score on validation data: {:.4f}".format(fbeta_score(y_val, reduced_predictions, beta = 0.5)))
#TODO test your model on testing data and report accuracy and F score
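# Possible completion (illustrative): evaluate the tuned model on the held-out test set.
test_predictions = best_clf.predict(X_test)
print("Test accuracy: {:.4f}".format(accuracy_score(y_test, test_predictions)))
print("Test F-score: {:.4f}".format(fbeta_score(y_test, test_predictions, beta=0.5)))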
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Q
Step2: Q
Step3: Q
Step4: Example
Step5: Q
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
from numpy import sqrt,pi,cos,sin,arange,random
from qutip import *
H = Qobj([[1],[0]])
V = Qobj([[0],[1]])
P45 = Qobj([[1/sqrt(2)],[1/sqrt(2)]])
M45 = Qobj([[1/sqrt(2)],[-1/sqrt(2)]])
R = Qobj([[1/sqrt(2)],[-1j/sqrt(2)]])
L = Qobj([[1/sqrt(2)],[1j/sqrt(2)]])
def sim_transform(o_basis1, o_basis2, n_basis1, n_basis2):
a = n_basis1.dag()*o_basis1
b = n_basis1.dag()*o_basis2
c = n_basis2.dag()*o_basis1
d = n_basis2.dag()*o_basis2
return Qobj([[a.data[0,0],b.data[0,0]],[c.data[0,0],d.data[0,0]]])
Phv = H*H.dag() - V*V.dag()
Phv
psi = 1/sqrt(5)*H + 2/sqrt(5)*V
psi.dag()*Phv*psi
psi.dag()*Phv*Phv*psi
1- (-0.6)**2
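# Cross-check (illustrative): QuTiP's expect/variance helpers should reproduce the
# bra-ket products above, i.e. <Phv> = -0.6 and Var(Phv) = 0.64.
print(expect(Phv, psi), variance(Phv, psi))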
data = random.choice([1, -1],size=1000000,p=[0.2,0.8])
data.mean()
data.var()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Required input data files
Step2: The scaffold table
Step3: Selecting scaffolds
Step4: Subsetting scaffold windows
Step5: Filtering missing data with mincov
Step6: Filtering missing data with imap and minmap
Step7: Subsample taxa with imap
Step8: Concatenate multiple scaffolds together
Step9: Consensus reduction with imap
Step10: Write selected window to a file
Step11: Accessing the output files
|
<ASSISTANT_TASK:>
Python Code:
# conda install ipyrad -c conda-forge -c bioconda
import ipyrad.analysis as ipa
ipa.__version__
# path to an HDF5 formatted seqs file
SEQSFILE = "/tmp/oaks.seqs.hdf5"
# download example seqs file if not already present (~500Mb, takes ~5 minutes)
URL = "https://www.dropbox.com/s/c1u89nwuuv8e6ie/virentes_ref.seqs.hdf5?raw=1"
ipa.download(URL, path=SEQSFILE);
# first load the data file with no other arguments to see scaffold table
ext = ipa.window_extracter(SEQSFILE)
# the scaffold table shows scaffold names and lens in order of the ref. genome
ext.scaffold_table.head(15)
# select a scaffold idx, start, and end positions
ext = ipa.window_extracter(
data=SEQSFILE,
scaffold_idxs=0,
)
# show stats of the window
ext.stats
# select a scaffold idx, start, and end positions
ext = ipa.window_extracter(
data=SEQSFILE,
scaffold_idxs=0,
start=0,
end=10000,
)
# show stats of the window
ext.stats
# select a scaffold idx, start, and end positions
ext = ipa.window_extracter(
data=SEQSFILE,
scaffold_idxs=0,
start=500000,
end=800000,
)
# show stats of the window
ext.stats
# select a scaffold idx, start, and end positions
ext = ipa.window_extracter(
data=SEQSFILE,
scaffold_idxs=0,
start=500000,
end=800000,
mincov=0.8,
rmincov=0.5,
)
# show stats of the window
ext.stats
# assign samples to groups/taxa
imap = {
"reference": ["reference"],
"virg": ["TXWV2", "LALC2", "SCCU3", "FLSF33", "FLBA140"],
"mini": ["FLSF47", "FLMO62", "FLSA185", "FLCK216"],
"gemi": ["FLCK18", "FLSF54", "FLWO6", "FLAB109"],
"bran": ["BJSL25", "BJSB3", "BJVL19"],
"fusi": ["MXED8", "MXGT4", "TXGR3", "TXMD3"],
"sagr": ["CUVN10", "CUCA4", "CUSV6"],
"oleo": ["CRL0030", "HNDA09", "BZBB1", "MXSA3017"],
}
# set a simple minmap requiring 1 sample from each group
minmap = {name: 0.75 for name in imap}
# select a scaffold idx, start, and end positions
ext = ipa.window_extracter(
data=SEQSFILE,
scaffold_idxs=0,
start=500000,
end=800000,
mincov=0.8,
imap=imap,
minmap=minmap,
)
# show stats of the window
ext.stats
# select a scaffold idx, start, and end positions
ext = ipa.window_extracter(
data=SEQSFILE,
scaffold_idxs=2,
mincov=0.8,
imap={
"include": [
"TXWV2", "LALC2", "SCCU3", "FLSF33", "FLBA140",
"FLSF47", "FLMO62", "FLSA185", "FLCK216",
"FLCK18", "FLSF54", "FLWO6", "FLAB109",
]
},
)
# show stats of the window
ext.stats
# select a scaffold idx, start, and end positions
ext = ipa.window_extracter(
data=SEQSFILE,
scaffold_idxs=[0, 1, 2, 3, 4, 5],
mincov=0.5,
)
# show stats of the window
ext.stats
# select a scaffold idx, start, and end positions
ext = ipa.window_extracter(
data=SEQSFILE,
scaffold_idxs=0,
start=200000,
end=5000000,
mincov=0.8,
imap=imap,
minmap=minmap,
consensus_reduce=True, # <--- uses IMAP info to make consensus calls
)
# show stats of the window
ext.stats
ext.run(force=True)
# path to the phylip file output
ext.outfile
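# Quick look at the written alignment (illustrative): the first line of a phylip
# file holds the number of taxa and the alignment length.
with open(ext.outfile) as infile:
    print(infile.readline())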
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: From the figures above, it is difficult to determine an optimal number of clusters. The silhouette score clearly shows that we need more than 5 clusters. 6 looks like a good number, but one of the clusters shows odd behavior when plotting generation change vs net demand change (looks like 2 solid regression lines in the figure).
Step2: PREVIOUS WORK BELOW
Step3: missing '1-hr ramp rate'
Step4: Filter out non-fossil plants
Step5: Start classifying only with all nan values dropped
Step6: The code below begins by scaling the X-vector data to mean 0 and a standard variance. It then loops through a wide range of k values for the number of clusters (3 to 14), calculating the labels for each plant, the Calinski Harabaz score, and the Silhouette score for each value of k.
Step7: The two score values are plotted below. They don't agree exactly on the optimal number of centers, but it looks like 6 or 8 is probably best.
Step8: The integer appended to each of the cluster_id column labels is equal to the number of clusters.
Step9: Some basic information about each of the clusters
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import os
import sklearn as sk
from cluster import Clusters
import os
filename = 'Cluster_Data_2.csv'
path = '../Clean Data'
fullpath = os.path.join(path, filename)
cluster = Clusters(fullpath)
cluster.make_clusters(n_clusters=range(4,26))
cluster.evaluate_clusters()
cluster_labels = cluster.export_cluster_id(k=16)
cluster_labels[:5]
labeled_plants = cluster.label_and_export(k=16)
labeled_plants.to_clipboard()
export_df = fossil_with_ramp.loc[:,['year', 'plant_id', 'cluster_id_6']]
export_df.to_csv('Cluster labels.csv', index = False)
# cols = ['year', 'fuel type', '1-hr ramp rate', '3-hr ramp rate',
# 'efficiency', 'efficiency std', 'CF', 'CF std']
# #Add index of power plant IDs
# df = pd.DataFrame(columns=cols)
filename = 'Cluster_Data.csv'
path = '../Clean Data'
fullpath = os.path.join(path, filename)
cluster_df = pd.read_csv(fullpath)
cluster_df[cluster_df.plant_id==127]
cluster_df[cluster_df.plant_id==3466]
cluster_df[cluster_df.plant_id==3584]
cluster_df['fuel_type'].unique()
fossil_codes = ['SUB', 'LIG', 'NG', 'DFO', 'PC']
fossil_df = cluster_df.loc[cluster_df['fuel_type'].isin(fossil_codes)]
fossil_df.describe()
# Unique plants
len(fossil_df.dropna().loc[:,'plant_id'].unique())
sns.distplot(fossil_df['capacity'].dropna())
fossil_with_ramp = fossil_df.dropna()
sns.distplot(fossil_with_ramp['capacity'])
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples, silhouette_score, calinski_harabaz_score
from sklearn import preprocessing
fossil_with_ramp.columns
sns.pairplot(fossil_with_ramp, hue='fuel_type', vars=[u'capacity', u'capacity_factor', u'efficiency',
u'ramp_rate'])
cluster_data = pd.DataFrame(index=range(3,15), columns=['n_clusters', 'score', 'silhouette'])
cluster_labels = {}
X = fossil_with_ramp[['capacity', 'capacity_factor', 'efficiency', 'ramp_rate']]
X_scaled = preprocessing.StandardScaler().fit_transform(X)
for idx, n_clusters in enumerate(range(3,15)):
cluster_data.loc[n_clusters, 'n_clusters'] = n_clusters
clusterer = KMeans(n_clusters, random_state=42)
# fit_clusters = clusterer.fit(X)
# cluster_labels = clusterer.fit_predict(X)
cluster_labels[n_clusters] = clusterer.fit_predict(X_scaled)
# http://scikit-learn.org/stable/modules/clustering.html#calinski-harabaz-index
# The score is higher when clusters are dense and well separated
# score[idx] = metrics.calinski_harabaz_score(X, cluster_labels)
cluster_data.loc[n_clusters, 'score'] = calinski_harabaz_score(X_scaled, cluster_labels[n_clusters])
# silhouette[idx] = silhouette_score(X, cluster_labels)
cluster_data.loc[n_clusters, 'silhouette'] = silhouette_score(X_scaled, cluster_labels[n_clusters])
# print 'For ', n_clusters, ' clusters, the average silhouette score is :', silhouette[idx], \
# ' and the score is :', score[idx]
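# Illustrative helper (assumption: higher silhouette is better): report the k with
# the largest silhouette score from the table built above.
best_k = cluster_data['silhouette'].astype(float).idxmax()
print("Best k by silhouette score:", best_k)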
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8,3))
cluster_data.plot(y='score', ax=ax1)
# ax1.plot(range(3,15), cluster_data['score'])
ax1.set_title('Calinski Harabaz score\nHigher is better')
cluster_data.plot(y='silhouette', ax=ax2)
# ax2.plot(range(3,15), silhouette)
ax2.set_title('Silhouette score\nHigher is better')
for n_clusters in cluster_labels.keys():
fossil_with_ramp.loc[:,'cluster_id_{}'.format(n_clusters)] = cluster_labels[n_clusters]
fossil_with_ramp.head()
sns.pairplot(fossil_with_ramp, hue='cluster_id_6', vars=[u'capacity', u'capacity_factor', u'efficiency',
u'ramp_rate'])
drop_columns = ['cluster_id_{}'.format(i) for i in [3,4,5,7,8,9,10,11,12,13,14]]
grouped = fossil_with_ramp.drop(drop_columns, axis=1).groupby(['cluster_id_6', 'fuel_type'])
grouped.mean()
grouped.count()
grouped.std()
sns.countplot('cluster_id_6', hue='fuel_type', data=fossil_with_ramp)
plt.title('Count of plants in each cluster')
sns.barplot('cluster_id_6', 'capacity', data=fossil_with_ramp, hue='fuel_type',
estimator=sum)
plt.title('Total Capacity of plants in each cluster')
# data = fossil_with_ramp.drop(drop_columns, axis=1)
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2,2, figsize=(10,8))
sns.barplot('cluster_id_6', 'capacity', data=fossil_with_ramp, hue='fuel_type', ax=ax1)
ax1.set_title('Mean Capacity')
sns.barplot('cluster_id_6', 'capacity_factor', data=fossil_with_ramp, hue='fuel_type', ax=ax2)
ax2.set_title('Mean Capacity Factor')
sns.barplot('cluster_id_6', 'efficiency', data=fossil_with_ramp, hue='fuel_type', ax=ax3)
ax3.set_title('Mean Efficiency')
sns.barplot('cluster_id_6', 'ramp_rate', data=fossil_with_ramp, hue='fuel_type', ax=ax4)
ax4.set_title('Mean Ramp Rate')
# sns.barplot('Climate', 'HDD65', data=house, ax=ax2)
# # Shrink the point sizes (scale), change the estimator from mean to median
# sns.pointplot('Climate', 'HDD65', data=house, scale=0.7, estimator=np.median, ax=ax3)
# # Adjust the bandwidth (smoothing)
# sns.violinplot('Climate', 'HDD65', data=house, ax=ax4, bw=0.4)
plt.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Joe F. found that the gradient features do still exist in 2016 biases, by visually inspecting all of the bias images. They are, however, at a very low frequency (few percent) and a much lower amplitude (~20 ADU for IM9)
Step2: Bias roll-off
|
<ASSISTANT_TASK:>
Python Code:
for ccdNum in [2,3]:
exts = 4*(ccdNum-1) + arange(1,5)
figure(figsize=(10,6))
for pnum,ext in enumerate(exts,start=1):
subplot(2,2,pnum)
for t,lbl,clr in zip([t15,t16],#,s15[b15],s16[b16],s15[o15],s16[o16]],
['2015','2016'],#,'2015b','2016b','2015o','2016o'],
'gbcrmk'):
hist(t['sliceRangeAdu'][:,ext-1].compressed(),30,(0,60),normed=True,
histtype='step',label=lbl,color=clr)
title('HDU[%d]'%ext)
legend(loc='upper right',ncol=3,fontsize=9)
figtext(0.5,0.01,'Range of pixels in bias center slice [ADU]',ha='center')
def check_ext(t,ext,badval=None):
j = ext-1
print "MIN: ",t['sliceRangeAdu'][:,j].min()
print "MAX: ",t['sliceRangeAdu'][:,j].max()
m = sigma_clip(t['sliceRangeAdu'][:,j]).mean()
s = sigma_clip(t['sliceRangeAdu'][:,j]).std()
print "MEAN: ",m
print "STD: ",s
nbias = (~t['sliceRangeAdu'][:,j].mask).sum()
if badval is None: badval = m+5*s
bad = t['sliceRangeAdu'][:,j] > badval
print "BAD: %d/%d = %.2f%%" % (sum(bad),nbias,100*float(sum(bad))/nbias)
return where(bad.filled(False))[0]
# IM9
print '---- 2015 ----'
_ = check_ext(t15,9,badval=10)
#_ = check_ext(s15[b15],9,badval=10)
#_ = check_ext(s15[o15],9,badval=10)
print '---- 2016 ----'
ii = check_ext(t16,9)
#_ = check_ext(s16[b16],9)
#_ = check_ext(s16[o16],9)
# the offending biases
print t16['fileName'][ii]
# IM5
print '---- 2015 ----'
ii = check_ext(t15,8,badval=10)
print '---- 2016 ----'
ii = check_ext(t16,8)
# the offending biases
print t16['fileName'][ii]
def count_rolloffs(t):
nbias = (~t['dropFlag'].mask).sum(axis=0)
froll = t['dropFlag'].sum(axis=0)/nbias.astype(float)
return froll
f15 = count_rolloffs(t15)
f16 = count_rolloffs(t16)
f15b = count_rolloffs(s15[b15])
f16b = count_rolloffs(s16[b16])
f15o = count_rolloffs(s15[o15])
f16o = count_rolloffs(s16[o16])
#froll = Table([arange(1,17),f15,f16,f15b,f16b,f15o,f16o],
# names=['hduNum','freq2015','freq2016','b15','b16','o15','o16'])
froll = Table([arange(1,17),f15,f16,f15o,f16o],
names=['hduNum','freq2015(bias)','freq2016(bias)',
'freq2015(obj)','freq2016(obj)'])
for col in froll.itercols():
if col.name != 'hduNum':
col.format='%.4f'
print(froll)
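# Illustrative follow-up (not in the original analysis): the change in roll-off
# frequency per HDU between the two years, added to the same astropy Table.
froll['delta(bias)'] = froll['freq2016(bias)'] - froll['freq2015(bias)']
froll['delta(bias)'].format = '%.4f'
print(froll['hduNum', 'delta(bias)'])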
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Restart the Kernel
Step2: Before you begin
Step3: Otherwise, set your project id here.
Step4: Authenticate your GCP account
Step5: Create a Cloud Storage bucket
Step6: Only if your bucket doesn't already exist
Step7: Finally, validate access to your Cloud Storage bucket by examining its contents
Step8: Set up variables
Step9: Notes
Step10: Keep examples quick. Use small datasets, or small slices of datasets. You don't need to train to convergence, train until it's obvious it's making progress.
Step11: Cleaning up
|
<ASSISTANT_TASK:>
Python Code:
%pip install -U missing_or_updating_package --user
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
# Get your GCP project id from gcloud
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID=shell_output[0]
print("Project ID: ", PROJECT_ID)
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" #@param {type:"string"}
import sys, os
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on AI Platform, then don't execute this code
if not os.path.exists('/opt/deeplearning/metadata/env_version'):
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this tutorial in a notebook locally, replace the string
# below with the path to your service account key and run this cell to
# authenticate your Google Cloud account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json
# Log in to your account on Google Cloud
! gcloud auth login
BUCKET_NAME = "[your-bucket-name]" #@param {type:"string"}
REGION = "us-central1" #@param {type:"string"}
! gsutil mb -l $REGION gs://$BUCKET_NAME
! gsutil ls -al gs://$BUCKET_NAME
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
import numpy as np
import os, sys
#Build the model
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation='relu', input_shape=(None, 5)),
tf.keras.layers.Dense(3)
])
# Run the model on a single batch of data, and inspect the output.
result = model(tf.constant(np.random.randn(10,5), dtype = tf.float32)).numpy()
print("min:", result.min())
print("max:", result.max())
print("mean:", result.mean())
print("shape:", result.shape)
# Compile the model for training
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.categorical_crossentropy)
# Clone repo
! git clone https://github.com/GoogleCloudPlatform/professional-services.git
% cd professional-services/examples/cloudml-energy-price-forecasting/
! sed -i 's/energyforecast\/data/ai-platform-data\/energy_data/g' trainer/task.py
# Delete model version resource
! gcloud ai-platform versions delete $MODEL_VERSION --quiet --model $MODEL_NAME
# Delete model resource
! gcloud ai-platform models delete $MODEL_NAME --quiet
# Delete Cloud Storage objects that were created
! gsutil -m rm -r $JOB_DIR
# If training job is still running, cancel it
! gcloud ai-platform jobs cancel $JOB_NAME --quiet
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can pass in a second argument to array that gives the numeric type. There are a number of types listed here that your matrix can be. Some of these are aliased to single character codes. The most common ones are 'd' (double precision floating point number), 'D' (double precision complex number), and 'i' (int32). Thus,
Step2: To build matrices, you can either use the array command with lists of lists
Step3: You can also form empty (zero) matrices of arbitrary shape (including vectors, which Numpy treats as vectors with one row), using the zeros command
Step4: The first argument is a tuple containing the shape of the matrix, and the second is the data type argument, which follows the same conventions as in the array command. Thus, you can make row vectors
Step5: or column vectors
Step6: There's also an identity command that behaves as you'd expect
Step7: as well as a ones command.
Step8: If you provide a third argument, it takes that as the number of points in the space. If you don't provide the argument, it gives a length 50 linear space.
Step9: linspace is an easy way to make coordinates for plotting. Functions in the numpy library (all of which are imported into IPython notebook) can act on an entire vector (or even a matrix) of points at once. Thus,
Step10: In conjunction with matplotlib, this is a nice way to plot things
Step11: Matrix operations
Step12: as well as when you add two matrices together. (However, the matrices have to be the same shape.)
Step13: Something that confuses Matlab users is that the times (*) operator give element-wise multiplication rather than matrix multiplication
Step14: To get matrix multiplication, you need the dot command
Step15: dot can also do dot products (duh!)
Step16: as well as matrix-vector products.
Step17: There's also a diag() function that takes a list or a vector and puts it along the diagonal of a square matrix.
Step18: We'll find this useful later on.
Step19: There are a number of routines to compute eigenvalues and eigenvectors
Step20: Example
Step21: Let's see whether this works for our sin example from above
Step22: Pretty close!
Step23: We've made a couple of hacks here to get the orbitals the way we want them. First, I inserted a -1 factor before the wave functions, to fix the phase of the lowest state. The phase (sign) of a quantum wave function doesn't hold any information, only the square of the wave function does, so this doesn't really change anything.
Step24: Let's compare the first function to our solution.
Step25: The agreement is almost exact.
Step26: Other than phase errors (which I've corrected with a little hack
Step28: As well as Jacobi, Laguerre, Hermite polynomials, Hypergeometric functions, and many others. There's a full listing at the Scipy Special Functions Page.
Step29: There's a section below on parsing CSV data. We'll steal the parser from that. For an explanation, skip ahead to that section. Otherwise, just assume that this is a way to parse that text into a numpy array that we can plot and do other analyses with.
Step30: Since we expect the data to have an exponential decay, we can plot it using a semi-log plot.
Step31: For a pure exponential decay like this, we can fit the log of the data to a straight line. The above plot suggests this is a good approximation. Given a function
Step32: Let's see whether this curve fits the data.
Step34: If we have more complicated functions, we may not be able to get away with fitting to a simple polynomial. Consider the following data
Step35: This data looks more Gaussian than exponential. If we wanted to, we could use polyfit for this as well, but let's use the curve_fit function from Scipy, which can fit to arbitrary functions. You can learn more using help(curve_fit).
Step36: Now fit to it using curve_fit
Step37: The curve_fit routine we just used is built on top of a very good general minimization capability in Scipy. You can learn more at the scipy documentation pages.
Step38: random() uses the Mersenne Twister algorithm, which is a highly regarded pseudorandom number generator. There are also functions to generate random integers, to randomly shuffle a list, and functions to pick random numbers from a particular distribution, like the normal distribution
Step39: It is generally more efficient to generate a list of random numbers all at once, particularly if you're drawing from a non-uniform distribution. Numpy has functions to generate vectors and matrices of particular types of random distributions.
Step40: One of the first programs I ever wrote was a program to compute $\pi$ by taking random numbers as x and y coordinates, and counting how many of them were in the unit circle. For example
Step41: The idea behind the program is that the ratio of the area of the unit circle to the square that inscribes it is $\pi/4$, so by counting the fraction of the random points in the square that are inside the circle, we get increasingly good estimates to $\pi$.
Step42: If you're interested a great method, check out Ramanujan's method. This converges so fast you really need arbitrary precision math to display enough decimal places. You can do this with the Python decimal module, if you're interested.
Step43: Scipy has a numerical integration routine quad (since sometimes numerical integration is called quadrature), that we can use for this
Step44: There are also 2d and 3d numerical integrators in Scipy. See the docs for more information.
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import numpy as np
# Import pylab to provide scientific Python libraries (NumPy, SciPy, Matplotlib)
%pylab --no-import-all
#import pylab as pl
# import the Image display module
from IPython.display import Image
import math
np.array([1,2,3,4,5,6])
np.array([1,2,3,4,5,6],'d')
np.array([1,2,3,4,5,6],'D')
np.array([1,2,3,4,5,6],'i')
np.array([[0,1],[1,0]],'d')
np.zeros((3,3),'d')
np.zeros(3,'d')
np.zeros((1,3),'d')
np.zeros((3,1),'d')
np.identity(4,'d')
np.linspace(0,1)
np.linspace(0,1,11)
x = np.linspace(0,2*np.pi)
np.sin(x)
plot(x,np.sin(x))
0.125*identity(3,'d')
identity(2,'d') + array([[1,1],[1,2]])
identity(2)*ones((2,2))
dot(identity(2),ones((2,2)))
v = array([3,4],'d')
sqrt(dot(v,v))
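# dot() also handles matrix-vector products, e.g. (illustrative):
dot(array([[1,2],[3,4]]), array([1,1],'d'))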
m = array([[1,2],[3,4]])
m.T
diag([1,2,3,4,5])
A = array([[1,1,1],[0,2,5],[2,5,-1]])
b = array([6,-4,27])
solve(A,b)
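# Sanity check (illustrative): multiplying A by the solution should give back b.
dot(A, solve(A,b))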
A = array([[13,-4],[-4,7]],'d')
eigvalsh(A)
eigh(A)
def nderiv(y,x):
"Finite difference derivative of the function f"
n = len(y)
d = zeros(n,'d') # assume double
# Use centered differences for the interior points, one-sided differences for the ends
for i in range(1,n-1):
d[i] = (y[i+1]-y[i-1])/(x[i+1]-x[i-1])
d[0] = (y[1]-y[0])/(x[1]-x[0])
d[n-1] = (y[n-1]-y[n-2])/(x[n-1]-x[n-2])
return d
x = linspace(0,2*pi)
dsin = nderiv(sin(x),x)
plot(x,dsin,label='numerical')
plot(x,cos(x),label='analytical')
title("Comparison of numerical and analytical derivatives of sin(x)")
legend()
def Laplacian(x):
h = x[1]-x[0] # assume uniformly spaced points
n = len(x)
M = -2*identity(n,'d')
for i in range(1,n):
M[i,i-1] = M[i-1,i] = 1
return M/h**2
x = linspace(-3,3)
m = 1.0
ohm = 1.0
T = (-0.5/m)*Laplacian(x)
V = 0.5*(ohm**2)*(x**2)
H = T + diag(V)
E,U = eigh(H)
h = x[1]-x[0]
# Plot the Harmonic potential
plot(x,V,color='k')
for i in range(4):
# For each of the first few solutions, plot the energy level:
axhline(y=E[i],color='k',ls=":")
# as well as the eigenfunction, displaced by the energy level so they don't
# all pile up on each other:
plot(x,-U[:,i]/sqrt(h)+E[i])
title("Eigenfunctions of the Quantum Harmonic Oscillator")
xlabel("Displacement (bohr)")
ylabel("Energy (hartree)")
from numpy.polynomial.hermite import Hermite
def ho_evec(x,n,m,ohm):
vec = [0]*9
vec[n] = 1
Hn = Hermite(vec)
return (1/sqrt(2**n*math.factorial(n)))*pow(m*ohm/pi,0.25)*exp(-0.5*m*ohm*x**2)*Hn(x*sqrt(m*ohm))
plot(x,ho_evec(x,0,1,1),label="Analytic")
plot(x,-U[:,0]/sqrt(h),label="Numeric")
xlabel('x (bohr)')
ylabel(r'$\psi(x)$')
title("Comparison of numeric and analytic solutions to the Harmonic Oscillator")
legend()
phase_correction = [-1,1,1,-1,-1,1]
for i in range(6):
subplot(2,3,i+1)
plot(x,ho_evec(x,i,1,1),label="Analytic")
plot(x,phase_correction[i]*U[:,i]/sqrt(h),label="Numeric")
from scipy.special import airy,jn,eval_chebyt,eval_legendre
subplot(2,2,1)
x = linspace(-1,1)
Ai,Aip,Bi,Bip = airy(x)
plot(x,Ai)
plot(x,Aip)
plot(x,Bi)
plot(x,Bip)
title("Airy functions")
subplot(2,2,2)
x = linspace(0,10)
for i in range(4):
plot(x,jn(i,x))
title("Bessel functions")
subplot(2,2,3)
x = linspace(-1,1)
for i in range(6):
plot(x,eval_chebyt(i,x))
title("Chebyshev polynomials of the first kind")
subplot(2,2,4)
x = linspace(-1,1)
for i in range(6):
plot(x,eval_legendre(i,x))
title("Legendre polynomials")
raw_data = """\
3.1905781584582433,0.028208609537968457
4.346895074946466,0.007160804747670053
5.374732334047101,0.0046962988461934805
8.201284796573875,0.0004614473299618756
10.899357601713055,0.00005038370219939726
16.295503211991434,4.377451812785309e-7
21.82012847965739,3.0799922117601088e-9
32.48394004282656,1.524776208284536e-13
43.53319057815846,5.5012073588707224e-18
"""
data = []
for line in raw_data.splitlines():
words = line.split(',')
data.append(list(map(float,words)))
data = array(data)
title("Raw Data")
xlabel("Distance")
plot(data[:,0],data[:,1],'bo')
title("Raw Data")
xlabel("Distance")
semilogy(data[:,0],data[:,1],'bo')
params = polyfit(data[:,0],log(data[:,1]),1)
a = params[0]
A = exp(params[1])
x = linspace(1,45)
title("Raw Data")
xlabel("Distance")
semilogy(data[:,0],data[:,1],'bo')
semilogy(x,A*exp(a*x),'b-')
gauss_data = """\
-0.9902286902286903,1.4065274110372852e-19
-0.7566104566104566,2.2504438576596563e-18
-0.5117810117810118,1.9459459459459454
-0.31887271887271884,10.621621621621626
-0.250997150997151,15.891891891891893
-0.1463309463309464,23.756756756756754
-0.07267267267267263,28.135135135135133
-0.04426734426734419,29.02702702702703
-0.0015939015939017698,29.675675675675677
0.04689304689304685,29.10810810810811
0.0840994840994842,27.324324324324326
0.1700546700546699,22.216216216216214
0.370878570878571,7.540540540540545
0.5338338338338338,1.621621621621618
0.722014322014322,0.08108108108108068
0.9926849926849926,-0.08108108108108646
"""
data = []
for line in gauss_data.splitlines():
words = line.split(',')
data.append(list(map(float,words)))
data = array(data)
plot(data[:,0],data[:,1],'bo')
def gauss(x,A,a): return A*exp(a*x**2)
from scipy.optimize import curve_fit
params,conv = curve_fit(gauss,data[:,0],data[:,1])
x = linspace(-1,1)
plot(data[:,0],data[:,1],'bo')
A,a = params
plot(x,gauss(x,A,a),'b-')
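# curve_fit sits on top of scipy.optimize's general minimizers; an equivalent,
# hand-rolled least-squares fit (illustrative sketch, starting guess assumed) looks like:
from scipy.optimize import minimize
res = minimize(lambda p: np.sum((gauss(data[:,0], *p) - data[:,1])**2), x0=[25, -10])
print(res.x)   # should be close to the curve_fit parameters A, a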
from random import random
rands = []
for i in range(100):
rands.append(random())
plot(rands)
from random import gauss
grands = []
for i in range(100):
grands.append(gauss(0,1))
plot(grands)
plot(rand(100))
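# Vectorized draws from other distributions work the same way, e.g. a length-100
# sample from a standard normal (illustrative):
plot(np.random.normal(0, 1, 100))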
npts = 5000
xs = 2*rand(npts)-1
ys = 2*rand(npts)-1
r = xs**2+ys**2
ninside = (r<1).sum()
figsize(6,6) # make the figure square
title("Approximation to pi = %f" % (4*ninside/float(npts)))
plot(xs[r<1],ys[r<1],'b.')
plot(xs[r>1],ys[r>1],'r.')
figsize(8,6) # change the figsize back to 4x3 for the rest of the notebook
n = 100
total = 0
for k in range(n):
total += pow(-1,k)/(2*k+1.0)
print(4*total)
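# Sketch of Ramanujan's series mentioned above (illustrative; plain floats here --
# use the decimal module for arbitrary precision). Each term adds roughly eight
# correct digits of pi.
from math import factorial
terms = [factorial(4*k)*(1103.0 + 26390*k)/(factorial(k)**4 * 396**(4*k)) for k in range(3)]
print(9801/(2*sqrt(2)*sum(terms)))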
from numpy import sqrt
def f(x): return exp(-x)
x = linspace(0,10)
plot(x,exp(-x))
from scipy.integrate import quad
quad(f,0,inf)
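# Example of a 2d integrator mentioned above (illustrative): integrate exp(-x-y)
# over the unit square; the exact answer is (1 - 1/e)**2.
from scipy.integrate import dblquad
dblquad(lambda y, x: exp(-x-y), 0, 1, lambda x: 0, lambda x: 1)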
from scipy.fftpack import fft,fftfreq
npts = 4000
nplot = npts//10
t = linspace(0,120,npts)
def acc(t): return 10*sin(2*pi*2.0*t) + 5*sin(2*pi*8.0*t) + 2*rand(npts)
signal = acc(t)
FFT = abs(fft(signal))
freqs = fftfreq(npts, t[1]-t[0])
subplot(211)
plot(t[:nplot], signal[:nplot])
subplot(212)
plot(freqs,20*log10(FFT),',')
show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Defining models and using the algorithm presented in Ref [2]
Step2: A simple model
Step3: Parametrizing the transition rates
Step4: Calculating the first and second fundamental cumulants
Step5: Observables
Step6: Exploring the parameter dependence
Step7: Plotting
|
<ASSISTANT_TASK:>
Python Code:
# inline plotting/interaction
%pylab inline
# replace the line above with the line below for command line scripts:
# from pylab import *
from sympy import * # symbolic python
init_printing() # pretty printing
import numpy as np # numeric python
import time # timing, for performance monitoring
# activate latex text rendering
from matplotlib import rc
rc('text', usetex=True)
import cumulants # cumulants.py implements the algorithm presented in Ref [2]
# We define the transition rates as symbolic expressions...
w01, w10, w02, w20, w03, w30, w12, w21, w23, w32 = \
symbols("w_{01}, w_{10}, w_{02}, w_{20}, w_{03}, w_{30},"+\
"w_{12}, w_{21}, w_{23}, w_{32}",\
real=True, positive=True)
# ... and specify the model topology as a dictionary
model4State = {(0,1): w01, (1,0): w10,\
(0,2): w02, (2,0): w20,\
(0,3): w03, (3,0): w30,\
(1,2): w12, (2,1): w21,\
(2,3): w23, (3,2): w32 \
}
# Define the symbols for parameters, place-holders, etc.
b = symbols("b", positive=True)
f, g = symbols("f, g", real=True)
x, y = symbols("x, y", real=True)
u, v, w = symbols("u, v, w", positive=True)
# symmetric-antisymmetric substitutions
onehalf = Rational(1,2)
# Generate a substitution list to parametrize the symbolic transition rates
rate_cw = exp(+onehalf*f)
rate_ccw = exp(-onehalf*f)
# Outer circle, clockwise
w01p = rate_cw
w12p = rate_cw
w23p = rate_cw
w30p = rate_cw
# Outer circle, counter-clockwise
w03p = rate_ccw
w32p = rate_ccw
w21p = rate_ccw
w10p = rate_ccw
# Center rates
w02p = b*exp(+onehalf*g)
w20p = b*exp(-onehalf*g)
# Create the substitution list that gives the parametric dependence of the rates:
rates_parametrized = [(w01,w01p),(w10,w10p),(w02,w02p),(w20,w20p),\
(w12,w12p),(w21,w21p),(w23,w23p),(w32,w32p),(w30,w30p),(w03,w03p)]
# calculate the cumulants (this takes only a few seconds)
c, C = cumulants.getCumulants( model4State, [(0,1),(0,2)], rates_parametrized)
## WARNING: READ BEFORE EXECUTING THIS CELL
##
## Additional simplifications may help if numerical problems are encountered (e.g. for very stiff rate matrices).
## They can be very time-consuming.
## Note: The time of simplification steps strongly increases with the number of free symbolic parameters
##
## For the present example, they are NOT needed!
##
t0 = time.time() # Time the simplification steps
c = simplify(factor(c))
C = factor(simplify(C))
display(time.time() - t0)
## Calculate cycle affinities:
# First fundamental cycle (for chord $(0,1)$): (0,1,2,3):
aff0 = simplify((log((w01*w12*w23*w30)/(w10*w21*w32*w03))).subs(rates_parametrized))
# Second fundamental cycle (for chord $(0,2)$ (0,2,3):
aff1 = simplify((log((w02*w23*w30)/(w03*w32*w20))).subs(rates_parametrized))
# affinities should be $4f$ and $2f+g$, respectively:
display((aff0,aff1))
## Define expressions for quantities of interest
topc = c[0] # average current through top edge
cenc = c[1] # average current through center edge
ep = simplify(c[0]*aff0 + c[1]*aff1) # entropy production
topd = onehalf*C[0,0] # diffusion constants are half the (scaled) variances of the currents
cend = onehalf*C[1,1]
cov = C[0,1] #co-variance of both currents
res = 2*c[0].diff(f)/C[0,0] # response of top current to the driving affinity $f$ divided by top diffusion constant
#res = simplify(res)
# Show some analytical expressions for...
# ...top and center average currents
display('Average currents')
display(topc)
display(cenc)
# ...entropy production
display('Entropy production')
display(ep)
## ...diffusion constants and response (WARNING: these are longish expressions...)
#display('Diffusion constants')
#display(topd)
#display(cend)
#display('Normalized response')
#display(res)
## Lambdify all SymPy expressions into NumPy Expressions
topcL, cencL, epL, topdL, cendL, covL, resL\
= [ lambdify( (f,g,b), N(thing), "numpy" )\
for thing in ( topc, cenc, ep, topd, cend, cov, res\
) ]
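## Quick numerical sanity check (illustrative, not in the original notebook): at zero
## driving (f = g = 0) detailed balance holds, so the average currents and the entropy
## production should evaluate to (numerically) zero.
print(topcL(0.0, 0.0, 1.0), cencL(0.0, 0.0, 1.0), epL(0.0, 0.0, 1.0))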
## Prepare 2D plotting range
from pylab import meshgrid,cm,imshow,contour,clabel,colorbar,axis,title,show
# prepare the plotting grid
[xmin, xmax, ymin, ymax] = [-10,10,.01,20] # boundaries of the grid
resolution = 400 # plot resolution
plotarea = [xmin, xmax, ymin, ymax] # full plotarea
# prepare the plotting grid for kinesin 6 figures
xpts = linspace(xmin, xmax, resolution)
ypts = linspace(ymin, ymax, resolution)
X, Y = meshgrid(xpts, ypts)
## General setup for figures
fig_size = (6,5) # in inch
fs = 22 # font size
colormap1 = cm.gist_earth # linear color gradient (blue) for densityplots
colormap2 = cm.coolwarm # color gradient (red-white-blue) for densityplots with highlighted center
# font setup
font = {'family' : 'serif',
'color' : 'black',
'weight' : 'normal',
'size' : fs,
}
ts = 16 # tick+contour label size
figdir = "toymodel/"
## This function takes a lambda function and creates a (logarithmic) 2D plot
## g: lambda function with *exactly two* arguments
## t: title string
## x,y: x and y axis strings
## logplot: flag whether to plot in logscale or not
## crop: min/max values
def pplot(g, t, x='', y='', logplot=True, highlight=0, crop=[]):
fig = figure(figsize=fig_size)
if(logplot):
G = np.log(np.abs(g(X,Y)))/np.log(10)
GG = g(X,Y)
ccmm = colormap1
else:
G = g(X,Y)
GG = G
ccmm = colormap2
# the slicing parameter [::-1] reverses the y-axis before plotting
im = imshow( G[::-1], cmap=ccmm, extent=plotarea ) # drawing the function
if(len(crop)==2):
im.set_clim(vmin=crop[0], vmax=crop[1])
# adding the contour lines with labels
cset1 = contour( X,Y, G, arange(-20,20,1),linewidths=1,linestyles="-",colors='black')
stalling = contour( X,Y, GG, [highlight], linewidths=3,linestyles="-",colors='white')
# adding the colorbar on the right
cb = colorbar(im)
# latex fashion title
title(t, fontdict=font)
xlabel(x, fontdict=font)
ylabel(y, fontdict=font)
# Set tick label size
tick_params(axis='both', which='major', labelsize=ts )
#savefig(figdir+t+".png")
return(fig)
## NOTE: Our lambda functions take three parameters (f,g,b).
## For plotting, we need to define a new anonymous lambda function, that takes only two parameters
# Currents and EP depending on two forces f,g with same symmetric contribution on the center edge
pplot(lambda f,g: topcL(f,g,1),"Average current through top edge (log scale)",'$f$','$g$')
pplot(lambda f,g: cencL(f,g,1),"Average current through center edge (log scale)",'$f$','$g$')
pplot(lambda f,g: epL(f,g,1),"Steady state entropy production (log scale)",'$f$','$g$')
# Currents and EP depending on one force f=g while increasing strength of center edge
pplot(lambda f,b: topcL(f,f,b),"Average current through top edge (log scale)",'$f$','$b$')
pplot(lambda f,b: cencL(f,f,b),"Average current through center edge (log scale)",'$f$','$b$')
pplot(lambda f,b: epL(f,f,b),"Steady state entropy production (log scale)",'$f$','$b$')
print("Done")
# Diffusion constant and normalized response depending on two forces f,g with same symmetric contribution on the center edge
pplot(lambda f,g: topdL(f,g,1),"Diffusion top edge (log scale)",'$f$','$g$')
pplot(lambda f,g: cendL(f,g,1),"Diffusion center edge (log scale)",'$f$','$g$')
pplot(lambda f,g: resL(f,g,1),"Normalized $f$-response in top edge",'$f$','$g$', False)
# Diffusion constant and normalized response depending on one force f=g while increasing strength of center edge
pplot(lambda f,b: topdL(f,f,b),"Average current through top edge (log scale)",'$f$','$b$')
pplot(lambda f,b: cendL(f,f,b),"Average current through center edge (log scale)",'$f$','$b$')
pplot(lambda f,b: resL(f,f,b),"Normalized $f$-response in top edge",'$f$','$b$', False)
print("Done")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Calculating the surface energy
Step2: When generating a slab of LiFePO4, we also want to be careful
Step3: There are a couple of rules before we actually run calculations on some of these
|
<ASSISTANT_TASK:>
Python Code:
# Import the neccesary tools to generate surfaces
from pymatgen.core.surface import SlabGenerator, generate_all_slabs, Structure, Lattice
# Import the neccesary tools for making a Wulff shape
from pymatgen.analysis.wulff import WulffShape
import os
# Let's start with fcc Ni
lattice = Lattice.cubic(3.508)
Ni = Structure(lattice, ["Ni", "Ni", "Ni", "Ni"],
[[0,0,0], [0,0.5,0],
[0.5,0,0], [0,0,0.5]
])
# We'll use the SlabGenerator class to get a single slab. We'll start with the
# (111) slab of Ni. Plug in the CONVENTIONAL unit cell of your structure, the
# maximum Miller index value to generate the different slab orientations along
# with the minimum slab and vacuum size in Angstroms
slabgen = SlabGenerator(Ni, (1,1,1), 10, 10)
# If we want to find all terminations for a particular Miller index orientation,
# we use the get_slabs() method. This returns a LIST of slabs rather than a single
# slab. When generating a slab for a particular orientation, there are sometimes
# more than one location we can terminate or cut the structure to create a slab. The
# simplest example of this would be the Si(Fd-3m) (111) slab which can be cut or
# terminated in two different locations along the vector of the Miller index. For a
# fcc structure such as Ni however, there should only be one way to cut a (111) slab.
all_slabs = slabgen.get_slabs()
print("The Ni(111) slab only has %s termination." %(len(all_slabs)))
# Let's try this for a diamond Silicon structure
lattice = Lattice.cubic(5.46873)
Si = Structure(lattice, ["Si", "Si", "Si", "Si",
"Si", "Si", "Si", "Si"],
[[0.00000, 0.00000, 0.50000],
[0.75000, 0.75000, 0.75000],
[0.00000, 0.50000, 0.00000],
[0.75000, 0.25000, 0.25000],
[0.50000, 0.00000, 0.00000],
[0.25000, 0.75000, 0.25000],
[0.50000, 0.50000, 0.50000],
[0.25000, 0.25000, 0.75000]])
slabgen = SlabGenerator(Si, (1,1,1), 10, 10)
print("Notice now there are actually now %s terminations that can be \
generated in the (111) direction for diamond Si" %(len(slabgen.get_slabs())))
# The simplest way to do this is to just use generate_all_slabs which finds all the unique
# Miller indices for a structure and uses SlabGenerator to create all terminations for all of them.
all_slabs = generate_all_slabs(Si, 3, 10, 10)
print("%s unique slab structures have been found for a max Miller index of 3" %(len(all_slabs)))
# What are the Miller indices of these slabs?
for slab in all_slabs:
print(slab.miller_index)
print("Notice some Miller indices are repeated. Again, this is due to there being more than one termination")
# Now let's assume that we then calculated the surface energies for these slabs
# Surface energy values in J/m^2
surface_energies_Ni = {(3, 2, 0): 2.3869, (1, 1, 0): 2.2862,
(3, 1, 0): 2.3964, (2, 1, 0): 2.3969,
(3, 3, 2): 2.0944, (1, 0, 0): 2.2084,
(2, 1, 1): 2.2353, (3, 2, 2): 2.1242,
(3, 2, 1): 2.3183, (2, 2, 1): 2.1732,
(3, 3, 1): 2.2288, (3, 1, 1): 2.3039,
(1, 1, 1): 1.9235}
miller_list = surface_energies_Ni.keys()
e_surf_list = surface_energies_Ni.values()
# We can now construct a Wulff shape with an accuracy up to a max Miller index of 3
wulffshape = WulffShape(Ni.lattice, miller_list, e_surf_list)
# Let's get some useful information from our wulffshape object
print("shape factor: %.3f, anisotropy: \
%.3f, weighted surface energy: %.3f J/m^2" %(wulffshape.shape_factor,
wulffshape.anisotropy,
wulffshape.weighted_surface_energy))
# If we want to see what our Wulff shape looks like
wulffshape.show()
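# Illustrative extra: if your pymatgen version exposes it, the Wulff shape can also
# report the fractional area contributed by each facet.
for hkl, frac in wulffshape.area_fraction_dict.items():
    print(hkl, round(frac, 3))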
# Lets try something a little more complicated, say LiFePO4
from pymatgen.util.testing import PymatgenTest
# Get the LiFePO4 structure
LiFePO4 = PymatgenTest.get_structure("LiFePO4")
# Let's add some oxidation states to LiFePO4, this will be
# important when we want to take surface polarity into consideration
LiFePO4.add_oxidation_state_by_element({"Fe": 2, "Li": 1, "P": 5, "O": -2})
slabgen = SlabGenerator(LiFePO4, (0,0,1), 10, 10)
all_slabs = slabgen.get_slabs(bonds={("P", "O"): 2})
# any bond between P and O less than 2 Angstroms cannot be broken when generating slabs
print("For the (001) slab of LiFePO4, there are %s terminations." %(len(all_slabs)))
for slab in all_slabs:
print(slab.is_polar(), slab.is_symmetric())
# Notice that none of the terminations in the (001) direction do not simultaneously satisfy
# our two criteria so a (001) surface with a reasonable surface energy cannot be calculated.
# In such cases, we need to modify the surfaces of our slabs. A future release of surface.py
# will implement such modification techniques for these cases.
# Now let's generate all possible slabs for a max Miller index of 2 for LiFePO4 and see if
# any of these surfaces can be calculated to yield reasonable and accurate surface energy
# values. This may take a while.
all_slabs = generate_all_slabs(LiFePO4, 2, 10, 10, bonds={("P", "O"): 2})
print("There is a total of %s slabs generated including polar, asymmetric, and \
P-O terminated slabs" %(len(all_slabs)))
# store any slabs for calculation that satisfy our criteria
valid_slabs = []
for slab in all_slabs:
if not slab.is_polar() and slab.is_symmetric():
print(slab.miller_index)
valid_slabs.append(slab)
print("Number of slabs that are nonpolar, symmetric and do not terminate P-O bonds: %s" %(len(valid_slabs)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we'll generate a test field grid. You only need to do this the first time you run the simulator.
Step2: Let's load the Fields object with the default field grid. Fields is a thin wrapper around a pandas DataFrame containing the field information.
Step3: The raw fieldid and coordinates are stored as a pandas Dataframe in the .fields attribute
Step4: Now let's calculate their altitude and azimuth at a specific time using the astropy.time.Time object
Step5: Demonstrating low-level access to fields by the fieldid index (usually not required)
Step6: We can select fields with conditionals
Step7: It's easier to use the select_fields convenience function, though. It returns a boolean Series indexed by fieldid that we can use to do calculations on subsets of the field grid.
Step8: Calculate the current altitude and azimuth of the selected fields
Step9: Calculating the overhead time (max of ha, dec, dome slews and readout time)
|
<ASSISTANT_TASK:>
Python Code:
# hack to get the path right
import sys
sys.path.append('..')
import ztf_sim
from astropy.time import Time
import pandas as pd
import numpy as np
import astropy.units as u
import pylab as plt
ztf_sim.fields.generate_test_field_grid()
f = ztf_sim.fields.Fields()
f.fields.head()
f.alt_az(Time.now()).head()
f.fields.loc[853]
f.fields['dec'] > -30.
cuts = f.select_fields(dec_range=[0,10],gridid=0,ecliptic_lat_range=[-5,5])
cuts.head()
f.alt_az(Time.now(),cuts=cuts)
f.overhead_time(853,Time.now())
f = ztf_sim.fields.Fields()
Exposure_time = 60*u.second
Night_length=9*u.h
time0 = Time('2015-09-10 20:00:00') + 7*u.h
time = time0
f.fields = f.fields.join(pd.DataFrame(np.zeros(len(f.fields)),columns=['observed']))
f.fields = f.fields.join(pd.DataFrame(np.zeros(len(f.fields)),columns=['possibleToObserve']))
def observe(f, nightStart):
time=nightStart
goodAltitude = f.alt_az(time)['alt'] > 20
shouldObserve = f.fields['observed'] == 0
good = goodAltitude & shouldObserve #& f.alt_az(time+1*u.h)['alt'] < 20 # start with a field which won't be observable later
    if not np.all(good):
good = goodAltitude & shouldObserve
fid = f.fields[good].iloc[0].name
f.fields['observed'][fid]+=1
f.fields['possibleToObserve'][goodAltitude] = 1
time += Exposure_time
while time < nightStart + Night_length:
goodAltitude = f.alt_az(time)['alt'] > 20
shouldObserve = f.fields['observed'] == 0
good = goodAltitude & shouldObserve
f.fields['possibleToObserve'][goodAltitude] = 1
if not np.any(good):
time += 60*u.s
continue
slewTime = f.overhead_time(fid,time)[good]
fid = int(slewTime.idxmin())
# print slewTime['overhead_time'][fid]
time += Exposure_time + slewTime['overhead_time'][fid]*u.second
f.fields['observed'][fid]+=1
# print time-7*u.h
# First night
observe(f,time)
fieldsPossible = np.sum(f.fields['possibleToObserve'])
print fieldsPossible
fieldsObserved = np.sum(f.fields['observed'])
print fieldsObserved
meanTime = (Night_length.to(u.s)-fieldsObserved*Exposure_time)/(fieldsObserved-1)
print meanTime
# Second night
time=time0+24*u.h
observe(f,time)
fieldsPossible = np.sum(f.fields['possibleToObserve'])
print fieldsPossible
fieldsObserved = np.sum(f.fields['observed'])
print fieldsObserved
meanTime = (2*Night_length.to(u.s)-fieldsObserved*Exposure_time)/(fieldsObserved-1)
print meanTime
for dec in np.append(np.linspace(-90,90,10),0):
ra=np.linspace(0, 360,1000)
x,y = raDec2xy(ra,dec)
plt.plot(x,y,'k')
for ra in np.linspace(0,360,10):
dec=np.linspace(-90, 90,1000)
x,y = raDec2xy(ra,dec)
plt.plot(x,y,'k')
x,y = raDec2xy(f.fields['ra'],f.fields['dec'])
plt.plot(x,y,'o',color=(.8,.8,.8))
plt.show()
def raDec2xy(ra,dec):
# Using Aitoff projections (from Wiki) returns x-y coordinates on a plane of RA and Dec
theta = np.deg2rad(dec)
phi = np.deg2rad(ra)-np.pi #the range is [-pi,pi]
alpha=np.arccos(np.cos(theta)*np.cos(phi/2))
    x=2*np.cos(theta)*np.sin(phi/2)/np.sinc(alpha/np.pi) # Python's sinc is normalized, hence the /pi
y=np.sin(theta)/np.sinc(alpha/np.pi)
return x,y
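# (Added sanity check -- a minimal sketch of how the projection behaves.)
# The Aitoff projection above should map the centre of the shifted RA range
# (ra=180 deg, i.e. phi=0) at dec=0 to the origin of the plane.
x0, y0 = raDec2xy(180.0, 0.0)
print("centre maps to x=%.3f, y=%.3f" % (x0, y0))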
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using interact for animation with data
Step2: To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays
Step3: Compute a 2d NumPy array called phi
Step4: Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
Step5: Use interact to animate the plot_soliton_data function versus time.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
def soliton(x, t, c, a):
i=(((c**(1/2))/2)*(x-c*t-a))
return ((1/2)*c*(np.cos(i)**(-2)))
assert np.allclose(soliton(np.array([0]),0.0,1.0,0.0), np.array([0.5]))
tmin = 0.0
tmax = 10.0
tpoints = 100
t = np.linspace(tmin, tmax, tpoints)
xmin = 0.0
xmax = 10.0
xpoints = 200
x = np.linspace(xmin, xmax, xpoints)
c = 1.0
a = 0.0
phi=np.ndarray((xpoints,tpoints), dtype=float)
for i in range(200):
for j in range(100):
phi[i,j]=soliton(x[i],t[j],c,a)
assert phi.shape==(xpoints, tpoints)
assert phi.ndim==2
assert phi.dtype==np.dtype(float)
assert phi[0,0]==soliton(x[0],t[0],c,a)
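# (Added illustration -- a minimal sketch, assuming the soliton() definition above.)
# The double loop can be replaced by a vectorized evaluation on a grid; the result
# matches phi because soliton() only uses element-wise NumPy operations.
X, T = np.meshgrid(x, t, indexing='ij')  # shapes (xpoints, tpoints)
phi_vectorized = soliton(X, T, c, a)
assert np.allclose(phi, phi_vectorized)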
def plot_soliton_data(i=0):
plt.figure(figsize=(9,6))
plt.plot(x,soliton(x,t[i],c,a))
plt.box(False)
plt.ylim(0,6000)
plt.grid(True)
plt.ylabel('soliton wave')
plt.xlabel('x')
plot_soliton_data(0)
assert True # leave this for grading the plot_soliton_data function
interact(plot_soliton_data, i=(0,100,5))
assert True # leave this for grading the interact with plot_soliton_data cell
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In addition to the standard setup as above, we need a way to randomize the snapshots. For this simple example, we actually won't randomize them (NoModification), but typically we would assign totally random velocities from a Boltzmann distribution (RandomVelocities).
Step2: Now we set up the committor simulation and run it. In this example, we use a list of two snapshots. If you only want to do the committor from one snapshot, you don't have to wrap it in a list.
Step3: Now we do the analysis
Step4: Analyzing shooting points from a TIS calculation
Step5: Committor as a function of one variable
Step6: Here we've used the order parameter associated with state A as our collective variable. However, it is important to keep in mind that the choice of collective variable is completely arbitrary. For example, we could do the same analysis by looking at the distance from the central point (0.0, 0.0). The plot would look different, since it is a different CV, but the procedure would remain the same.
|
<ASSISTANT_TASK:>
Python Code:
pes = toys.LinearSlope(m=[0.0], c=[0.0]) # flat line
topology = toys.Topology(n_spatial=1, masses=[1.0], pes=pes)
integrator = toys.LeapfrogVerletIntegrator(0.1)
options = {
'integ': integrator,
'n_frames_max': 1000,
'n_steps_per_frame': 1
}
engine = toys.Engine(options=options, topology=topology)
snap0 = toys.Snapshot(coordinates=np.array([[0.0]]),
velocities=np.array([[1.0]]),
engine=engine)
snap1 = toys.Snapshot(coordinates=np.array([[0.2]]),
velocities=np.array([[1.0]]),
engine=engine)
cv = paths.FunctionCV("Id", lambda snap : snap.coordinates[0][0])
# these are our states:
left = paths.CVDefinedVolume(cv, float("-inf"), -1.0).named("left")
right = paths.CVDefinedVolume(cv, 1.0, float("inf")).named("right")
# set up a file for storage
storage = paths.Storage("committor_test.nc", mode="w", template=snap0)
## more typical:
#randomizer = paths.RandomVelocities(beta=1.0)
## for testing purposes:
randomizer = paths.NoModification()
simulation = paths.CommittorSimulation(storage=storage,
engine=engine,
states=[left, right],
randomizer=randomizer,
initial_snapshots=[snap0, snap1])
simulation.run(n_per_snapshot=10)
results = paths.ShootingPointAnalysis(steps=storage.steps, states=[left, right])
results[snap0]
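# The same lookup works for the other initial snapshot we passed in (snap1)
results[snap1]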
# prettier printing of the same
# first version uses number indexes to label snapshots
results.to_pandas()
# second version uses given label_function
results.to_pandas(label_function=cv)
store2 = paths.AnalysisStorage("mstis.nc")
stateA = store2.volumes.find("A")
stateB = store2.volumes.find("B")
stateC = store2.volumes.find("C")
results = paths.ShootingPointAnalysis(store2.steps, [stateA, stateB, stateC])
len(results)
len(store2.steps)
opA = store2.cvs['opA']
distA = lambda snap : np.sqrt(opA(snap))
bins = [0.0+0.05*i for i in range(31)]
hist, bins = results.committor_histogram(distA, stateA, bins)
plt.bar(left=bins[:-1], height=hist, width=[bins[i+1]-bins[i] for i in range(len(bins)-1)], log=True)
plt.xlim(0.0, 1.6);
twoD_hash = lambda snap: (snap.xyz[0][0], snap.xyz[0][1])
bins = [-1.0+i*0.05 for i in range(41)]
hist, bins_x, bins_y = results.committor_histogram(twoD_hash, stateA, bins)
# when using pcolor, we need to transpose the histogram (hist.T)
plt.pcolor(bins_x, bins_y, hist.T)
plt.clim(0.0, 1.0)
plt.colorbar();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's say that I want flow (parameterCd=00060) for site '02325000'. All of the tsgettoolbox functions create a pandas DataFrame.
Step2: 'tstoolbox ...'
Step3: 'tstoolbox plot' has many options that can be used to modify the plot.
Step4:
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from tsgettoolbox import tsgettoolbox
df = tsgettoolbox.nwis_dv(sites="02325000", startDT="2000-01-01", parameterCd="00060")
df.head() # The .head() function gives the first 5 values of the time-series
from tstoolbox import tstoolbox
tstoolbox.plot(input_ts=df, ofilename="plot_api.png")
tstoolbox.plot(
input_ts=df,
ofilename="flow_api.png",
ytitle="Flow (cfs)",
title="02325000: FENHOLLOWAY RIVER NEAR PERRY, FLA",
legend=False,
)
mdf = tstoolbox.aggregate(input_ts=df, agg_interval="M", statistic="mean")
tstoolbox.plot(input_ts=mdf, drawstyle="steps-pre", ofilename="flow_api_monthly.png")
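# (Added sketch -- assumes tstoolbox.aggregate also accepts an annual interval code;
# "A" follows the pandas offset aliases, adjust if your version differs.)
adf = tstoolbox.aggregate(input_ts=df, agg_interval="A", statistic="mean")
tstoolbox.plot(input_ts=adf, drawstyle="steps-pre", ofilename="flow_api_annual.png")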
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: adder(n)
|
<ASSISTANT_TASK:>
Python Code:
data_inorder = pd.read_csv('Data\\adder_inorder_data.csv')
data_inorder = data_inorder[['Steps', 'MSE']]
data_inorder = data_inorder.sort_values(['Steps'])
data_inorder.head(9)
data_rnd_0 = pd.read_csv('Data\\adder_random_0_data.csv')
data_rnd_0 = data_rnd_0[['Steps', 'MSE']]
data_rnd_0 = data_rnd_0.sort_values(['Steps'])
data_rnd_1 = pd.read_csv('Data\\adder_random_1_data.csv')
data_rnd_1 = data_rnd_1[['Steps', 'MSE']]
data_rnd_1 = data_rnd_1.sort_values(['Steps'])
data_rnd_2 = pd.read_csv('Data\\adder_random_2_data.csv')
data_rnd_2 = data_rnd_2[['Steps', 'MSE']]
data_rnd_2 = data_rnd_2.sort_values(['Steps'])
data_rnd_3 = pd.read_csv('Data\\adder_random_3_data.csv')
data_rnd_3 = data_rnd_3[['Steps', 'MSE']]
data_rnd_3 = data_rnd_3.sort_values(['Steps'])
data_rnd_4 = pd.read_csv('Data\\adder_random_4_data.csv')
data_rnd_4 = data_rnd_4[['Steps', 'MSE']]
data_rnd_4 = data_rnd_4.sort_values(['Steps'])
plt.plot(data_inorder['Steps'].ix[:20], data_inorder['MSE'].ix[:20], 'bo',
data_rnd_0['Steps'].ix[:20], data_rnd_0['MSE'].ix[:20],
data_rnd_1['Steps'].ix[:20], data_rnd_1['MSE'].ix[:20],
data_rnd_2['Steps'].ix[:20], data_rnd_2['MSE'].ix[:20],
data_rnd_3['Steps'].ix[:20], data_rnd_3['MSE'].ix[:20],
data_rnd_4['Steps'].ix[:20], data_rnd_4['MSE'].ix[:20])
plt.show()
plt.plot(data_inorder['Steps'].ix[30:], data_inorder['MSE'].ix[30:], 'bo',
data_rnd_0['Steps'].ix[30:], data_rnd_0['MSE'].ix[30:],
data_rnd_1['Steps'].ix[30:], data_rnd_1['MSE'].ix[30:],
data_rnd_2['Steps'].ix[30:], data_rnd_2['MSE'].ix[30:],
data_rnd_3['Steps'].ix[30:], data_rnd_3['MSE'].ix[30:],
data_rnd_4['Steps'].ix[30:], data_rnd_4['MSE'].ix[30:])
plt.show()
plt.plot(data_rnd_1['Steps'].ix[30:], data_rnd_1['MSE'].ix[30:],
data_rnd_2['Steps'].ix[30:], data_rnd_2['MSE'].ix[30:],
data_rnd_4['Steps'].ix[30:], data_rnd_4['MSE'].ix[30:])
plt.show()
data_inorder = pd.read_csv('Data\\adder_inorder_data.csv')
data_inorder = data_inorder[['Steps', 'MSE']]
data_inorder = data_inorder.sort_values(['Steps'])
arr = np.zeros(5)
arr[0] = 5
arr = ['100', '200', '300', '400', '500', '600', '700',
'1000','1100','1200','1300', '1400','1500', '1600','1700','1800', '1900',
'2000', '2100', '2300', '2400', '2500']
df_arr = []
for i in range(len(arr)):
temp = pd.read_csv('Data\\determinant_' + arr[i] +'_layer_by_100.csv', header=None)
temp = temp.T
temp.columns=['Second', 'MSE']
temp['First'] = arr[i]
temp = temp.sort_values(['First', 'Second'])
df_arr.append(temp)
len(df_arr)
temp = pd.read_csv('Data\\determinant_layer_by_100.csv', header=None)
temp = temp.T
temp.columns=['MSE', 'Layer']
plt.plot(temp['Layer'], temp['MSE'])
frames = [df_arr[0], df_arr[1], df_arr[2], df_arr[3], df_arr[4], df_arr[5],
df_arr[6], df_arr[7], df_arr[8], df_arr[9], df_arr[10], df_arr[11],
df_arr[12], df_arr[13], df_arr[14], df_arr[15], df_arr[16], df_arr[17],
df_arr[18], df_arr[19], df_arr[20], df_arr[21]]
result = pd.concat(frames)
result = result.reset_index(drop=True)
result.sort_values(['MSE'])
res1 = result.as_matrix(columns=['First'])
res2 = result.as_matrix(columns=['Second'])
res3 = result.as_matrix(columns=['MSE'])
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_trisurf(res1[:,0], res2[:,0], res3[:,0], cmap=cm.jet, linewidth=0.2)
plt.show()
min = 30
for i in range(len(arr)):
plt.plot(df_arr[i]['Second'], df_arr[i]['MSE'])
plt.show()
num = 5
df_arr[5].as_matrix(columns=['MSE'])
df_arr[5].head(21)
for i in range(len(arr)):
plt.plot(df_arr[i]['Second'], df_arr[i]['MSE'])
plt.ylim(.3, .5)
plt.show()
plt.plot(df_arr[0]['Second'], df_arr[0]['MSE'])
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
import matplotlib.pyplot as plt
import numpy as np
n_angles = 36
n_radii = 8
# An array of radii
# Does not include radius r=0, this is to eliminate duplicate points
radii = np.linspace(0.125, 1.0, n_radii)
# An array of angles
angles = np.linspace(0, 2*np.pi, n_angles, endpoint=False)
# Repeat all angles for each radius
angles = np.repeat(angles[...,np.newaxis], n_radii, axis=1)
# Convert polar (radii, angles) coords to cartesian (x, y) coords
# (0, 0) is added here. There are no duplicate points in the (x, y) plane
x = np.append(0, (radii*np.cos(angles)).flatten())
y = np.append(0, (radii*np.sin(angles)).flatten())
# Pringle surface
z = np.sin(-x*y)
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_trisurf(x, y, z, cmap=cm.jet, linewidth=0.2)
plt.show()
z
arr = np.array(arr)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sizes per distributor
Step2: Print joint table with first 60 sizes.
Step3: Calculate entropy
Step4: Create new collection from data only with '_id', 'source' and 'size' fields
Step5: Sizes list per distributor
Step8: Tagging according to size
Step9: Let's calculate data entropy for results
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
from scipy.stats import entropy
from tabulate import tabulate
from pymongo import MongoClient
import matplotlib.pyplot as plt
plt.style.use('seaborn')
plt.rcParams["figure.figsize"] = (20,8)
db = MongoClient()['stores']
TOTAL_NUMBER_OF_PRODUCTS = db.data.count()
results = db.data.aggregate(
[
{
"$group": {
"_id": "$size",
"count": {"$sum": 1},
}
},
{
"$sort": {
"count": -1,
}
}
]
)
ALL_SIZES = [(str(x['_id']), x['count']) for x in list(results)]
print('Number of uniq. sizes: {}'.format(len(ALL_SIZES)))
DISTRIBUTORS = list(db.data.distinct("source"))
results = db.data.aggregate(
[
{
"$group": {
"_id": "$source",
"sizes": {"$addToSet": "$size"},
}
},
{
"$project": {
"_id": 1,
"count": {"$size": "$sizes"}
}
},
{
"$sort": {
"count": -1,
}
}
]
)
SIZES_PER_DISTRIBUTOR = [
(str(x['_id']), x['count'])
for x in list(results)
]
print(tabulate(SIZES_PER_DISTRIBUTOR,
headers=['Distributor', 'Number of uniq. Sizes'],
tablefmt="simple"))
df_values_by_key = pd.DataFrame(SIZES_PER_DISTRIBUTOR,
index=[x[0] for x in SIZES_PER_DISTRIBUTOR],
columns=['Distributor', 'Sizes'])
df_values_by_key.iloc[::-1].plot.barh()
import operator
from functools import reduce  # needed on Python 3, where reduce is no longer a builtin
all_sizes_table = []
number_of_sizes = 180
for sizes in zip(ALL_SIZES[0:number_of_sizes:3],
ALL_SIZES[1:number_of_sizes:3],
ALL_SIZES[2:number_of_sizes:3]):
all_sizes_table.append(list(reduce(operator.add, sizes)))
print(
tabulate(
all_sizes_table[:60],
headers=3*['Size', 'Number of Products'],
tablefmt="simple"))
# calculate probability vector
p = [x[1] for x in ALL_SIZES]
size_prob_vector = np.array(p) / TOTAL_NUMBER_OF_PRODUCTS
# calculate entropy
first_entropy = entropy(size_prob_vector)
print("Data entropy:", first_entropy)
# create new collection
db.data.aggregate(
[
{
"$project": {
"_id": 1,
"source": 1,
"size": 1,
},
},
{
"$out": "size_mapping"
}
]
)
print('Db "size_mapping" created')
# create indexes
db.size_mapping.create_index([("size", 1)])
db.size_mapping.create_index([("source", 1)])
print('Indexes "size", "source" for "size_mapping" created.')
print(list(db.size_mapping.find().limit(5)))
SIZES_LIST_PER_DISTRIBUTOR = db.size_mapping.aggregate(
[
{
"$group": {
"_id": "$source",
"sizes": {"$addToSet": "$size"},
},
},
{
"$project": {
"_id": 1,
"sizes": 1,
"number_of_sizes": {"$size": "$sizes"},
}
},
{
"$sort": {
"number_of_sizes": -1
}
}
]
)
TABLE_SIZES_LIST_PER_DISTRIBUTOR = [
(str(x['_id']), x['sizes'], x['number_of_sizes'])
for x in SIZES_LIST_PER_DISTRIBUTOR
]
for distr, sizes, num in TABLE_SIZES_LIST_PER_DISTRIBUTOR:
print('Sizes for: "{}"'.format(distr))
print(", ".join(sizes))
print(80*"-")
SIZES_MAPPING = {
'ALL': [],
'NO SIZE': ['PLAIN', 'CONE', 'BLANKET'],
'ONE': ['OS', 'ONE SIZE', '1 SIZ', 'O/S'],
'XS': ['XXS', 'XX-SMALL', '2XS'],
'S': ['SMALL', 'S/M'],
'M': ['MEDIUM', 'S/M', 'M/L'],
'L': ['LARGE', 'L/XL', 'M/L'],
'XL': ['EXTRA', 'XLT', 'XT', 'L/XL'],
'2XL': ['2X', 'XXL', '2XT', '2XLL', '2X/', '2XLT'],
'3XL': ['3X', '3XT', '3XLL', '3XLT'],
'4XL': ['4X', '4XT', '4XLT'],
'5XL': ['5X', '5XT', '5XLT'],
'6XL': ['6X'],
}
def build_matching_table(matching_rules):
    """Build matching table from matching rules
    :param matching_rules: matching rules used to build matching table
    :type matching_rules: dict
    :return: matching table `{'S/M': ['S', 'M'], '2X': ['2XL'], ...}`
    :rtype: dict
    """
matching_table = {}
# transform matching rules to the "shortcut": "group_key" table
for key, values in matching_rules.items():
if not values: # skip undefined rules i.e. "[]"
continue
# add rule for key
if key not in matching_table:
# NOTE: set('ab') would be {'a', 'b'}
# so it's impossible to matching_table[key] = set(key)
matching_table[key] = set()
matching_table[key].add(key)
for value in values:
if value not in matching_table:
matching_table[value] = set()
matching_table[value].add(key)
else:
matching_table[value].add(key)
return matching_table
MATCHING_TABLE = build_matching_table(SIZES_MAPPING)
print(tabulate(MATCHING_TABLE.items(), headers=['From', 'To'], tablefmt="simple"))
# process data into the new table
# def get_groups(mtable, size):
# Get size groups for the given `size` according to matching table
# :param size: size (case insensetive)
# :type size: str
# :return: list of strings i.e. size groups or ``['UNDEFINED']``
# if not found
# :rtype: list or ['UNDEFINED']
#
# return list(mtable.get(size, default=size))
# for k, v in MATCHING_TABLE.items():
# res = db.size_mapping.update_many(
# {"size": k},
# {"$set": {"size": get_groups(MATCHING_TABLE, k)}})
# print(res.raw_result)
results = db.size_mapping.aggregate(
[
{
"$group": {
"_id": "$size",
"count": {"$sum": 1},
}
},
{
"$sort": {
"count": -1,
}
}
]
)
NEW_SIZES = [(str(x['_id']), x['count']) for x in list(results)]
print(
"\n" +
tabulate(NEW_SIZES[:20], headers=['Size', 'Number of Products'], tablefmt="orgtbl") +
"\n"
)
# calculate probability vector
p = []
for _, count in NEW_SIZES:
p.append(count)
size_prob_vector = np.array(p) / TOTAL_NUMBER_OF_PRODUCTS
# calculate entropy
first_entropy = entropy(size_prob_vector)
print("Data entropy: ", first_entropy)
from functools import reduce
total_matched_products = (sum([x[1] for x in NEW_SIZES[:11]]))
percent_from_db_total = round((total_matched_products / TOTAL_NUMBER_OF_PRODUCTS) * 100, 2)
print("Matched: {} Percent from total: {}".format(total_matched_products, percent_from_db_total))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will use the data-speech-commands database composed of 105,000 WAVE audio files of people saying thirty different words. We will use only a subset of this database.
Step2: Precompute all MFCCs
Step3: Prepare train/val dataset
Step4: Leave P Out Cross Validation
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import glob
import librosa
import numpy as np
DATABASE_PATH = '/Users/pierrerouanet/Downloads/data-speech_commands_v0.02'
labels = {'cat', 'dog', 'house', 'happy', 'zero'}
labels
# We will use only N occurrences per word
N = 25
mfccs = []
true_labels = []
for l in labels:
sounds = glob.glob(os.path.join(DATABASE_PATH, l, '*.wav'))
np.random.shuffle(sounds)
sounds = sounds[:N]
for s in sounds:
y, sr = librosa.load(s)
mfcc = librosa.feature.mfcc(y, sr, n_mfcc=13)
mfccs.append(mfcc.T)
true_labels.append(l)
mfccs = np.array(mfccs)
true_labels = np.array(true_labels)
val_percent = 0.2
n_val = int(val_percent * len(true_labels))
I = np.random.permutation(len(true_labels))
I_val, I_train = I[:n_val], I[n_val:]
from dtw import dtw
def cross_validation(train_indices, val_indices):
score = 0.0
for i in val_indices:
x = mfccs[i]
dmin, jmin = np.inf, -1
for j in train_indices:
y = mfccs[j]
d, _, _, _ = dtw(x, y, dist=lambda x, y: np.linalg.norm(x - y, ord=1))
if d < dmin:
dmin = d
jmin = j
score += 1.0 if (true_labels[i] == true_labels[jmin]) else 0.0
return score / len(val_indices)
rec_rate = cross_validation(I_train, I_val)
print('Recognition rate {}%'.format(100. * rec_rate))
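# (Added illustration -- a minimal sketch of a single DTW comparison,
# using the same distance function as cross_validation above.)
d, cost, acc_cost, path = dtw(mfccs[I_val[0]], mfccs[I_train[0]],
                              dist=lambda x, y: np.linalg.norm(x - y, ord=1))
print('DTW distance between a "{}" sample and a "{}" sample: {:.1f}'.format(
    true_labels[I_val[0]], true_labels[I_train[0]], d))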
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Latitude, Longitude for any pixel in a GeoTiff File
Step3: These global coordinates are in a projected coordinate system, which is a representation of the spheroidal earth's surface, flattened and distorted onto a plane.
Step4: Reverse Geocoding
Step5: --------------------------------------------------------------------------------
Step6: The above wrapper for the Google API is not good enough for us. It's not providing us with the district.
Step7: This is what we need: we are getting the district name for given lat, lon coordinates
Step8: Now let's check for an image from Rajasthan
Step9: Bing Maps REST API
Step10: Bing API Test
Step11: We have another player in the ground!
|
<ASSISTANT_TASK:>
Python Code:
from osgeo import ogr, osr, gdal
# opening the geotiff file
ds = gdal.Open('G:\BTP\Satellite\Data\Test2\LE07_L1GT_147040_20050506_20170116_01_T2\LE07_L1GT_147040_20050506_20170116_01_T2_B1.TIF')
col, row, band = ds.RasterXSize, ds.RasterYSize, ds.RasterCount
print(col, row, band)
xoff, a, b, yoff, d, e = ds.GetGeoTransform()
print(xoff, a, b, yoff, d, e)
# details about the params: GDAL affine transform parameters
# xoff,yoff = left corner
# a,e = width, height of pixels
# b,d = rotation of the image (zero if image is north up)
def pixel2coord(x, y):
    """Returns global coordinates from coordinates x,y of the pixel"""
xp = a * x + b * y + xoff
yp = d * x + e * y + yoff
return(xp, yp)
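# (Added sketch -- the inverse mapping, assuming a north-up image, i.e. b = d = 0,
# so the affine transform can be inverted with simple division.)
def coord2pixel(xp, yp):
    """Returns pixel indices from global coordinates xp,yp (north-up images only)"""
    px = int((xp - xoff) / a)
    py = int((yp - yoff) / e)
    return(px, py)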
x,y = pixel2coord(col/2,row/2)
print (x, y)
# get the existing coordinate system
old_cs= osr.SpatialReference()
old_cs.ImportFromWkt(ds.GetProjectionRef())
# create the new coordinate system
wgs84_wkt = """
GEOGCS["WGS 84",
DATUM["WGS_1984",
SPHEROID["WGS 84",6378137,298.257223563,
AUTHORITY["EPSG","7030"]],
AUTHORITY["EPSG","6326"]],
PRIMEM["Greenwich",0,
AUTHORITY["EPSG","8901"]],
UNIT["degree",0.01745329251994328,
AUTHORITY["EPSG","9122"]],
AUTHORITY["EPSG","4326"]]
new_cs = osr.SpatialReference()
new_cs.ImportFromWkt(wgs84_wkt)
# create a transform object to convert between coordinate systems
transform = osr.CoordinateTransformation(old_cs,new_cs)
# converting into geographic coordinate system
lonx, latx, z = transform.TransformPoint(x,y)
print (latx, lonx, z)
# rb = ds.GetRasterBand(1)
px,py = col/2,row/2 # the pixel location
pix = ds.ReadAsArray(px,py,1,1)
print pix[0][0] # pixel value
coordinates = (latx,lonx)
results = rg.search(coordinates)
print results
print type(results)
print type(results[0])
results[0]
k = 4 # If we want k*k pixels in total from the image
for i in range(0,col,col/k):
for j in range(0,row,row/k):
# fetching the lat and lon coordinates
x,y = pixel2coord(i,j)
lonx, latx, z = transform.TransformPoint(x,y)
# fetching the name of district
coordinates = (latx,lonx)
results = rg.search(coordinates)
# The pixel value for that location
px,py = i,j
pix = ds.ReadAsArray(px,py,1,1)
pix = pix[0][0]
# printing
s = "The pixel value for the location Lat: {0:5.1f}, Long: {1:5.1f} ({2:15}) is {3:7}".format(latx,lonx,results[0]["name"],pix)
print (s)
g = geocoder.google([latx,lonx], method='reverse')
print type(g)
print g
print g.city
print g.state
print g.state_long
print g.country
print g.country_long
print g.address
results = Geocoder.reverse_geocode(latx, lonx)
print results.city
print results.country
print results.street_address
print results.administrative_area_level_1
print results.administrative_area_level_2 ## THIS GIVES THE DISTRICT !! <----------------
print results.administrative_area_level_3
## Converting the unicode string to ascii string
v = results.country
print type(v)
v = v.encode("ascii")
print type(v)
print v
k = 4 # If we want k*k pixels in total from the image
for i in range(0,col,col/k):
for j in range(0,row,row/k):
# fetching the lat and lon coordinates
x,y = pixel2coord(i,j)
lonx, latx, z = transform.TransformPoint(x,y)
# fetching the name of district
results = Geocoder.reverse_geocode(latx, lonx)
# The pixel value for that location
px,py = i,j
pix = ds.ReadAsArray(px,py,1,1)
pix = pix[0][0]
# printing
if results.country.encode('ascii') == 'India':
s = "Lat: {0:5.1f}, Long: {1:5.1f}, District: {2:12}, Pixel Val: {3:7}".format(latx,lonx,results.administrative_area_level_2,pix)
print (s)
import requests # To make the REST API Call
import json
(latx,lonx)
url = "http://dev.virtualearth.net/REST/v1/Locations/"
point = str(latx)+","+str(lonx)
key = "Aktjg1X8bLQ_KhLQbVueYMhXDEMo7OaTweIkBvFojInYE4tVxoTp1bGKWbtU_OPJ"
response = requests.get(url+point+"?key="+key)
print(response.status_code)
data = response.json()
print(type(data))
data
s = data["resourceSets"][0]["resources"][0]["address"]["adminDistrict2"]
s = s.encode("ascii")
s
url = "http://dev.virtualearth.net/REST/v1/Locations/"
key = "Aktjg1X8bLQ_KhLQbVueYMhXDEMo7OaTweIkBvFojInYE4tVxoTp1bGKWbtU_OPJ"
k = 10 # If we want k*k pixels in total from the image
for i in range(0,col,col/k):
for j in range(0,row,row/k):
############### fetching the lat and lon coordinates #######################################
x,y = pixel2coord(i,j)
lonx, latx, z = transform.TransformPoint(x,y)
############### fetching the name of district ##############################################
point = str(latx)+","+str(lonx)
response = requests.get(url+point+"?key="+key)
data = response.json()
s = data["resourceSets"][0]["resources"][0]["address"]
if s["countryRegion"].encode("ascii") != "India":
print ("Outside Indian Territory")
continue
district = s["adminDistrict2"].encode("ascii")
############### The pixel value for that location ##########################################
px,py = i,j
pix = ds.ReadAsArray(px,py,1,1)
pix = pix[0][0]
# printing
s = "Lat: {0:5.1f}, Long: {1:5.1f}, District: {2:12}, Pixel Val: {3:7}".format(latx,lonx,district,pix)
print (s)
import fiona
from shapely.geometry import Point, shape
# Change this for Win7
base = "/Users/macbook/Documents/BTP/Satellite/Data/Maps/Districts/Census_2011"
fc = fiona.open(base+"/2011_Dist.shp")
def reverse_geocode(pt):
for feature in fc:
if shape(feature['geometry']).contains(pt):
return feature['properties']['DISTRICT']
return "NRI"
k = 10 # If we want k*k pixels in total from the image
for i in range(0,col,col/k):
for j in range(0,row,row/k):
############### fetching the lat and lon coordinates #######################################
x,y = pixel2coord(i,j)
lonx, latx, z = transform.TransformPoint(x,y)
############### fetching the name of district ##############################################
point = Point(lonx,latx)
district = reverse_geocode(point)
if district=="NRI":
print ("Outside Indian Territory")
continue
############### The pixel value for that location ##########################################
px,py = i,j
pix = ds.ReadAsArray(px,py,1,1)
pix = pix[0][0]
# printing
s = "Lat: {0:5.1f}, Long: {1:5.1f}, District: {2:12}, Pixel Val: {3:7}".format(latx,lonx,district,pix)
print (s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exploring the dataset
Step2: Based on the above exploratory commands, I believe that the following questions can be answered using the dataset
Step3: Now that we have a data frame of information about each genre and the corresponding mean rating, we will visualize the data using matplotlib
Step4: Reporting findings/analyses
Step5: Now that we have a movie year column, let us list the data types of the columns in the movies data frame.
Step6: The above plot provides some interesting insight
|
<ASSISTANT_TASK:>
Python Code:
# The first step is to import the dataset into a pandas dataframe.
import pandas as pd
#path = 'C:/Users/hrao/Documents/Personal/HK/Python/ml-20m/ml-20m/'
path = '/Users/Harish/Documents/HK_Work/Python/ml-20m/'
movies = pd.read_csv(path+'movies.csv')
movies.shape
tags = pd.read_csv(path+'tags.csv')
tags.shape
ratings = pd.read_csv(path+'ratings.csv')
ratings.shape
links = pd.read_csv(path+'links.csv')
links.shape
movies.head()
tags.head()
ratings.head()
links.head()
# List of genres as a Python list
genres = ['Action','Adventure','Animation','Children','Comedy','Crime','Documentary','Drama','Fantasy','Film-Noir','Horror','Musical','Mystery','Romance','Sci-Fi','Thriller','War','Western']
genres_rating_list = []
# The loop reads each element of the above list
# For each iteration, one genre is selected from the movies data frame
# This selection of the data frame is then merged with the rating data frame to get the rating for that genre
# Once the new merged data frame is created, we use the mean function to get the mean rating for the genre
# The genre and the corresponding mean rating are then appended to the genres_rating Data Frame
# The entire looping takes long - can certainly be optimized for performance
for i in range(len(genres)):
fil = genres[i]+'_filter'
mov = genres[i]+'_movies'
rat = genres[i]+'_ratings'
rat_mean = rat+'_mean'
fil = movies['genres'].str.contains(genres[i])
mov = movies[fil]
rat = mov.merge(ratings, on='movieId', how='inner')
rat_mean = round(rat['rating'].mean(), 2)
#print(genres[i], round(rat_mean,2))
genres_rating_list.append(rat_mean)
df = {'Genre':genres, 'Genres Mean Rating':genres_rating_list}
genres_rating = pd.DataFrame(df)
genres_rating
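# (Added sketch -- an alternative to the loop above that merges only once;
# the per-genre means should match the table up to rounding.)
merged = ratings.merge(movies[['movieId', 'genres']], on='movieId', how='inner')
fast_means = {g: round(merged.loc[merged['genres'].str.contains(g), 'rating'].mean(), 2)
              for g in genres}
fast_means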
genres_rating['Genres Standard Deviation'] = genres_rating['Genres Mean Rating'].std()
genres_rating['Mean'] = genres_rating['Genres Mean Rating'].mean()
genres_rating['Zero'] = 0
genres_rating
overall_mean = round(genres_rating['Genres Mean Rating'].mean(), 2)
overall_std = round(genres_rating['Genres Mean Rating'].std(),2)
scifi_rating = genres_rating[genres_rating['Genre'] == 'Sci-Fi']['Genres Mean Rating']
print(overall_mean)
print(overall_std)
print(scifi_rating)
genres_rating['Diff from Mean'] = genres_rating['Genres Mean Rating'] - overall_mean
genres_rating
genre_list = list(genres_rating['Genre'])
genres_rating_list = list(genres_rating['Genres Mean Rating'])
genres_diff_list = list(genres_rating['Diff from Mean'])
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure(figsize=(20, 10))
ax1 = plt.subplot(2,1,1)
x = [x for x in range(0, 18)]
xticks_genre_list = genre_list
y = genres_rating_list
plt.xticks(range(len(x)), xticks_genre_list)
plt.scatter(x,y, color='g')
plt.plot(x, genres_rating['Mean'], color="red")
plt.autoscale(tight=True)
#plt.rcParams["figure.figsize"] = (10,2)
plt.title('Movie ratings by genre')
plt.xlabel('Genre')
plt.ylabel('Rating')
plt.ylim(ymax = 4, ymin = 3)
plt.grid(True)
plt.savefig(r'movie-ratings-by-genre.png')
plt.annotate("Sci-Fi Rating",
xy=(14.25,3.5), xycoords='data',
xytext=(14.20, 3.7), textcoords='data',
arrowprops=dict(arrowstyle="->",
connectionstyle="arc3"),
)
for i,j in enumerate( y ):
ax1.annotate( j, ( x[i] + 0.03, y[i] + 0.02))
ax2 = plt.subplot(2,1,2)
x = [x for x in range(0, 18)]
xticks_genre_list = genre_list
y = genres_rating['Diff from Mean']
plt.xticks(range(len(x)), xticks_genre_list)
plt.plot(x,y)
plt.plot(x, genres_rating['Zero'])
plt.autoscale(tight=True)
#plt.rcParams["figure.figsize"] = (10,2)
plt.title('Deviation of each genre\'s rating from the overall mean rating')
plt.xlabel('Genre')
plt.ylabel('Deviation from mean rating')
plt.grid(True)
plt.savefig(r'deviation-from-mean-rating.png')
plt.annotate("Sci-Fi Rating",
xy=(14,-0.13), xycoords='data',
xytext=(14.00, 0.0), textcoords='data',
arrowprops=dict(arrowstyle="->",
connectionstyle="arc3"),
)
plt.show()
# extract year of release of each movie from the title column
# convert the data type of the movie_year column to numeric (from str)
import numpy as np
import re
movies['movie_year'] = movies['title']
movies['movie_year'] = movies['movie_year'].str.extract(r"\(([0-9]+)\)", expand=False)
# creating a new column with just the movie titles
movies['title_only'] = movies['title']
movies['title_only'] = movies['title_only'].str.extract('(.*?)\s*\(', expand=False)
movies['movie_year'].fillna(0, inplace=True)
#Drop all rows containing incorrect year values - such as 0, 6, 69, 500 and -2147483648
movies.drop(movies[movies.movie_year == '0'].index, inplace=True)
movies.drop(movies[movies.movie_year == '6'].index, inplace=True)
movies.drop(movies[movies.movie_year == '06'].index, inplace=True)
movies.drop(movies[movies.movie_year == '69'].index, inplace=True)
movies.drop(movies[movies.movie_year == '500'].index, inplace=True)
movies.drop(movies[movies.movie_year == '-2147483648'].index, inplace=True)
movies.drop(movies[movies.movie_year == 0].index, inplace=True)
movies.drop(movies[movies.movie_year == 6].index, inplace=True)
movies.drop(movies[movies.movie_year == 69].index, inplace=True)
movies.drop(movies[movies.movie_year == 500].index, inplace=True)
movies.drop(movies[movies.movie_year == -2147483648].index, inplace=True)
#convert the string values to numeric
movies['movie_year'] = pd.to_datetime(movies['movie_year'], format='%Y')
movie_year = pd.DataFrame(movies['title_only'].groupby(movies['movie_year']).count())
movie_year.reset_index(inplace=True)
X=movie_year['movie_year']
Y=movie_year['title_only']
plt.plot_date(X,Y,'bo-')
plt.grid(True)
plt.rcParams["figure.figsize"] = (15,5)
plt.title('Number of movies per year')
plt.xlabel('Years')
plt.ylabel('Number of movies')
plt.xlim('1885-01-01','2020-01-01')
plt.show()
movies.head()
list(movies)
a = pd.Series(movies.iloc[0])
a
def flat(str1):
c = pd.DataFrame(columns=list(movies))
for i in range(len(str1)):
#print(str1[i])
if i == 2:
a = str1[i].split('|')
for j in range(len(a)):
c.loc[j] = [str1[0], str1[1], a[j], str1[3], str1[4]]
return c
c = flat(a)
c
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TensorFlow Addons Image: Operations
Step2: Prepare and inspect images
Step3: Inspect the image
Step4: Make a black-and-white version
Step5: Using tfa.image
Step6: Rotate
Step7: Transform
Step8: Random HSV in YIQ
Step9: Adjust HSV in YIQ
Step10: Dense image warp
Step11: Euclidean distance transform
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -U tensorflow-addons
import tensorflow as tf
import numpy as np
import tensorflow_addons as tfa
import matplotlib.pyplot as plt
img_path = tf.keras.utils.get_file('tensorflow.png','https://tensorflow.org/images/tf_logo.png')
img_raw = tf.io.read_file(img_path)
img = tf.io.decode_image(img_raw)
img = tf.image.convert_image_dtype(img, tf.float32)
img = tf.image.resize(img, [500,500])
plt.title("TensorFlow Logo with shape {}".format(img.shape))
_ = plt.imshow(img)
bw_img = 1.0 - tf.image.rgb_to_grayscale(img)
plt.title("Mask image with shape {}".format(bw_img.shape))
_ = plt.imshow(bw_img[...,0], cmap='gray')
mean = tfa.image.mean_filter2d(img, filter_shape=11)
_ = plt.imshow(mean)
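# (Added illustration -- a minimal sketch.) tfa.image also provides a median
# filter with the same interface, which is more robust to outlier pixels.
median = tfa.image.median_filter2d(img, filter_shape=11)
_ = plt.imshow(median)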
rotate = tfa.image.rotate(img, tf.constant(np.pi/8))
_ = plt.imshow(rotate)
transform = tfa.image.transform(img, [1.0, 1.0, -250, 0.0, 1.0, 0.0, 0.0, 0.0])
_ = plt.imshow(transform)
delta = 0.5
lower_saturation = 0.1
upper_saturation = 0.9
lower_value = 0.2
upper_value = 0.8
rand_hsvinyiq = tfa.image.random_hsv_in_yiq(img, delta, lower_saturation, upper_saturation, lower_value, upper_value)
_ = plt.imshow(rand_hsvinyiq)
delta = 0.5
saturation = 0.3
value = 0.6
adj_hsvinyiq = tfa.image.adjust_hsv_in_yiq(img, delta, saturation, value)
_ = plt.imshow(adj_hsvinyiq)
input_img = tf.image.convert_image_dtype(tf.expand_dims(img, 0), tf.dtypes.float32)
flow_shape = [1, input_img.shape[1], input_img.shape[2], 2]
init_flows = np.float32(np.random.normal(size=flow_shape) * 2.0)
dense_img_warp = tfa.image.dense_image_warp(input_img, init_flows)
dense_img_warp = tf.squeeze(dense_img_warp, 0)
_ = plt.imshow(dense_img_warp)
gray = tf.image.convert_image_dtype(bw_img,tf.uint8)
# The op expects a batch of images, so add a batch dimension
gray = tf.expand_dims(gray, 0)
eucid = tfa.image.euclidean_dist_transform(gray)
eucid = tf.squeeze(eucid, (0, -1))
_ = plt.imshow(eucid, cmap='gray')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: CONTENTS
Step2: The first parameter of the constructor varname has a default value of '?'. So if the name is not passed it defaults to ?. The keyword argument freqs can be a dictionary of values of random variable
Step3: Besides the prob and varname, the object also separately keeps track of all the values of the distribution in a list called values. Every time a new value is assigned a probability it is appended to this list; this is done inside the __setitem__ method.
Step4: The distribution by default is not normalized if values are added incrementally. We can still force normalization by invoking the normalize method.
Step5: It is also possible to display the approximate values, rounded to a few decimal places, using the show_approx method.
Step6: Joint Probability Distribution
Step7: A probability model is completely determined by the joint distribution for all of the random variables. (Section 13.3) The probability module implements these as the class JointProbDist which inherits from the ProbDist class. This class specifies a discrete probability distribution over a set of variables.
Step8: Values for a Joint Distribution are an ordered tuple in which each item corresponds to the value associated with a particular variable. For a Joint Distribution of X, Y where X, Y take integer values this can be something like (18, 19).
Step9: Like the ProbDist class, JointProbDist also employs magic methods to assign probability to different values.
Step10: It is also possible to list all the values for a particular variable using the values method.
Step11: Inference Using Full Joint Distributions
Step12: Let us now look at the enumerate_joint function, which returns the sum of those entries in P consistent with e, provided variables is P's remaining variables (the ones not in e). Here, P refers to the full joint distribution. The function uses a recursive call in its implementation. The first parameter variables refers to the remaining variables. In each recursive call the function keeps one variable constant while varying the others.
Step13: Let us assume we want to find P(Toothache=True). This can be obtained by marginalization (Equation 13.6). We can use enumerate_joint to solve for this by taking Toothache=True as our evidence. enumerate_joint will return the sum of probabilities consistent with evidence i.e. Marginal Probability.
Step14: You can verify the result from our definition of the full joint distribution. We can use the same function to find more complex probabilities like P(Cavity=True and Toothache=True)
Step15: Being able to find sum of probabilities satisfying given evidence allows us to compute conditional probabilities like P(Cavity=True | Toothache=True) as we can rewrite this as $$P(Cavity=True | Toothache = True) = \frac{P(Cavity=True \ and \ Toothache=True)}{P(Toothache=True)}$$
Step16: We might be interested in the probability distribution of a particular variable conditioned on some evidence. This can involve doing calculations like above for each possible value of the variable. This has been implemented slightly differently using normalization in the function enumerate_joint_ask, which returns a probability distribution over the values of the variable X, given the {var: val} observations e, in the full joint distribution.
Step17: Let us find P(Cavity | Toothache=True) using enumerate_joint_ask.
Step18: You can verify that the first value is the same as we obtained earlier by manual calculation.
Step19: The constructor takes in the name of variable, parents and cpt. Here variable is the name of the variable like 'Earthquake'. parents should be a list or space-separated string with the variable names of the parents. The conditional probability table is a dict {(v1, v2, ...): p, ...} mapping each combination of parent values to the probability that the variable is True.
Step20: It is possible to avoid using a tuple when there is only a single parent. So an alternative format for the cpt is
Step21: The general format used for the alarm node always holds. For nodes with no parents we can also use.
Step22: It is possible to use the node for lookups using the p method. The method takes in two arguments, value and event. Event must be a dict of the type {variable: value, ...} giving values for the node's parent variables; p then returns the conditional probability P(X=value | parents=parent values in event).
Step23: With all the information about nodes present it is possible to construct a Bayes Network using BayesNet. The BayesNet class does not take in nodes as input but instead takes a list of node_specs. An entry in node_specs is a tuple of the parameters we use to construct a BayesNode namely (X, parents, cpt). node_specs must be ordered with parents before children.
Step24: The constructor of BayesNet takes each item in node_specs and adds a BayesNode to its nodes object variable by calling the add method. add in turn adds node to the net. Its parents must already be in the net, and its variable must not. Thus add allows us to grow a BayesNet given its parents are already present.
Step25: BayesNet method variable_node allows to reach BayesNode instances inside a Bayes Net. It is possible to modify the cpt of the nodes directly using this method.
Step26: Exact Inference in Bayesian Networks
Step27: enumerate_all recursively evaluates a general form of the Equation 14.4 in the book.
Step28: Let us solve the problem of finding out P(Burglary=True | JohnCalls=True, MaryCalls=True) using the burglary network. enumeration_ask takes three arguments X = variable name, e = Evidence (in form a dict like previously explained), bn = The Bayes Net to do inference on.
Step29: Variable Elimination
Step30: make_factor is used to create the cpt and variables that will be passed to the constructor of Factor. We use make_factor for each variable. It takes in the arguments var the particular variable, e the evidence we want to do inference on, bn the bayes network.
Step31: The all_events function is a recursive generator function which yields a key for the original cpt which is part of the node. This works by extending the evidence related to the node, so all the output from all_events only includes events that support the evidence. Since all_events is a generator function, one such event is returned on every call.
Step32: Here f5.cpt False key gives probability for P(MaryCalls=True | Alarm = False). Due to our representation where we only store probabilities for only in cases where the node variable is True this is the same as the cpt of the BayesNode. Let us try a somewhat different example from the book where evidence is that the Alarm = True
Step33: Here the cpt is for P(MaryCalls | Alarm = True). Therefore the probabilities for True and False sum up to one. Note the difference between both the cases. Again the only rows included are those consistent with the evidence.
Step34: Factor.pointwise_product implements a method of creating a joint via combining two factors. We take the union of the variables of both factors and then generate the cpt for the new factor using the all_events function. Note that, given the evidence, we have already eliminated rows that are not consistent with it. Pointwise product assigns new probabilities by multiplying rows, similar to a join in a database.
Step35: pointwise_product extends this operation to more than two operands where it is done sequentially in pairs of two.
Step36: Factor.sum_out makes a factor eliminating a variable by summing over its values. Again events_all is used to generate combinations for the rest of the variables.
Step37: sum_out uses both Factor.sum_out and pointwise_product to finally eliminate a particular variable from all factors by summing over its values.
Step38: Elimination Ask Optimizations
Step39: In this test case we observe that variable elimination is slower than we expected. This has to do with the number of threads and how Python tries to optimize things, and it happens because the network is very small, with just 5 nodes. elimination_ask only pays off past a certain problem size, and some optimizations must be performed, as seen above.
Step40: Before we consider the different algorithms in this section, let us look at the BayesNode.sample method. It samples from the distribution for this variable conditioned on event's values for parent_variables. That is, it returns True/False at random according to the conditional probability given the parents. The probability function is a simple helper from the utils module which returns True with the probability passed to it.
Step41: The function prior_sample implements the algorithm described in Figure 14.13 of the book. Nodes are sampled in the topological order. The old value of the event is passed as evidence for parent values. We will use the Bayesian Network in Figure 14.12 to try out the prior_sample
Step42: Now we filter to get the observations where Rain = True
Step43: Finally, we can find P(Rain=True)
Step44: Sampling this another time might give different results as we have no control over the distribution of the random samples
Step45: To evaluate a conditional distribution, we can use a two-step filtering process. We first separate out the observations that are consistent with the evidence. Then, for each value of the query variable, we can find probabilities. For example, to find P(Cloudy=True | Rain=True): we have already filtered out the observations consistent with our evidence in rain_true, and now we apply a second filtering step on rain_true to find P(Rain=True and Cloudy=True)
Step46: Rejection Sampling
Step47: The function keeps counts of each of the possible values of the Query variable and increases the count when we see an observation consistent with the evidence. It takes in input parameters X - The Query Variable, e - evidence, bn - Bayes net and N - number of prior samples to generate.
Step48: To answer P(Cloudy=True | Rain=True)
Step49: Likelihood Weighting
Step50: weighted_sample samples an event from Bayesian Network that's consistent with the evidence e and returns the event and its weight, the likelihood that the event accords to the evidence. It takes in two parameters bn the Bayesian Network and e the evidence.
Step51: likelihood_weighting implements the algorithm to solve our inference problem. The code is similar to rejection_sampling, but instead of adding one for each sample we add the weight obtained from weighted_sample.
Step52: Gibbs Sampling
Step53: In gibbs_ask we initialize the non-evidence variables to random values. And then select non-evidence variables and sample it from P(Variable | value in the current state of all remaining vars) repeatedly sample. In practice, we speed this up by using markov_blanket_sample instead. This works because terms not involving the variable get canceled in the calculation. The arguments for gibbs_ask are similar to likelihood_weighting
Step54: Runtime analysis
Step55: As expected, all algorithms have a very similar runtime.
Step56: We instantiate the object hmm of the class using a list of lists for both the transition and the sensor model.
Step57: The sensor_dist() method returns a list with the conditional probabilities of the sensor model.
Step58: Now that we have defined an HMM object, our task here is to compute the belief $B_{t}(x)= P(X_{t}|U_{1:t})$, i.e. the probability of the state at time t given all the umbrella observations up to that time.
Step59: In Day 2 our initial belief is the updated belief of Day 1.
Step60: In the smoothing part we are interested in computing the distribution over past states given evidence up to the present. Assume that we want to compute the distribution for the time k, for $0\leq k<t $, the computation can be divided in two parts
Step61: Some may notice that the result is not the same as in the book. The main reason is that in the book the normalization step is not used. If we want to normalize the result, one can use the normalize() helper function.
Step62: Since HMMs are represented as single variable systems, we can represent the transition model and sensor model as matrices.
Step63: This algorithm applies forward as usual and optimizes the smoothing step by using the equations above.
Step64: Given evidence T, F, T, F and T, we want to calculate the probability distribution for the fourth day with a fixed lag of 2 days.
Step65: We cannot calculate probability distributions when $t$ is less than $d$
Step66: As expected, the output is None
Step67: Here, scalar_vector_product and vector_add are helper functions to help with vector math and weighted_sample_with_replacement resamples from a weighted sample and replaces the original sample, as is obvious from the name.
Step68: We got 5 samples from state A and 5 samples from state B
Step69: This time we got 2 samples from state A and 8 samples from state B
Step70: Our implementation of Monte Carlo Localization uses the range scan method.
Step72: Let's define the motion model as a function P_motion_sample.
Step74: Define the sensor model as a function P_sensor.
Step75: Initializing variables.
Step76: Let's run monte_carlo_localization with these parameters to find a sample distribution S.
Step77: Let's plot the values in the sample distribution S.
Step78: The distribution is highly concentrated at (5, 3), but the robot is not very confident about its position as some other cells also have high probability values.
Step79: In this case, the robot is 99.9% certain that it is at position (6, 7).
Step80: The DTAgentProgram function is pretty self-explanatory.
Step81: The DecisionNetwork class inherits from BayesNet and has a few extra helper methods.
|
<ASSISTANT_TASK:>
Python Code:
from probability import *
from utils import print_table
from notebook import psource, pseudocode, heatmap
psource(ProbDist)
p = ProbDist('Flip')
p['H'], p['T'] = 0.25, 0.75
p['T']
p = ProbDist(freqs={'low': 125, 'medium': 375, 'high': 500})
p.varname
(p['low'], p['medium'], p['high'])
p.values
p = ProbDist('Y')
p['Cat'] = 50
p['Dog'] = 114
p['Mice'] = 64
(p['Cat'], p['Dog'], p['Mice'])
p.normalize()
(p['Cat'], p['Dog'], p['Mice'])
p.show_approx()
event = {'A': 10, 'B': 9, 'C': 8}
variables = ['C', 'A']
event_values(event, variables)
psource(JointProbDist)
variables = ['X', 'Y']
j = JointProbDist(variables)
j
j[1,1] = 0.2
j[dict(X=0, Y=1)] = 0.5
(j[1,1], j[0,1])
j.values('X')
full_joint = JointProbDist(['Cavity', 'Toothache', 'Catch'])
full_joint[dict(Cavity=True, Toothache=True, Catch=True)] = 0.108
full_joint[dict(Cavity=True, Toothache=True, Catch=False)] = 0.012
full_joint[dict(Cavity=True, Toothache=False, Catch=True)] = 0.016
full_joint[dict(Cavity=True, Toothache=False, Catch=False)] = 0.064
full_joint[dict(Cavity=False, Toothache=True, Catch=True)] = 0.072
full_joint[dict(Cavity=False, Toothache=False, Catch=True)] = 0.144
full_joint[dict(Cavity=False, Toothache=True, Catch=False)] = 0.008
full_joint[dict(Cavity=False, Toothache=False, Catch=False)] = 0.576
psource(enumerate_joint)
evidence = dict(Toothache=True)
variables = ['Cavity', 'Catch'] # variables not part of evidence
ans1 = enumerate_joint(variables, evidence, full_joint)
ans1
evidence = dict(Cavity=True, Toothache=True)
variables = ['Catch'] # variables not part of evidence
ans2 = enumerate_joint(variables, evidence, full_joint)
ans2
ans2/ans1
psource(enumerate_joint_ask)
query_variable = 'Cavity'
evidence = dict(Toothache=True)
ans = enumerate_joint_ask(query_variable, evidence, full_joint)
(ans[True], ans[False])
psource(BayesNode)
alarm_node = BayesNode('Alarm', ['Burglary', 'Earthquake'],
{(True, True): 0.95,(True, False): 0.94, (False, True): 0.29, (False, False): 0.001})
john_node = BayesNode('JohnCalls', ['Alarm'], {True: 0.90, False: 0.05})
mary_node = BayesNode('MaryCalls', 'Alarm', {(True, ): 0.70, (False, ): 0.01}) # Using string for parents.
# Equivalent to the john_node definition.
burglary_node = BayesNode('Burglary', '', 0.001)
earthquake_node = BayesNode('Earthquake', '', 0.002)
john_node.p(False, {'Alarm': True, 'Burglary': True}) # P(JohnCalls=False | Alarm=True)
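# Another lookup on the same nodes: P(MaryCalls=True | Alarm=False),
# read straight from mary_node's conditional probability table (0.01 as defined above).
mary_node.p(True, {'Alarm': False})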
psource(BayesNet)
burglary
type(burglary.variable_node('Alarm'))
burglary.variable_node('Alarm').cpt
psource(enumerate_all)
psource(enumeration_ask)
ans_dist = enumeration_ask('Burglary', {'JohnCalls': True, 'MaryCalls': True}, burglary)
ans_dist[True]
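# The full normalized distribution can also be displayed with show_approx()
ans_dist.show_approx()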
psource(make_factor)
psource(all_events)
f5 = make_factor('MaryCalls', {'JohnCalls': True, 'MaryCalls': True}, burglary)
f5
f5.cpt
f5.variables
new_factor = make_factor('MaryCalls', {'Alarm': True}, burglary)
new_factor.cpt
psource(Factor.pointwise_product)
psource(pointwise_product)
psource(Factor.sum_out)
psource(sum_out)
psource(elimination_ask)
elimination_ask('Burglary', dict(JohnCalls=True, MaryCalls=True), burglary).show_approx()
%%timeit
enumeration_ask('Burglary', dict(JohnCalls=True, MaryCalls=True), burglary).show_approx()
%%timeit
elimination_ask('Burglary', dict(JohnCalls=True, MaryCalls=True), burglary).show_approx()
psource(BayesNode.sample)
psource(prior_sample)
N = 1000
all_observations = [prior_sample(sprinkler) for x in range(N)]
rain_true = [observation for observation in all_observations if observation['Rain'] == True]
answer = len(rain_true) / N
print(answer)
N = 1000
all_observations = [prior_sample(sprinkler) for x in range(N)]
rain_true = [observation for observation in all_observations if observation['Rain'] == True]
answer = len(rain_true) / N
print(answer)
rain_and_cloudy = [observation for observation in rain_true if observation['Cloudy'] == True]
answer = len(rain_and_cloudy) / len(rain_true)
print(answer)
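# For the standard sprinkler network the exact posterior is
# P(Cloudy=True | Rain=True) = (0.8 * 0.5) / 0.5 = 0.8, so the estimate should be near 0.8.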
psource(rejection_sampling)
psource(consistent_with)
p = rejection_sampling('Cloudy', dict(Rain=True), sprinkler, 1000)
p[True]
psource(weighted_sample)
weighted_sample(sprinkler, dict(Rain=True))
psource(likelihood_weighting)
likelihood_weighting('Cloudy', dict(Rain=True), sprinkler, 200).show_approx()
psource(gibbs_ask)
gibbs_ask('Cloudy', dict(Rain=True), sprinkler, 200).show_approx()
%%timeit
all_observations = [prior_sample(sprinkler) for x in range(1000)]
rain_true = [observation for observation in all_observations if observation['Rain'] == True]
len([observation for observation in rain_true if observation['Cloudy'] == True]) / len(rain_true)
%%timeit
rejection_sampling('Cloudy', dict(Rain=True), sprinkler, 1000)
%%timeit
likelihood_weighting('Cloudy', dict(Rain=True), sprinkler, 200)
%%timeit
gibbs_ask('Cloudy', dict(Rain=True), sprinkler, 200)
psource(HiddenMarkovModel)
umbrella_transition_model = [[0.7, 0.3], [0.3, 0.7]]
umbrella_sensor_model = [[0.9, 0.2], [0.1, 0.8]]
hmm = HiddenMarkovModel(umbrella_transition_model, umbrella_sensor_model)
hmm.sensor_dist(ev=True)
psource(forward)
umbrella_prior = [0.5, 0.5]
belief_day_1 = forward(hmm, umbrella_prior, ev=True)
print ('The probability of raining on day 1 is {:.2f}'.format(belief_day_1[0]))
belief_day_2 = forward(hmm, belief_day_1, ev=True)
print ('The probability of raining in day 2 is {:.2f}'.format(belief_day_2[0]))
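# forward() is applied recursively: the filtered belief from day 1 serves as the
# prior for day 2, combining the transition model with the new umbrella evidence.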
psource(backward)
b = [1, 1]
backward(hmm, b, ev=True)
pseudocode('Forward-Backward')
umbrella_prior = [0.5, 0.5]
prob = forward_backward(hmm, ev=[T, T], prior=umbrella_prior)
print ('The probability of raining in Day 0 is {:.2f} and in Day 1 is {:.2f}'.format(prob[0][0], prob[1][0]))
psource(fixed_lag_smoothing)
umbrella_transition_model = [[0.7, 0.3], [0.3, 0.7]]
umbrella_sensor_model = [[0.9, 0.2], [0.1, 0.8]]
hmm = HiddenMarkovModel(umbrella_transition_model, umbrella_sensor_model)
e_t = F
evidence = [T, F, T, F, T]
fixed_lag_smoothing(e_t, hmm, d=2, ev=evidence, t=4)
e_t = T
evidence = [T, T, F, T, T]
fixed_lag_smoothing(e_t, hmm, d=1, ev=evidence, t=4)
fixed_lag_smoothing(e_t, hmm, d=5, ev=evidence, t=4)
psource(particle_filtering)
umbrella_transition_model = [[0.7, 0.3], [0.3, 0.7]]
umbrella_sensor_model = [[0.9, 0.2], [0.1, 0.8]]
hmm = HiddenMarkovModel(umbrella_transition_model, umbrella_sensor_model)
particle_filtering(T, 10, hmm)
particle_filtering([F, T, F, F, T], 10, hmm)
psource(monte_carlo_localization)
m = MCLmap([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0],
[0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0]])
heatmap(m.m, cmap='binary')
def P_motion_sample(kin_state, v, w):
    """Sample from possible kinematic states.
    Returns from a single-element distribution (no uncertainty in motion)."""
pos = kin_state[:2]
orient = kin_state[2]
# for simplicity the robot first rotates and then moves
orient = (orient + w)%4
for _ in range(orient):
v = (v[1], -v[0])
pos = vector_add(pos, v)
return pos + (orient,)
def P_sensor(x, y):
    """Conditional probability for sensor reading."""
# Need not be exact probability. Can use a scaled value.
if x == y:
return 0.8
elif abs(x - y) <= 2:
return 0.05
else:
return 0
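# monte_carlo_localization uses P_sensor to weight each particle by how well the
# expected reading at its pose matches the observed reading z, then resamples
# particles in proportion to those weights.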
a = {'v': (0, 0), 'w': 0}
z = (2, 4, 1, 6)
S = monte_carlo_localization(a, z, 1000, P_motion_sample, P_sensor, m)
grid = [[0]*17 for _ in range(11)]
for x, y, _ in S:
if 0 <= x < 11 and 0 <= y < 17:
grid[x][y] += 1
print("GRID:")
print_table(grid)
heatmap(grid, cmap='Oranges')
a = {'v': (0, 1), 'w': 0}
z = (2, 3, 5, 7)
S = monte_carlo_localization(a, z, 1000, P_motion_sample, P_sensor, m, S)
grid = [[0]*17 for _ in range(11)]
for x, y, _ in S:
if 0 <= x < 11 and 0 <= y < 17:
grid[x][y] += 1
print("GRID:")
print_table(grid)
heatmap(grid, cmap='Oranges')
psource(DTAgentProgram)
psource(DecisionNetwork)
psource(InformationGatheringAgent)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: $$\log(a) + \log(b) = \log(ab)$$
|
<ASSISTANT_TASK:>
Python Code:
import math
print(math.log(0.2) + math.log(0.5))
print(math.log(0.2 * 0.5))
print(math.log(math.exp(0.2) * math.exp(0.7)))
print(0.2 + 0.7)
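# Both pairs print the same value, illustrating log(a) + log(b) = log(a*b)
# and log(exp(a) * exp(b)) = a + b.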
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Give the Hist mark the data you want to bin as the sample argument, and also give it 'x' and 'y' scales.
Step2: The midpoints of the resulting bins and their number of elements can be recovered via the read-only traits x and y
Step3: Tuning the bins
Step4: Histogram Styling
|
<ASSISTANT_TASK:>
Python Code:
# Create a sample of Gaussian draws
np.random.seed(0)
x_data = np.random.randn(1000)
x_sc = LinearScale()
y_sc = LinearScale()
hist = Bins(sample=x_data, scales={'x': x_sc, 'y': y_sc}, padding=0,)
ax_x = Axis(scale=x_sc, tick_format='0.2f')
ax_y = Axis(scale=y_sc, orientation='vertical')
Figure(marks=[hist], axes=[ax_x, ax_y], padding_y=0)
hist.x, hist.y
x_sc = LinearScale()
y_sc = LinearScale()
hist = Bins(sample=x_data, scales={'x': x_sc, 'y': y_sc}, padding=0,)
ax_x = Axis(scale=x_sc, tick_format='0.2f')
ax_y = Axis(scale=y_sc, orientation='vertical')
Figure(marks=[hist], axes=[ax_x, ax_y], padding_y=0)
# Changing the number of bins
hist.bins = 'sqrt'
# Changing the range
hist.min = 0
# Normalizing the count
x_sc = LinearScale()
y_sc = LinearScale()
hist = Bins(sample=x_data, scales={'x': x_sc, 'y': y_sc}, density=True)
ax_x = Axis(scale=x_sc, tick_format='0.2f')
ax_y = Axis(scale=y_sc, orientation='vertical')
Figure(marks=[hist], axes=[ax_x, ax_y], padding_y=0)
# changing the color
hist.colors=['orangered']
# stroke and opacity update
hist.stroke = 'orange'
hist.opacities = [0.5] * len(hist.x)
# Laying the histogram on its side
hist.orientation = 'horizontal'
ax_x.orientation = 'vertical'
ax_y.orientation = 'horizontal'
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: PCA
Step2: If identifiers are plotted on the scatter plot
Step3: Conclusions
Step4: Scaling Features
Step5: t-SNE
Step6: LAMOST
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import h5py
import numpy as np
import sklearn.preprocessing
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import astropy.io.fits as fits
import matplotlib
import matplotlib.pyplot as plt
import spectraldl.ondrejov as ondrejov
import spectraldl.preprocessing as preprocessing
with h5py.File('data/data.hdf5') as f:
X = f['X'][...]
y = f['y'][...]
def plot_scatter(X, y):
    '''Plot scatter plot of points from X. X is of shape (n_samples, 2).'''
fig, ax = plt.subplots()
sc = ax.scatter(X[:, 0], X[:, 1], c=y, alpha=0.25)
fig.colorbar(sc)
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)
print('explained variance: {}%'.format(np.sum(pca.explained_variance_ratio_) * 100))
plot_scatter(X_pca, y)
fig, (ax1, ax2) = plt.subplots(2, 1)
with fits.open('data/ondrejov/ui300025.fits') as hdulist:
ondrejov.plot_spectrum(hdulist, ax1)
ax1.set_title('max flux: ' + str(np.max(ondrejov.get_fluxes(hdulist))))
with fits.open('data/ondrejov//td260020.fits') as hdulist:
ondrejov.plot_spectrum(hdulist, ax2)
ax2.set_title('max flux: ' + str(np.max(ondrejov.get_fluxes(hdulist))))
fig.tight_layout()
# minmax scale to range (-1, 1)
# axis=1 means to scale individual samples
X_minmax = sklearn.preprocessing.minmax_scale(X, feature_range=(-1, 1), axis=1)
X_minmax_pca = PCA(n_components=140)
plot_scatter(X_minmax_pca.fit_transform(X_minmax), y)
X_scale = sklearn.preprocessing.scale(X, axis=1)
X_scale_pca_model = PCA(n_components=140)
X_scale_pca = X_scale_pca_model.fit_transform(X_scale)
plot_scatter(X_scale_pca, y)
np.max(X), np.min(X), np.max(X_scale), np.min(X_scale)
fig, axs = plt.subplots(2, 2)
ax1, ax2, ax3, ax4 = axs.ravel()
for ax in axs.ravel():
ax.set_ylabel('number of spectra')
ax.set_xlabel('flux')
ax1.set_title('maxima of original spectra')
ax1.hist(np.max(X, axis=1), log=True)
ax2.set_title('minima of original spectra')
ax2.hist(np.min(X, axis=1), log=True)
ax3.set_title('maxima of scaled spectra')
ax3.hist(np.max(X_scale, axis=1), log=True)
ax4.set_title('minima of scaled spectra')
ax4.hist(np.min(X_scale, axis=1), log=True)
fig.tight_layout()
fig, ax = plt.subplots()
ax.set_title('explained variance ratio')
ax.set_ylabel('variance ration')
ax.set_xlabel('principal components')
cut = 10
xticks = np.arange(1, cut + 1)
ax.set_xticks(xticks)
ax.bar(xticks, X_scale_pca_model.explained_variance_ratio_[:cut]);
# each class plotted individually
cmap = plt.get_cmap('viridis')
norm = matplotlib.colors.Normalize(vmin=0, vmax=2)
fig, axs = plt.subplots(1, 3)
titles = ['emission', 'absorption', 'double-peak']
labels = [0, 1, 2]
colors = [cmap(norm(l)) for l in labels]
for title, label, ax, color in zip(titles, labels, axs, colors):
ax.set_title(title)
ax.scatter(X_scale_pca[y == label][:, 0], X_scale_pca[y == label][:, 1], alpha=0.25, c=color)
# publication plot
fig, (ax1, ax2) = plt.subplots(1, 2)
cmap = plt.get_cmap('viridis')
norm = matplotlib.colors.Normalize(vmin=0, vmax=2)
for label, name in zip([0, 2, 1], ['emission', 'double-peak', 'absorption']):
idx = y == label
color = cmap(norm(label))
ax1.scatter(X_pca[:, 0][idx], X_pca[:, 1][idx], c=color, alpha=0.5, label=name)
ax2.scatter(X_scale_pca[:, 0][idx], X_scale_pca[:, 1][idx], c=color, alpha=0.5, label=name)
ax1.set_title('original dataset')
ax1.set_xlabel('PC1')
ax1.set_ylabel('PC2')
ax1.legend(loc='lower right')
ax2.set_title('dataset with scaled samples')
ax2.set_xlabel('PC1')
ax2.set_ylabel('PC2')
ax2.legend(loc='lower right')
fig.tight_layout();
# publication plot
fig, ax = plt.subplots()
cmap = plt.get_cmap('viridis')
norm = matplotlib.colors.Normalize(vmin=0, vmax=2)
for label, name in zip([0, 2, 1], ['emission', 'double-peak', 'absorption']):
idx = y == label
color = cmap(norm(label))
ax.scatter(X_pca[:, 0][idx], X_pca[:, 1][idx], c=color, alpha=0.5, label=name)
ax.set_title('dataset with scaled samples')
ax.set_xlabel('PC1')
ax.set_ylabel('PC2')
ax.legend(loc='lower right')
fig.tight_layout();
# minmax scale to range (-1, 1)
X_f_minmax = sklearn.preprocessing.minmax_scale(X, feature_range=(-1, 1))
X_f_minmax_pca = PCA(n_components=2).fit_transform(X_f_minmax)
plot_scatter(X_f_minmax_pca, y)
X_f_scale = sklearn.preprocessing.scale(X)
X_f_scale_pca = PCA(n_components=2).fit_transform(X_f_scale)
plot_scatter(X_f_scale_pca, y)
# firstly reduce to 30 dimensionsions with PCA as in t-SNE paper
X_pca = PCA(n_components=30).fit_transform(X_scale)
tsne = TSNE(
n_components=2, # 2D out array
#perplexity=40, # should be 5-50
#learning_rate=100, # should be 100-1000
init='pca', # start with PCA position
verbose=2,
)
X_tsne = tsne.fit_transform(X_pca)
# publication plot
fig, ax = plt.subplots()
cmap = plt.get_cmap('viridis')
norm = matplotlib.colors.Normalize(vmin=0, vmax=2)
for label, name in enumerate(['emission', 'absorption', 'double-peak']):
idx = y == label
color = cmap(norm(label))
ax.scatter(X_tsne[:, 0][idx], X_tsne[:, 1][idx], c=color, alpha=0.5, label=name)
ax.set_title('t-SNE')
ax.legend()
ax.tick_params(which='both', bottom=False, left=False, labelbottom=False, labelleft=False)
def sample_lamost(size):
with h5py.File('data/data.hdf5') as f:
# I found this random sampling fastest for h5py
# bool index array of False values
idx = np.zeros((f['X_lam'].shape[0], ), dtype=np.bool)
# set appropriate number of indexes to True
idx[:size] = True
# randomly shuffle the index
np.random.shuffle(idx)
X_lam = f['X_lam'][idx, :]
return X_lam
X_lam = sample_lamost(5000)
def foo_matrix(X_lam, X_ond, y_ond):
size_lam, size_ond = X_lam.shape[0], X_ond.shape[0]
size = size_lam + size_ond
X = np.zeros((size, 140), dtype=np.float64)
y = np.zeros((size, ), dtype=np.int8)
X[:size_lam, :] = X_lam
y[:size_lam] = -1
X[size_lam:, :] = X_ond
y[size_lam:] = y_ond
return X, y
X, y = sklearn.utils.shuffle(X, y)
sample_size = 1000
X, y = X[:sample_size], y[:sample_size]
X_all, y_all = foo_matrix(X_lam, X, y)
pca = PCA(n_components=30)
X_all_scaled = preprocessing.scale_samples(X_all)
X_all_pca = pca.fit_transform(X_all_scaled)
# publication plot
fig, ax = plt.subplots()
cmap = plt.get_cmap('viridis')
norm = matplotlib.colors.Normalize(vmin=-1, vmax=2)
for label, name in zip([-1, 1, 0, 2], ['lamost', 'absorption', 'emission', 'double-peak']):
idx = y_all == label
color = cmap(norm(label))
ax.scatter(X_all_pca[:, 0][idx], X_all_pca[:, 1][idx], s=10, c=color, alpha=0.5, label=name)
ax.set_title('principal component analysis of LAMOST')
ax.set_xlabel('PC1')
ax.set_ylabel('PC2')
ax.legend();
tsne_all = TSNE(
n_components=2, # 2D out array
#perplexity=40, # should be 5-50
#learning_rate=100, # should be 100-1000
init='pca', # start with PCA position
verbose=2,
)
X_all_tsne = tsne_all.fit_transform(X_all_pca)
# publication plot
fig, ax = plt.subplots()
cmap = plt.get_cmap('viridis')
norm = matplotlib.colors.Normalize(vmin=-1, vmax=2)
for label, name in zip([-1, 1, 0, 2], ['lamost', 'absorption', 'emission', 'double-peak']):
idx = y_all == label
color = cmap(norm(label))
ax.scatter(X_all_tsne[:, 0][idx], X_all_tsne[:, 1][idx], c=color, alpha=0.5, label=name)
ax.set_title('t-SNE of LAMOST')
ax.legend();
ax.tick_params(which='both', bottom=False, left=False, labelbottom=False, labelleft=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: or from GitHub for the latest version.
Step2: Demo Starts
|
<ASSISTANT_TASK:>
Python Code:
!pip install meterstick
!git clone https://github.com/google/meterstick.git
import sys, os
sys.path.append(os.getcwd())
import itertools
import numpy as np
import pandas as pd
from meterstick import confidence_interval_display
np.random.seed(42)
metrics = ('Click', 'Latency', 'a very very very looooooooooong metric')
ctrl_id = 42
platform = ('Mobile', 'Desktop', 'Tablet')
ctrl_vals = [137, 28, 999.9, 158, 40, 6.66666, -10, -20.1, 33]
expr_ids = (42, 222, 666, 'Experiment Foo')
n_row = len(metrics) * len(platform) * len(expr_ids)
test_df = pd.DataFrame(
itertools.product(expr_ids, platform, metrics),
columns=[
'Experiment_Id',
'Platform',
'Metric',
])
test_df['Control_Value'] = ctrl_vals * (n_row // len(ctrl_vals))
test_df['Ratio'] = 2 * (np.random.rand(n_row) - 0.5)
test_df['CI_Range'] = 2 * np.random.rand(n_row)
test_df['Value'] = test_df.Control_Value * (test_df.Ratio / 100 + 1)
test_df['Country'] = 'US'
ctrl_id = 42
test_df.loc[test_df.Experiment_Id == ctrl_id,
'Value'] = test_df.loc[test_df.Experiment_Id == ctrl_id,
'Control_Value']
confidence_interval_display.render(test_df, dims=['Country', 'Platform'])
# You can manually specify the columns.
test_df_copy = test_df.copy()
test_df_copy.rename(
columns={
'Experiment_Id': 'expr',
'Metric': 'metric',
'Control_Value': 'control',
'Ratio': 'ratio',
'CI_Range': 'ci',
'Value': 'val'
},
inplace=True)
confidence_interval_display.render(
test_df_copy,
dims=['Country', 'Platform'],
metric='metric',
ratio='ratio',
value='val',
ci_range='ci',
control_value='control',
expr_id='expr')
test_df_copy = test_df.copy()
test_df_copy['Control_Id'] = None
confidence_interval_display.render(test_df_copy, dims=['Country', 'Platform'])
# If you don't want to aggregate your dimensions...
confidence_interval_display.render(
test_df,
dims=['Country', 'Platform'],
aggregate_dimensions=False)
# By default we look for Dim_1, Dim_2, ... as dimension columns.
test_df.rename(columns={'Country': 'Dim_1', 'Platform': 'Dim_2'}, inplace=True)
confidence_interval_display.render(test_df)
# You can set control experiment. We'll only display its Value.
confidence_interval_display.render(test_df, ctrl_id=ctrl_id)
# Hide the controls if it doesn't matter to you.
confidence_interval_display.render(test_df, ctrl_id=ctrl_id, show_control=False)
# You can also have multiple controls.
confidence_interval_display.render(
test_df,
ctrl_id={
42: [222],
666: ['Experiment Foo']
})
# Auto determines control values if control rows are missing. Your control rows
# are supposed to have the same values in the Control_Value and Value columns.
# If not, we skip. To demonstrate, here we introduce inconsistency in 'Click'.
df = test_df.loc[test_df.Experiment_Id != 42].copy()
df.loc[(df.Experiment_Id == 222) & (df.Metric == 'Click'), 'Control_Value'] = -1
confidence_interval_display.render(
df,
ctrl_id=42,
auto_decide_control_vals=True)
# You can flip the coloring scheme for some metrics.
confidence_interval_display.render(test_df, flip_color=['Latency'])
# You can add descriptions for your experiments.
test_df['Description'] = ''
test_df.loc[test_df['Experiment_Id'] == 42, 'Description'] = 'The Answer'
# You can even use raw html.
test_df.loc[test_df['Experiment_Id'] == 666,
'Description'] = '<b>Bold</b>'
confidence_interval_display.render(test_df)
# You can customize the formatting of values. By default there are 'percent',
# 'absolute' and 'pp'. You can also provide your own formatting string template.
confidence_interval_display.render(
test_df,
metric_formats={
'Ratio': 'pp',
'Value': 'percent'
})
# You don't need Value if you don't want to show_control.
test_df2 = test_df.copy()
del test_df2['Value']
confidence_interval_display.render(test_df2, show_control=False)
# You can hide the null control values.
confidence_interval_display.render(
test_df2, show_control=False, hide_null_ctrl=True)
# You can order and slice the metrics.
confidence_interval_display.render(test_df, metric_order=['Latency', 'Click'])
# You can customize the order of dimension slices.
confidence_interval_display.render(
test_df,
sort_by=[{
'column': 'Dim_2',
'order': ['Mobile', 'Desktop', 'Tablet']
}])
# You can sort by values of a metric. The supported value columns are 'Value',
# 'Ratio', 'CI_Lower', 'CI_Upper'. See the doc of get_formatted_df() also.
confidence_interval_display.render(
test_df,
sort_by=[{
'column': ('Ratio', 'Click'),
'ascending': False
}])
# Of course you can sort both by dimension and by values.
confidence_interval_display.render(
test_df,
sort_by=[{
'column': 'Dim_2',
'order': ['Mobile', 'Desktop', 'Tablet']
}, {
'column': ('CI_Lower', 'Click'),
}])
pre_agg_df = confidence_interval_display.render(
test_df, return_pre_agg_df=True)
mask = pre_agg_df.applymap(lambda c: bool(
isinstance(c, list) and c[-1] and c[-2] and c[-1] - c[-2] < 1))
formatted_df = confidence_interval_display.render(
test_df, return_formatted_df=True)
formatted_df.mask(
mask,
formatted_df.applymap('<div class="tight-ci">{}</div>'.format),
inplace=True)
extra_css = '.tight-ci {border-color: purple; border-style: dotted;}'
confidence_interval_display.display_formatted_df(formatted_df, extra_css)
# Test for no dimensions.
df_no_dim = test_df.groupby(['Experiment_Id', 'Metric']).mean().reset_index()
confidence_interval_display.render(df_no_dim, ctrl_id=42)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is the basic idea of a list comprehension. If you're familiar with mathematical notation, this format should feel familiar, for example:
Step2: Example 3
Step3: Example 4
Step4: Example 5
|
<ASSISTANT_TASK:>
Python Code:
# Grab every letter in string
lst = [x for x in 'word']
# Check
lst
# Square numbers in range and turn into list
lst = [x**2 for x in range(0,11)]
lst
# Check for even numbers in a range
lst = [x for x in range(11) if x % 2 == 0]
lst
# Convert Celsius to Fahrenheit
celsius = [0,10,20.1,34.5]
fahrenheit = [ ((float(9)/5)*temp + 32) for temp in celsius ]
fahrenheit
lst = [ x**2 for x in [x**2 for x in range(11)]]
lst
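# Squaring each already-squared value is equivalent to [x**4 for x in range(11)].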
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Iris Flower Data
Step2: Create Random Forest Classifier
Step3: Select Features With Importance Greater Than Threshold
Step4: View Selected Important Features
Step5: Train Model With Selected Important Features
|
<ASSISTANT_TASK:>
Python Code:
# Load libraries
from sklearn.ensemble import RandomForestClassifier
from sklearn import datasets
from sklearn.feature_selection import SelectFromModel
# Load data
iris = datasets.load_iris()
X = iris.data
y = iris.target
# Create random forest classifier
clf = RandomForestClassifier(random_state=0, n_jobs=-1)
# Create object that selects features with importance greater than or equal to a threshold
selector = SelectFromModel(clf, threshold=0.3)
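# Features with importance below 0.3 are dropped; on the iris data this
# typically keeps only the two petal measurements.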
# Create new feature matrix using the selector
X_important = selector.fit_transform(X, y)
# View first five observations of the features
X_important[0:5]
# Train random forest using most important features
model = clf.fit(X_important, y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'bnu-esm-1-1', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: The notMNIST dataset is too large for many computers to handle: it contains 500,000 images for training alone. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: (figure: image/mean_variance.png)
Step6: Checkpoint
Step7: (figure: image/weight_biases.png)
Step8: (figure: image/learn_rate_tune.png)
Step9: Test
|
<ASSISTANT_TASK:>
Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
def download(url, file):
"""Download file from <url>
:param url: URL to file
:param file: Local file path
"""
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
"""Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
"""
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data so it fits within a Docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
"""Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
"""
# TODO: Implement Min-Max scaling for grayscale image data
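# Min-Max scaling maps each value X in [X_min, X_max] onto [a, b]:
#   X' = a + (X - X_min) * (b - a) / (X_max - X_min)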
a = 0.1
b = 0.9
grayscale_min = 0
grayscale_max = 255
return a + ( ( (image_data - grayscale_min)*(b - a) )/( grayscale_max - grayscale_min ) )
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
features_count = 784
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
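# The placeholder shapes are left unspecified; the test cases below accept
# either an undefined shape or (None, 784) for features and (None, 10) for labels.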
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10,), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
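# H(labels, prediction) = -sum_k labels_k * log(prediction_k), computed per example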
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.initialize_all_variables()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
# TODO: Find the best parameters for each configuration
epochs = 5
batch_size = 100
learning_rate = 0.2
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
# TODO: Set the epochs, batch_size, and learning_rate with the best parameters from problem 3
epochs = 5
batch_size = 100
learning_rate = 0.2
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Task #2
Step2: Task #3
Step3: Task #4
Step4: Task #5
Step5: Task #5
Step6: Task #6
Step7: Task #7
|
<ASSISTANT_TASK:>
Python Code:
price = 300
import math
math.sqrt( price )
import math
math.sqrt( price )
stock_index = "SP500"
stock_index[2:]
stock_index = "SP500"
price = 300
print('The {quote} is at {price} today'.format(quote=stock_index,price=price))
stock_info = {'sp500':{'today':300,'yesterday': 250}, 'info':['Time',[24,7,365]]}
stock_info['sp500']['yesterday']
stock_info['info'][1][2]
def source_finder(text):  # renamed parameter to avoid shadowing the built-in str
index = text.find('--')
return text[index + 2:]
source_finder("PRICE:345.324:SOURCE--QUANDL")
def price_finder(text):  # renamed parameter to avoid shadowing the built-in str
return text.upper().find('PRICE') != -1
price_finder("What is the price?")
price_finder("DUDE, WHAT IS PRICE!!!")
price_finder("The price is 300")
price_finder("There are no prize is 300")
def count_price(text):  # renamed parameter to avoid shadowing the built-in str
return text.upper().count('PRICE')
s = 'Wow that is a nice price, very nice Price! I said price 3 times.'
count_price(s)
s = 'ANOTHER pRiCe striNG should reTURN 1'
count_price(s)
def avg_price(prices):
return sum(prices) / len(prices)
avg_price([3,4,5])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pascal array $\mathcal{P}$
Step2: OEIS content about $\mathcal{P}$
|
<ASSISTANT_TASK:>
Python Code:
%run "../src/start_session.py"
%run "../src/recurrences.py"
import oeis
d = IndexedBase('d')
n, k = symbols('n k')
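# Pascal's rule C(n+1, k+1) = C(n, k) + C(n, k+1), encoded below as a
# recurrence over the doubly-indexed symbol d[n, k]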
pascal_recurrence_spec = recurrence_spec(recurrence_eq=Eq(d[n+1, k+1], d[n, k] + d[n, k+1]),
recurrence_symbol=d,
variables=[n,k])
pascal_recurrence_spec
unfolded = pascal_recurrence_spec.unfold(depth=2)
unfolded
instantiated = unfolded.instantiate(strategy=raw(substitutions={n:9,k:4}))
instantiated
known_binomials = {d[n,k]:binomial(n,k) for n in [7] for k in range(2,6)}
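# Supply the known binomial values C(7, k) for k = 2..5 so the unfolded
# recurrence can be checked against actual binomial coefficients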
checked = instantiated.instantiate(strategy=raw(substitutions=known_binomials))
checked
checked.subsume()
based_recurrence_spec = unfolded.instantiate(strategy=based(arity=doubly_indexed()))
based_recurrence_spec
based_recurrence_spec.subsume()
ipython_latex_description(rec_spec=pascal_recurrence_spec, depths=range(6), arity=doubly_indexed())
s = oeis.oeis_search(id=7318)
s()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now that we have created a toy example for playing around with the DataFrame, let's print it out in different ways.
|
<ASSISTANT_TASK:>
Python Code:
import modin.config as cfg
cfg.StorageFormat.put('omnisci')
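# Selecting the OmniSci storage format should happen before the first Modin
# DataFrame is created, so that the engine initializes with this backend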
# Note: Importing notebooks dependencies. Do not change this code!
import numpy as np
import pandas
import sys
import modin
pandas.__version__
modin.__version__
# Implement your answer here. You are also free to play with the size
# and shape of the DataFrame, but beware of exceeding your memory!
import pandas as pd
frame_data = np.random.randint(0, 100, size=(2**10, 2**5))
df = pd.DataFrame(frame_data)
# ***** Do not change the code below! It verifies that
# ***** the exercise has been done correctly. *****
try:
assert df is not None
assert frame_data is not None
assert isinstance(frame_data, np.ndarray)
except:
raise AssertionError("Don't change too much of the original code!")
assert "modin.pandas" in sys.modules, "Not quite correct. Remember the single line of code change (See above)"
import modin.pandas
assert pd == modin.pandas, "Remember the single line of code change (See above)"
assert hasattr(df, "_query_compiler"), "Make sure that `df` is a modin.pandas DataFrame."
print("Success! You only need to change one line of code!")
# When working with non-string column labels, some backend logic may try to insert a column
# with a string name into the frame, so we call add_prefix() to make every column label a string
df = df.add_prefix("col")
# Print the first 10 lines.
df.head(10)
df.count()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'uhh', 'sandbox-3', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
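# Illustrative example only (hypothetical values, not taken from any model):
# DOC.set_value("water")
# DOC.set_value("energy")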
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Recap - Decision Tree Classifier
|
<ASSISTANT_TASK:>
Python Code:
# imports and data loading needed for this cell (iris here is scikit-learn's bundled dataset)
import pandas as pd
from sklearn import metrics
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
iris = load_iris()
print(iris.DESCR[:172] + ' ...')
print(iris.feature_names)
print(iris.data[45:54])
print(iris.target[45:54])
print(iris.target_names)
lfeat = iris.feature_names
df_iris = pd.DataFrame(iris.data, columns = lfeat)
model = DecisionTreeClassifier()
data = df_iris[lfeat].values
df_iris["Species"] = iris.target
target = df_iris["Species"].values
model.fit(data, target)
expected = target
predicted = model.predict(data)
print(metrics.classification_report(expected, predicted))
print(metrics.confusion_matrix(expected, predicted))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Theoretical background
Step2: Three-qubit W state, step 1
Step3: Three-qubit W state
Step4: Three-qubit W state, full circuit
Step5: Now you get a histogram compatible with the final state $|W_{3}\rangle$ through the following steps
Step6: Now, if you used a simulator, you get a histogram clearly compatible with the state
|
<ASSISTANT_TASK:>
Python Code:
# useful additional packages
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import time
from pprint import pprint
# importing Qiskit
from qiskit import Aer, IBMQ
from qiskit.backends.ibmq import least_busy
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
# import basic plot tools
from qiskit.tools.visualization import plot_histogram
IBMQ.load_accounts()
"Choice of the backend"
# using local qasm simulator
backend = Aer.get_backend('qasm_simulator')
# using IBMQ qasm simulator
# backend = IBMQ.get_backend('ibmq_qasm_simulator')
# using real device
# backend = least_busy(IBMQ.backends(simulator=False))
flag_qx2 = True
if backend.name() == 'ibmqx4':
flag_qx2 = False
print("Your choice for the backend is: ", backend, "flag_qx2 is: ", flag_qx2)
# Here, two useful routine
# Define a F_gate
def F_gate(circ,q,i,j,n,k) :
theta = np.arccos(np.sqrt(1/(n-k+1)))
circ.ry(-theta,q[j])
circ.cz(q[i],q[j])
circ.ry(theta,q[j])
circ.barrier(q[i])
# Define the cxrv gate which uses reverse CNOT instead of CNOT
def cxrv(circ,q,i,j) :
circ.h(q[i])
circ.h(q[j])
circ.cx(q[j],q[i])
circ.h(q[i])
circ.h(q[j])
circ.barrier(q[i],q[j])
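# Quick sanity check (illustrative sketch, not part of the protocol): the F_gate
# rotation angle theta = arccos(sqrt(1/(n-k+1))) sets how amplitude is split at
# each step; these are the two angles used by F12 and F23 for the 3-qubit W state.
for n_, k_ in [(3, 1), (3, 2)]:
    theta_ = np.arccos(np.sqrt(1.0/(n_-k_+1)))
    print("n=%d, k=%d -> theta=%.4f rad (cos^2(theta)=1/%d)" % (n_, k_, theta_, n_-k_+1))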
# 3-qubit W state Step 1
n = 3
q = QuantumRegister(n)
c = ClassicalRegister(n)
W_states = QuantumCircuit(q,c)
W_states.x(q[2]) #start is |100>
F_gate(W_states,q,2,1,3,1) # Applying F12
for i in range(3) :
W_states.measure(q[i] , c[i])
# circuits = ['W_states']
shots = 1024
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('start W state 3-qubit (step 1) on', backend, "N=", shots,time_exp)
result = execute(W_states, backend=backend, shots=shots)
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('end W state 3-qubit (step 1) on', backend, "N=", shots,time_exp)
plot_histogram(result.result().get_counts(W_states))
# 3-qubit W state, first and second steps
n = 3
q = QuantumRegister(n)
c = ClassicalRegister(n)
W_states = QuantumCircuit(q,c)
W_states.x(q[2]) #start is |100>
F_gate(W_states,q,2,1,3,1) # Applying F12
F_gate(W_states,q,1,0,3,2) # Applying F23
for i in range(3) :
W_states.measure(q[i] , c[i])
shots = 1024
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('start W state 3-qubit (steps 1 + 2) on', backend, "N=", shots,time_exp)
result = execute(W_states, backend=backend, shots=shots)
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('end W state 3-qubit (steps 1 + 2) on', backend, "N=", shots,time_exp)
plot_histogram(result.result().get_counts(W_states))
# 3-qubit W state
n = 3
q = QuantumRegister(n)
c = ClassicalRegister(n)
W_states = QuantumCircuit(q,c)
W_states.x(q[2]) #start is |100>
F_gate(W_states,q,2,1,3,1) # Applying F12
F_gate(W_states,q,1,0,3,2) # Applying F23
if flag_qx2 : # option ibmqx2
W_states.cx(q[1],q[2]) # cNOT 21
W_states.cx(q[0],q[1]) # cNOT 32
else : # option ibmqx4
cxrv(W_states,q,1,2)
cxrv(W_states,q,0,1)
for i in range(3) :
W_states.measure(q[i] , c[i])
shots = 1024
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('start W state 3-qubit on', backend, "N=", shots,time_exp)
result = execute(W_states, backend=backend, shots=shots)
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('end W state 3-qubit on', backend, "N=", shots,time_exp)
plot_histogram(result.result().get_counts(W_states))
# 4-qubit W state
n = 4
q = QuantumRegister(n)
c = ClassicalRegister(n)
W_states = QuantumCircuit(q,c)
W_states.x(q[3]) #start is |1000>
F_gate(W_states,q,3,2,4,1) # Applying F12
F_gate(W_states,q,2,1,4,2) # Applying F23
F_gate(W_states,q,1,0,4,3) # Applying F34
cxrv(W_states,q,2,3) # cNOT 21
if flag_qx2 : # option ibmqx2
W_states.cx(q[1],q[2]) # cNOT 32
W_states.cx(q[0],q[1]) # cNOT 43
else : # option ibmqx4
cxrv(W_states,q,1,2)
cxrv(W_states,q,0,1)
for i in range(4) :
W_states.measure(q[i] , c[i])
# circuits = ['W_states']
shots = 1024
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('start W state 4-qubit ', backend, "N=", shots,time_exp)
result = execute(W_states, backend=backend, shots=shots)
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('end W state 4-qubit on', backend, "N=", shots,time_exp)
plot_histogram(result.result().get_counts(W_states))
# 5-qubit W state
n = 5
q = QuantumRegister(n)
c = ClassicalRegister(n)
W_states = QuantumCircuit(q,c)
W_states.x(q[4]) #start is |10000>
F_gate(W_states,q,4,3,5,1) # Applying F12
F_gate(W_states,q,3,2,5,2) # Applying F23
F_gate(W_states,q,2,1,5,3) # Applying F34
F_gate(W_states,q,1,0,5,4) # Applying F45
W_states.cx(q[3],q[4]) # cNOT 21
cxrv(W_states,q,2,3) # cNOT 32
if flag_qx2 : # option ibmqx2
W_states.cx(q[1],q[2]) # cNOT 43
W_states.cx(q[0],q[1]) # cNOT 54
else : # option ibmqx4
cxrv(W_states,q,1,2)
cxrv(W_states,q,0,1)
for i in range(5) :
W_states.measure(q[i] , c[i])
shots = 1024
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('start W state 5-qubit on', backend, "N=", shots,time_exp)
result = execute(W_states, backend=backend, shots=shots)
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('end W state 5-qubit on', backend, "N=", shots,time_exp)
plot_histogram(result.result().get_counts(W_states))
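# Illustrative check (assumes a noiseless simulator): an ideal n-qubit W state puts
# equal weight 1/n on each one-hot bitstring, so for 1024 shots the expected counts are:
n_q, n_shots = 5, 1024
ideal_counts = {format(1 << i, '0%db' % n_q): n_shots / n_q for i in range(n_q)}
print(ideal_counts)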
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Step3: Training
Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Step5: Checking out the results
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
image_size = mnist.train.images.shape[1]
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, [None, image_size])
targets_ = tf.placeholder(tf.float32, [None, image_size])
# Output of hidden layer, single fully connected layer here with ReLU activation
encoded = tf.layers.dense(inputs=inputs_, units=encoding_dim, activation=tf.nn.relu)
# Output layer logits, fully connected layer with no activation
logits = tf.layers.dense(inputs=encoded, units=784)
# Sigmoid output from logits
decoded = tf.nn.sigmoid(logits)
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
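# Why feed raw logits to sigmoid_cross_entropy_with_logits instead of `decoded`?
# The fused op evaluates the numerically stable form max(x,0) - x*z + log(1+exp(-|x|)).
# A small numpy sketch (illustrative only) showing the naive and stable forms agree:
x_, z_ = 5.0, 1.0 # one logit and its target
naive_ = -(z_*np.log(1/(1+np.exp(-x_))) + (1-z_)*np.log(1 - 1/(1+np.exp(-x_))))
stable_ = max(x_, 0) - x_*z_ + np.log(1 + np.exp(-abs(x_)))
print(naive_, stable_) # both ~0.0067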
# Create the session
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implementing a Neural Network
Step2: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
Step3: Forward pass
Step4: Forward pass
Step5: Backward pass
Step6: Train the network
Step8: Load the data
Step9: Train a network
Step10: Debug the training
Step11: Tune your hyperparameters
Step12: Run on the test set
|
<ASSISTANT_TASK:>
Python Code:
# A bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.neural_net import TwoLayerNet
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
    """ returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
np.random.seed(0)
return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)
def init_toy_data():
np.random.seed(1)
X = 10 * np.random.randn(num_inputs, input_size)
y = np.array([0, 1, 2, 2, 1])
return X, y
net = init_toy_model()
X, y = init_toy_data()
scores = net.loss(X)
print('Your scores:')
print(scores)
print()
print('correct scores:')
correct_scores = np.asarray([
[-0.81233741, -1.27654624, -0.70335995],
[-0.17129677, -1.18803311, -0.47310444],
[-0.51590475, -1.01354314, -0.8504215 ],
[-0.15419291, -0.48629638, -0.52901952],
[-0.00618733, -0.12435261, -0.15226949]])
print(correct_scores)
print()
# The difference should be very small. We get < 1e-7
print('Difference between your scores and correct scores:')
print(np.sum(np.abs(scores - correct_scores)))
loss, _ = net.loss(X, y, reg=0.05)
correct_loss = 1.30378789133
# should be very small, we get < 1e-12
print('Difference between your loss and correct loss:')
print(np.sum(np.abs(loss - correct_loss)))
from cs231n.gradient_check import eval_numerical_gradient
# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = net.loss(X, y, reg=0.05)
# these should all be less than 1e-8 or so
for param_name in grads:
f = lambda W: net.loss(X, y, reg=0.05)[0]
param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))
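# How eval_numerical_gradient works, in miniature (illustrative sketch):
# a centered difference (f(w+h) - f(w-h)) / (2h) along each coordinate.
w_ = np.array([1.0, -2.0, 3.0])
f_ = lambda w: np.sum(w**2) # analytic gradient is 2w
h_ = 1e-5
num_grad_ = np.array([(f_(w_ + h_*np.eye(3)[i]) - f_(w_ - h_*np.eye(3)[i])) / (2*h_)
                      for i in range(3)])
print('numeric:', num_grad_, ' analytic:', 2*w_)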
net = init_toy_model()
stats = net.train(X, y, X, y,
learning_rate=1e-1, reg=5e-6,
num_iters=100, verbose=False)
print('Final training loss: ', stats['loss_history'][-1])
# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
    """
    Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
    it for the two-layer neural net classifier. These are the same steps as
    we used for the SVM, but condensed to a single function.
    """
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Reshape data to rows
X_train = X_train.reshape(num_training, -1)
X_val = X_val.reshape(num_validation, -1)
X_test = X_test.reshape(num_test, -1)
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=1000, batch_size=100,
learning_rate=1e-4, learning_rate_decay=0.95,
reg=0.25, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print('Validation accuracy: ', val_acc)
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.tight_layout()
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.tight_layout()
plt.title('Classification accuracy history')
plt.legend()
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
from cs231n.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(net):
W1 = net.params['W1']
W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
plt.gca().axis('off')
plt.show()
show_net_weights(net)
best_net = None # store the best model into this
best_acc = -1
best_stats = None
from cs231n.pca import PCA
#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained #
# model in best_net. #
# #
# To help debug your network, it may help to use visualizations similar to the #
# ones we used above; these visualizations will have significant qualitative #
# differences from the ones we saw above for the poorly tuned network. #
# #
# Tweaking hyperparameters by hand can be fun, but you might find it useful to #
# write code to sweep through possible combinations of hyperparameters #
# automatically like we did on the previous exercises. #
#################################################################################
hidden_sizes = [150]
learning_rates = [1e-3]
batch_sizes = [200]#[150]
regularizations = [0.25]#[0.5]
input_sizes = [32 * 32 * 3]
for in_size in input_sizes:
    # PCA is effectively disabled here (in_size equals the full input dimensionality),
    # so fall back to the raw features; uncomment the two lines below to actually reduce dims.
    # X_train_pca = PCA(X_train, in_size)
    # X_val_pca = PCA(X_val, in_size)
    X_train_pca, X_val_pca = X_train, X_val
for h_size in hidden_sizes:
for lr in learning_rates:
for batch in batch_sizes:
for reg in regularizations:
print('>>>>> input_size=%d, hidden_size=%d, lr=%.5f, batch_size=%3d, reg=%.2f'
% (in_size, h_size, lr, batch, reg))
net = TwoLayerNet(in_size, h_size, num_classes)
stats = net.train(X_train_pca, y_train, X_val_pca, y_val,
num_iters=10000, batch_size=batch,
learning_rate=lr, learning_rate_decay=0.95,
reg=reg, verbose=False, dropout=False)
# Predict on the validation set
val_acc = (net.predict(X_val_pca) == y_val).mean()
print('Validation accuracy: ', val_acc)
if val_acc>best_acc:
best_acc = val_acc
best_net = net
best_stats = stats
plt.plot(best_stats['loss_history'])
plt.tight_layout()
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
#################################################################################
# END OF YOUR CODE #
#################################################################################
# visualize the weights of the best network
show_net_weights(best_net)
test_acc = (best_net.predict(X_test) == y_test).mean()
print('Test accuracy: ', test_acc)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating the plot
|
<ASSISTANT_TASK:>
Python Code:
# imports needed for this cell (assumes the classic plotly.plotly online module, plotly < 4)
import os
import pandas as pd
import plotly.plotly as py
import plotly.graph_objs as go
dirPath = os.path.realpath('.')
fileName = 'assets/coolingExample.xlsx'
filePath = os.path.join(dirPath, fileName)
df = pd.read_excel(filePath,header=0)
cols = df.columns
# Create a trace
trace = go.Scatter(
x = df[cols[0]],
y = df[cols[1]]
)
data = [trace]
# Edit the layout
layout = dict(title='Temperature vs. Time',
xaxis=dict(title='Time'),
yaxis=dict(title='Temperature (C)'),
)
fig = dict(data=data, layout=layout)
py.iplot(fig, filename='Thermal Data')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Starting from here we introduce several Selenium tricks for manipulating the page (such as pressing the Page Down key on the keyboard).
Step2: Waiting for elements to appear before interacting with them (implicit and explicit waits).
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
browser = webdriver.Chrome()
url = "http://rate.am/en/armenian-dram-exchange-rates/banks/cash"
browser.get(url) #will wait until page is fully loaded
browser.find_element_by_xpath("//label[contains(text(),'Non-cash')]").click()
#browser.current_url
page = browser.page_source
browser.close()
all_tables = pd.read_html(page)
all_tables[2]
cols = [i for i in range(5,13)]
cols.append(1)
all_tables[2].iloc[2:19,cols]
browser = webdriver.Chrome()
browser.get(url)
button = browser.find_element_by_tag_name("html")
button.send_keys(Keys.PAGE_DOWN)
browser.get("https://www.bloomberg.com/")
browser.implicitly_wait(30)
browser.find_element_by_partial_link_text("S&P")
# EC.presence_of_element_located(...) provides an explicit wait condition (see sketch below)
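# Illustrative sketch of an explicit wait (the names below are standard Selenium API):
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
# block for up to 30 s until the link is present, then return the element
element = WebDriverWait(browser, 30).until(
    EC.presence_of_element_located((By.PARTIAL_LINK_TEXT, "S&P")))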
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run the next cell to load the "SIGNS" dataset you are going to use.
Step2: As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.
Step3: In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.
Step5: 1.1 - Create placeholders
Step7: Expected Output
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step14: Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code!
Step15: Expected output
|
<ASSISTANT_TASK:>
Python Code:
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
import tensorflow as tf
from tensorflow.python.framework import ops
from cnn_utils import *
%matplotlib inline
np.random.seed(1)
# Loading the data (signs)
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Example of a picture
index = 6
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
X_train = X_train_orig/255.
X_test = X_test_orig/255.
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
conv_layers = {}
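# For reference, a minimal sketch of what cnn_utils.convert_to_one_hot is assumed
# to do (illustrative only; the actual course helper is imported above via cnn_utils):
def one_hot_sketch(Y, C):
    return np.eye(C)[Y.reshape(-1)].T # shape (C, number of examples)
print(one_hot_sketch(np.array([0, 2, 1]), 3))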
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_H0, n_W0, n_C0, n_y):
    """
    Creates the placeholders for the tensorflow session.

    Arguments:
    n_H0 -- scalar, height of an input image
    n_W0 -- scalar, width of an input image
    n_C0 -- scalar, number of channels of the input
    n_y -- scalar, number of classes

    Returns:
    X -- placeholder for the data input, of shape [None, n_H0, n_W0, n_C0] and dtype "float"
    Y -- placeholder for the input labels, of shape [None, n_y] and dtype "float"
    """
### START CODE HERE ### (≈2 lines)
X = tf.placeholder(shape=[None, n_H0, n_W0, n_C0], dtype=np.float32)
Y = tf.placeholder(shape=[None, n_y], dtype=np.float32)
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(64, 64, 3, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
    """
    Initializes weight parameters to build a neural network with tensorflow. The shapes are:
    W1 : [4, 4, 3, 8]
    W2 : [2, 2, 8, 16]

    Returns:
    parameters -- a dictionary of tensors containing W1, W2
    """
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 2 lines of code)
W1 = tf.get_variable('W1',
[4, 4, 3, 8],
initializer=tf.contrib.layers.xavier_initializer(seed = 0))
W2 = tf.get_variable('W2',
[2, 2, 8, 16],
initializer=tf.contrib.layers.xavier_initializer(seed = 0))
### END CODE HERE ###
parameters = {"W1": W1,
"W2": W2}
return parameters
tf.reset_default_graph()
with tf.Session() as sess_test:
parameters = initialize_parameters()
init = tf.global_variables_initializer()
sess_test.run(init)
print("W1 = " + str(parameters["W1"].eval()[1,1,1]))
print("W2 = " + str(parameters["W2"].eval()[1,1,1]))
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
    """
    Implements the forward propagation for the model:
    CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED

    Arguments:
    X -- input dataset placeholder, of shape (input size, number of examples)
    parameters -- python dictionary containing your parameters "W1", "W2"
                  the shapes are given in initialize_parameters

    Returns:
    Z3 -- the output of the last LINEAR unit
    """
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
W2 = parameters['W2']
### START CODE HERE ###
# CONV2D: stride of 1, padding 'SAME'
Z1 = tf.nn.conv2d(X, W1, strides=[1, 1, 1, 1], padding='SAME')
# RELU
A1 = tf.nn.relu(Z1)
    # MAXPOOL: window 8x8, stride 8, padding 'SAME'
P1 = tf.nn.max_pool(A1,
ksize=[1, 8, 8, 1],
strides=[1, 8, 8, 1],
padding='SAME')
# CONV2D: filters W2, stride 1, padding 'SAME'
Z2 = tf.nn.conv2d(P1, W2, strides=[1, 1, 1, 1], padding='SAME')
# RELU
A2 = tf.nn.relu(Z2)
# MAXPOOL: window 4x4, stride 4, padding 'SAME'
P2 = tf.nn.max_pool(A2,
ksize=[1, 4, 4, 1],
strides=[1, 4, 4, 1],
padding='SAME')
# FLATTEN
P2 = tf.contrib.layers.flatten(P2)
    # FULLY-CONNECTED without non-linear activation function (do not call softmax here).
    # 6 neurons in output layer. Hint: one of the arguments should be "activation_fn=None"
Z3 = tf.contrib.layers.fully_connected(P2, 6, activation_fn=None)
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(Z3, {X: np.random.randn(2,64,64,3), Y: np.random.randn(2,6)})
print("Z3 = " + str(a))
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
    """
    Computes the cost

    Arguments:
    Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
    Y -- "true" labels vector placeholder, same shape as Z3

    Returns:
    cost - Tensor of the cost function
    """
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=Z3,
labels=Y))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(cost, {X: np.random.randn(4,64,64,3), Y: np.random.randn(4,6)})
print("cost = " + str(a))
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.009,
num_epochs = 100, minibatch_size = 64, print_cost = True):
    """
    Implements a three-layer ConvNet in Tensorflow:
    CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED

    Arguments:
    X_train -- training set, of shape (None, 64, 64, 3)
    Y_train -- training labels, of shape (None, n_y = 6)
    X_test -- test set, of shape (None, 64, 64, 3)
    Y_test -- test labels, of shape (None, n_y = 6)
    learning_rate -- learning rate of the optimization
    num_epochs -- number of epochs of the optimization loop
    minibatch_size -- size of a minibatch
    print_cost -- True to print the cost every 100 epochs

    Returns:
    train_accuracy -- real number, accuracy on the train set (X_train)
    test_accuracy -- real number, testing accuracy on the test set (X_test)
    parameters -- parameters learnt by the model. They can then be used to predict.
    """
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep results consistent (tensorflow seed)
seed = 3 # to keep results consistent (numpy seed)
(m, n_H0, n_W0, n_C0) = X_train.shape
n_y = Y_train.shape[1]
costs = [] # To keep track of the cost
# Create Placeholders of the correct shape
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_H0, n_W0, n_C0, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables globally
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
minibatch_cost = 0.
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
                # Run the session to execute the optimizer and the cost; the feed_dict should contain a minibatch for (X, Y).
### START CODE HERE ### (1 line)
_ , temp_cost = sess.run([optimizer, cost],
feed_dict={X: minibatch_X,
Y: minibatch_Y})
### END CODE HERE ###
minibatch_cost += temp_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 5 == 0:
print ("Cost after epoch %i: %f" % (epoch, minibatch_cost))
if print_cost == True and epoch % 1 == 0:
costs.append(minibatch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# Calculate the correct predictions
predict_op = tf.argmax(Z3, 1)
correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(accuracy)
train_accuracy = accuracy.eval({X: X_train, Y: Y_train})
test_accuracy = accuracy.eval({X: X_test, Y: Y_test})
print("Train Accuracy:", train_accuracy)
print("Test Accuracy:", test_accuracy)
return train_accuracy, test_accuracy, parameters
_, _, parameters = model(X_train, Y_train, X_test, Y_test)
fname = "images/thumbs_up.jpg"
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64))
plt.imshow(my_image)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load dataset
Step2: Utility functions
Step3: Extract data
Step4: Modified imputation method using MLPRegressor
Step5: Feature Augmentation method from Bestagini
Step6: Neural Network
Step7: Validation with Leave One Well Out on Training Dataset
Step8: Applying to Test Dataset
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
np.random.seed(1337)
import warnings
warnings.filterwarnings("ignore")
import time as tm
import pandas as pd
from keras.models import Sequential, Model
from keras.constraints import maxnorm
from keras.layers import Dense, Dropout, Activation
from keras.utils import np_utils
from sklearn.metrics import f1_score, recall_score, accuracy_score, confusion_matrix
from sklearn.model_selection import LeaveOneGroupOut
from sklearn import preprocessing
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
%matplotlib inline
training_data = pd.read_csv('../data/facies_vectors.csv')
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
adjacent_facies = np.array([[1], [0, 2], [1], [4], [3, 5], [4, 6, 7], [5, 7], [5, 6, 8], [6, 7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
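# Minimal sanity check (illustrative): a perfect 9x9 confusion matrix should give
# both plain and adjacent-facies accuracy of 1.0.
conf_eye = np.eye(9)
print(accuracy(conf_eye), accuracy_adjacent(conf_eye, adjacent_facies))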
# 1=sandstone 2=c_siltstone 3=f_siltstone
# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite
# 8=packstone 9=bafflestone
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00',
'#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies_colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] -1]
training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)
def make_facies_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
ax[0].plot(logs.GR, logs.Depth, '.g')
ax[1].plot(logs.ILD_log10, logs.Depth, '.')
ax[2].plot(logs.DeltaPHI, logs.Depth, '.', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '.', color='r')
ax[4].plot(logs.PE, logs.Depth, '.', color='black')
im=ax[5].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[5])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
X = training_data.drop(['Formation', 'Well Name', 'Depth', 'Facies', 'FaciesLabels'], axis=1).values
y = training_data['Facies'].values - 1
wells = training_data["Well Name"].values
from sklearn.neural_network import MLPRegressor
reg = MLPRegressor()
DataImpAll = training_data.drop(['Formation', 'Well Name', 'Depth', 'FaciesLabels'], axis=1).copy()
DataImp = DataImpAll.dropna(axis = 0, inplace=False)
Ximp=DataImp.loc[:, DataImp.columns != 'PE']
Yimp=DataImp.loc[:, 'PE']
reg.fit(Ximp, Yimp)
X[np.array(DataImpAll.PE.isnull()),4] = reg.predict(DataImpAll.loc[DataImpAll.PE.isnull(),:].drop('PE',axis=1,inplace=False))
training_data.ix[:,"PE"] = X[:,4]
# Feature windows concatenation function
def augment_features_window(X, N_neig):
# Parameters
N_row = X.shape[0]
N_feat = X.shape[1]
# Zero padding
X = np.vstack((np.zeros((N_neig, N_feat)), X, (np.zeros((N_neig, N_feat)))))
# Loop over windows
X_aug = np.zeros((N_row, N_feat*(2*N_neig+1)))
for r in np.arange(N_row)+N_neig:
this_row = []
for c in np.arange(-N_neig,N_neig+1):
this_row = np.hstack((this_row, X[r+c]))
X_aug[r-N_neig] = this_row
return X_aug
# Feature gradient computation function
def augment_features_gradient(X, depth):
# Compute features gradient
d_diff = np.diff(depth).reshape((-1, 1))
d_diff[d_diff==0] = 0.001
X_diff = np.diff(X, axis=0)
X_grad = X_diff / d_diff
# Compensate for last missing value
X_grad = np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))
return X_grad
# Feature augmentation function
def augment_features(X, well, depth, N_neig=1):
# Augment features
X_aug = np.zeros((X.shape[0], X.shape[1]*(N_neig*2+2)))
for w in np.unique(well):
w_idx = np.where(well == w)[0]
X_aug_win = augment_features_window(X[w_idx, :], N_neig)
X_aug_grad = augment_features_gradient(X[w_idx, :], depth[w_idx])
X_aug[w_idx, :] = np.concatenate((X_aug_win, X_aug_grad), axis=1)
# Find padded rows
padded_rows = np.unique(np.where(X_aug[:, 0:7] == np.zeros((1, 7)))[0])
return X_aug, padded_rows
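# Tiny shape check (illustrative only): with N_neig=1 each sample keeps its own
# features plus two neighbours plus gradients, so 7 features become 7*(2*1+2) = 28.
X_toy = np.random.rand(5, 7)
X_toy_aug, toy_pad = augment_features(X_toy, np.array(['w1']*5), np.arange(5, dtype=float))
print(X_toy_aug.shape, '-> zero-padded rows:', toy_pad)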
# Marine Models
Org_data = training_data
training_data = training_data[training_data["NM_M"]==1]
X = training_data.drop(['Formation', 'Well Name', 'Depth', 'Facies', 'FaciesLabels'], axis=1).values
y = training_data['Facies'].values - 1
wells = training_data["Well Name"].values
well = training_data['Well Name'].values
depth = training_data['Depth'].values
X, padded_rows = augment_features(X, well, depth, N_neig=1)
X1org = X
y1org = y
def fDNN(in_dim, out_dim):
# Model
model = Sequential()
model.add(Dense(152, input_dim=in_dim, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(out_dim, activation='softmax'))
# Compilation
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
logo = LeaveOneGroupOut()
nb_classes = 9
epoch = 10
bats = 20
t0 = tm.time()
f1s_ls = []
acc_ls = []
adj_ls = []
from scipy.signal import medfilt
for train, test in logo.split(X, y, groups=wells):
well_name = wells[test[0]]
# Scaling
scaler = preprocessing.MinMaxScaler().fit(X)
X_tr = scaler.transform(X[train])
X_te = scaler.transform(X[test])
Y_tr = np_utils.to_categorical(y[train], nb_classes)
in_dim = len(X_tr[0])
# Method initialization
mlp = fDNN(in_dim, nb_classes)
# Training
mlp.fit(X_tr, Y_tr, nb_epoch=epoch, batch_size=bats, verbose=0)
# Predict
y_hat = mlp.predict_classes(X_te, verbose=0)
y_hat = medfilt(y_hat, kernel_size=5)
try:
f1s = f1_score(y[test], y_hat, average="weighted", labels=[0, 1, 2, 3, 4, 5, 6, 7, 8])
except:
f1s = 0
try:
conf = confusion_matrix(y[test], y_hat, labels=[0, 1, 2, 3, 4, 5, 6, 7, 8])
acc = f1_score(y[test], y_hat, average="micro", labels=[0, 1, 2, 3, 4, 5, 6, 7, 8])
except:
acc = 0
try:
acc_adj = accuracy_adjacent(conf, adjacent_facies)
except:
acc_adj = 0
f1s_ls += [f1s]
acc_ls += [acc]
adj_ls += [acc_adj]
print("{:>20s} f1w:{:.3f} | f1m:{:.3f} | acc_adj:{:.3f}".format(well_name, f1s, acc, acc_adj))
t1 = tm.time()
print("Avg F1w", np.average(f1s_ls)*100, "Avg F1m", np.average(acc_ls)*100, "Avg Adj", np.average(adj_ls)*100)
print((t1-t0), "seconds")
# Non-marine models
training_data = Org_data
training_data = training_data[training_data["NM_M"]==2]
X = training_data.drop(['Formation', 'Well Name', 'Depth', 'Facies', 'FaciesLabels'], axis=1).values
y = training_data['Facies'].values - 1
wells = training_data["Well Name"].values
well = training_data['Well Name'].values
depth = training_data['Depth'].values
X, padded_rows = augment_features(X, well, depth, N_neig=1)
X2org =X
y2org = y
f1s_ls = []
acc_ls = []
adj_ls = []
from scipy.signal import medfilt
for train, test in logo.split(X, y, groups=wells):
well_name = wells[test[0]]
# Scaling
scaler = preprocessing.MinMaxScaler().fit(X)
X_tr = scaler.transform(X[train])
X_te = scaler.transform(X[test])
Y_tr = np_utils.to_categorical(y[train], nb_classes)
in_dim = len(X_tr[0])
# Method initialization
mlp = fDNN(in_dim, nb_classes)
# Training
mlp.fit(X_tr, Y_tr, nb_epoch=epoch, batch_size=bats, verbose=0)
# Predict
y_hat = mlp.predict_classes(X_te, verbose=0)
y_hat = medfilt(y_hat, kernel_size=5)
try:
f1s = f1_score(y[test], y_hat, average="weighted", labels=[0, 1, 2, 3, 4, 5, 6, 7, 8])
except:
f1s = 0
try:
conf = confusion_matrix(y[test], y_hat, labels=[0, 1, 2, 3, 4, 5, 6, 7, 8])
acc = f1_score(y[test], y_hat, average="micro", labels=[0, 1, 2, 3, 4, 5, 6, 7, 8])
except:
acc = 0
try:
acc_adj = accuracy_adjacent(conf, adjacent_facies)
except:
acc_adj = 0
f1s_ls += [f1s]
acc_ls += [acc]
adj_ls += [acc_adj]
print("{:>20s} f1w:{:.3f} | f1m:{:.3f} | acc_adj:{:.3f}".format(well_name, f1s, acc, acc_adj))
t1 = tm.time()
print("Avg F1w", np.average(f1s_ls)*100, "Avg F1m", np.average(acc_ls)*100, "Avg Adj", np.average(adj_ls)*100)
print((t1-t0), "seconds")
Org_blind_data = pd.read_csv('../data/nofacies_data.csv')
blind_data = Org_blind_data[Org_blind_data["NM_M"]==1]
X_blind = blind_data.drop(['Formation', 'Well Name', 'Depth'], axis=1).values
well_blind = blind_data['Well Name'].values
depth_blind = blind_data['Depth'].values
X_blind, padded_rows = augment_features(X_blind, well_blind, depth_blind, N_neig=1)
# Scaling
scl = preprocessing.MinMaxScaler().fit(X1org)
X_train = scl.transform(X1org)
X_blind = scl.transform(X_blind)
Y_train = np_utils.to_categorical(y1org, nb_classes)
# Method initialization
model = fDNN(in_dim, nb_classes)
# Training
model.fit(X_train, Y_train, nb_epoch=epoch, batch_size=bats, verbose=0)
# Predict
y_blind = model.predict_classes(X_blind, verbose=0)
y_blind = medfilt(y_blind, kernel_size=5)
Org_blind_data.ix[Org_blind_data["NM_M"]==1,"Facies"] = y_blind + 1 # return the original value (1-9)
blind_data = Org_blind_data[Org_blind_data["NM_M"]==2]
X_blind = blind_data.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1).values
well_blind = blind_data['Well Name'].values
depth_blind = blind_data['Depth'].values
X_blind, padded_rows = augment_features(X_blind, well_blind, depth_blind, N_neig=1)
# Scaling
scl = preprocessing.MinMaxScaler().fit(X2org)
X_train = scl.transform(X2org)
X_blind = scl.transform(X_blind)
Y_train = np_utils.to_categorical(y2org, nb_classes)
# Method initialization
model = fDNN(in_dim, nb_classes)
# Training
model.fit(X_train, Y_train, nb_epoch=epoch, batch_size=bats, verbose=0)
# Predict
y_blind = model.predict_classes(X_blind, verbose=0)
y_blind = medfilt(y_blind, kernel_size=5)
Org_blind_data.ix[Org_blind_data["NM_M"]==2,"Facies"] = y_blind + 1 # return the original value (1-9)
blind_data.to_csv("PA_Team_Submission_4.csv")
make_facies_log_plot(
Org_blind_data[Org_blind_data['Well Name'] == 'STUART'],
facies_colors)
make_facies_log_plot(
Org_blind_data[Org_blind_data['Well Name'] == 'CRAWFORD'],
facies_colors)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lib
Step2: Demo
Step3: hypernetx_to_graphistry_bipartite
Step4: hypernetx_to_graphistry_nodes
|
<ASSISTANT_TASK:>
Python Code:
# ! pip install hypernetx -q
# ! pip install graphistry -q
import pandas as pd
class HyperNetXG:
def __init__(self, graphistry):
self.graphistry = graphistry
def normalize_id(self, id):
t = type(id)
if t == float or t == int:
return '__id__' + str(id)
return str(id)
def hypernetx_to_graphistry_bipartite(self, h):
nodes_df = pd.concat(
[pd.DataFrame({
                'node': [self.normalize_id(x) for x in list(h.nodes)],  # use the method argument h, not the global H
'type': 'hypernode'}),
pd.DataFrame({
                'node': [self.normalize_id(x) for x in h.edges],
'type': 'hyperedge'})],
ignore_index=True,
sort=False)
edges_df = pd.concat(
[ pd.DataFrame({'src': [], 'dst': []}) ] +
[
pd.DataFrame({
'src': self.normalize_id(k),
'dst': [self.normalize_id(x) for x in list(es)]
})
                for k, es in h.incidence_dict.items()
], ignore_index=True, sort=False)
return self.graphistry.bind(
source='src',
destination='dst',
node='node').nodes(nodes_df).edges(edges_df)
def __hyperedge_to_graph(self, k, es):
lst = list(es)
edges_df = pd.concat([
pd.DataFrame({'src': [], 'dst': [], 'hyperedge': []})] + [
pd.DataFrame({
'src': self.normalize_id(lst[i]),
'dst': [self.normalize_id(x) for x in lst[i+1:]],
'hyperedge': self.normalize_id(k)})
for i in range(0, len(lst))
], ignore_index=True, sort=False)
return edges_df
def hypernetx_to_graphistry_nodes(self, h):
hg = self.hypernetx_to_graphistry_bipartite(h)
nodes_df = pd.DataFrame({
'node': [self.normalize_id(x) for x in list(h.nodes)],
'type': 'hypernode'})
edges_df = pd.concat(
[pd.DataFrame({'src': [], 'dst': [], 'hyperedge': []})] +
[
self.__hyperedge_to_graph(k, es)
for (k, es) in h.incidence_dict.items()
])
return self.graphistry.bind(
source='src',
destination='dst',
node='node').settings(url_params={'edgeCurvature': 0}).nodes(nodes_df).edges(edges_df)
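# Side note (illustrative, independent of Graphistry): the clique expansion in
# __hyperedge_to_graph turns one hyperedge into all pairs of its members, the
# same pairs itertools.combinations would generate:
import itertools
print(list(itertools.combinations(['JV', 'JU', 'CH'], 2)))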
import hypernetx as hnx
import graphistry
# To specify Graphistry account & server, use:
# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
# For more options, see https://github.com/graphistry/pygraphistry#configure
scenes = [
('FN', 'TH'),
('TH', 'JV'),
('BM', 'FN', 'JA'),
('JV', 'JU', 'CH', 'BM'),
('JU', 'CH', 'BR', 'CN', 'CC', 'JV', 'BM'),
('TH', 'GP'),
('GP', 'MP'),
('MA', 'GP')
]
H = hnx.Hypergraph(dict(enumerate(scenes)))
hg = HyperNetXG(graphistry)
g = hg.hypernetx_to_graphistry_bipartite(H)
g._nodes.sample(3)
g._edges.sample(3)
g.plot()
hg.hypernetx_to_graphistry_bipartite(H.dual())._edges.sample(3)
hg.hypernetx_to_graphistry_bipartite(H.dual()).plot()
g = hg.hypernetx_to_graphistry_nodes(H)
g._edges.sample(3)
hg.hypernetx_to_graphistry_nodes(H).plot()
hg.hypernetx_to_graphistry_nodes(H.dual()).plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lecture
Step3: 2. List Methods -- 100xp, status
Step5: 3. List Methods II -- 100xp, status
|
<ASSISTANT_TASK:>
Python Code:
"""
Instructions:
+ Use the upper() method on room and store the result in room_up.
  Use the dot notation.
+ Print out room and room_up. Did both change?
+ Print out the number of o's on the variable room by calling count()
  on room and passing the letter "o" as an input to the method.
  - We're talking about the variable room, not the word "room"!
"""
# string to experiment with: room
room = "poolhouse"
# Use upper() on room: room_up
room_up = room.upper()
# Print out room and room_up
print(room)
print( "\n" + room_up )
# Print out the number of o's in room
print("\n" + str( room.count("o") ) )
"""
Instructions:
+ Use the index() method to get the index of the element
  in areas that is equal to 20.0. Print out this index.
+ Call count() on areas to find out how many times 14.5
  appears in the list. Again, simply print out this number.
"""
# first let's look more about these methods
help(str.count)
print(2*"\n===================================================")
help(str.index)
# Create list areas
areas = [11.25, 18.0, 20.0, 10.75, 9.50]
# Print out the index of the element 20.0
print( "\nThe index of the element 20.0 is: " + str( areas.index( 20 ) ) )
# Print out how often 14.5 appears in areas
print("\nThe number of times 14.5 occurs is: " + str( areas.count( 14.5 ) ) )
"""
Instructions:
+ Use the append method twice to add the size of the
  poolhouse and the garage again:
  - 24.5 and 15.45, respectively.
  - Add them in order
+ Print out the areas.
+ Use the reverse() method to reverse the order of the
  elements in areas.
+ Print out the areas once more.
"""
# Let's look at the help on these methods
help( list.append )
print("=====================================================")
help( list.remove )
print("=====================================================")
help( list.reverse )
# Create list areas
areas = [11.25, 18.0, 20.0, 10.75, 9.50]
# Use append twice to add poolhouse and garage size
areas.append( 24.5 )
areas.append( 15.45 )
# Print out areas
print("\nThe new list contains two new items: " + str( areas ) )
# Reverse the orders of the elements in areas
areas.reverse()
# Print out areas
print("\nThe new list has been reversed: " + str( areas ) )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Starting logging light once every second
Step2: 3. Modifying the light
Step3: 4. Plot values over time
|
<ASSISTANT_TASK:>
Python Code:
from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
from pynq.lib import Pmod_ALS
# ALS sensor is on PMODB
my_als = Pmod_ALS(base.PMODB)
my_als.read()
my_als.start_log()
my_als.stop_log()
log = my_als.get_log()
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(range(len(log)), log, 'ro')
plt.title('ALS Sensor log')
plt.axis([0, len(log), min(log), max(log)])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Adding, Updating, and Removing elements from a set
Step2: Set Membership & Length
Step3: Set Intersection, Disjoint, Union, and Difference
Step4: Subsets and supersets
Step5: Symmetric difference
Step6: Printing, Copying, & Clearing a set
|
<ASSISTANT_TASK:>
Python Code:
# from a list
a = set([1,2,3,4])
a
# using curly braces
a = {1,2,3,4}
a
# using a tuple
a = set((1,2,3,4))
a
# start with and empty set and add elements to it
a = set()
a.add('hello')
a
# be careful in assigning a string as an element to a set. If assigned as below, it'll be broken up and only unique
# characters are added to the set
a = set('this is a string')
a
# this will do the same thing
a = set(('string'))
# if you want to assign a string as an element then do
a = set()
a.add('string')
a
# or
a = set(('string',))
a
# or
a = set(['string'])
a
# or
a = {'string'}
a
# this assignment will not work since we're one element of the set is a list which is a mutable object.
# A set's elements should be immutable
a = {"a", (1,2), [3,4]}
# uniquness of the elements of a set
a = {1,2,3,3,4,4,7,8,-1,3,4} # we contains only one copy of a repeated element
a
A = {1,2,3,4}
A.add('string')
A
# update a set
A.update([26, 12, 9, 14])
print A
# A.discard(x) will remove x from the set. If x is not in the set, nothing happens.
A = {'a', 'b', 'c', 'd', 'e'}
A.discard('b')
print "A after discarding the element 'b'", A
A.discard('f')
print "A after discarding the element 'f' (which does not belong to A)", A
# A.remove(x) works like A.discard(x), but if x is not in the set A, a KeyError exception is thrown
A = {'a', 'b', 'c', 'd', 'e'}
A.remove('b')
print "A after A.remove('b')", A
A.remove('f') # a KeyError is thrown
print "A after A.remove('f') ('f' does not belong to A)", A
A = {'1', '2', '3', 1, 2, 3}
print "Is '1' in A?", '1' in A
print "Is '4' not in A?", '4' not in A
print "Is 5 in A", 5 in A
print "A is %s elements long"%len(A)
# A.intersection(B) or A&B retrurns a set with the common elements between A and B.
A = {"a","b","c","d","e"}
B = {"c","d","e","f","g"}
print "The intersection of A and B is ", A.intersection(B)
print "The intersection of A and B using A&B notation is ", A&B
# get the common elements of more than 2 sets
A = set([1,2,3,4,5])
B = set([4,5,6,7,8])
C = set([4,5,1,8,9])
print "First way to get the intersection of 3 sets ", set.intersection(A, B, C)
print "Second wat to get the intersection of 3 sets ", A&B&C
# A.isdisjoint(B) returns True if the two sets have a null intersection, otherwise it returns False
A = {'a', 'b', 'c', 'd', 'e', 'f'}
B = {'c', 'd', 'e'}
C = {'g', 'h', 'i', 'j'}
print "A and B have no common elements? ", A.isdisjoint(B)
print "A and C have no common elements? ", A.isdisjoint(C)
# A.union(B) or A | B returns a set of the merged elements of A and B
# the same element in both sets is counted once of course
A = set([1,2,3])
B = set([2,3,4])
print "The merging of A and B is ", A.union(B)
print "The merging of A and B using another notation is ", A | B
# A.difference(B) or A-B returns a set of the elements of A that are not in B.
# This can be chained to include many sets
A = {'a', 'b', 'c', 'd', 'e'}
B = {'c', 'd', 'g', 'h'}
C = {'d', 'b', 'f'}
print "The elements that are in A but not in B", A.difference(B)
print "The elemets that are in A but not in B or C", A.difference(B).difference(C)
print "The elements that are in A but not in B", A-B
print "The elemets that are in A but not in B or C", A-B-C
# A.difference_update(B) or A = A - B finds the difference between A and B and assigns it back to A.
# In other words, it removes the elements from A that are in B and assigns the result to A.
A = {'a', 'b', 'c', 'd', 'e'}
B = {'c', 'd', 'g', 'h'}
A.difference_update(B)
print "A after A.difference_update(B)", A
A = {'a', 'b', 'c', 'd', 'e'}
B = {'c', 'd', 'g', 'h'}
A = A - B
print "A after A=A-B", A
A = {'a', 'b', 'c', 'd', 'e', 'f'}
B = {'c', 'd', 'e'}
print "Is A a subset of B? ", A.issubset(B)
print "Is A a superset of B? ", A.issuperset(B)
print "Is B a subset of A? ", B.issubset(A)
print "Is B a superset of A? ", B.issuperset(A)
# using the other notation
print "Is A a subset of B? ", A <= B
print "Is A a superset of B? ", A >= B
print "Is B a subset of A? ", B <= A
print "Is B a superset of A? ", B >= A
print "Is B a superset of B? ", B >= B
print "Is B a subset of B? ", B <= B
print "Is A a proper subset of B? ", A < B
print "Is B a proper subset of A? ", B < A
print "Is A a proper superset of B? ", A > B
print "Is B a proper superset of A? ", B > A
print "Is B a proper subset of B? ", B < B
print "Is B a proper superset of B? ", B > B
# pop() removes and returns an arbitrary set element. The method raises a KeyError if the set is empty
x = {"a","b","c","d","e"}
x.pop()
x.pop()
A = set([1, 2, 3])
B = set([3, 4, 5])
print "Symmetric diffrence between A and B ", A.symmetric_difference(B)
print "Symmetric difference between A and B using A ^ B is ", A ^ B
A.symmetric_difference_update(B) # this assigns the results of symmetric_difference to A
print "After symmetric_difference_update A is ", A
# print the elements of the set
A = set("James")
for n in A:
print n,
# print the elements of A on the same line
A = {'string', 1,2,3,4,5,3,'a',(1,2,3)}
for n in A:
print n,
# print the elements of A each on it's own line
for n in A:
print n
# A.clear() clear the elemets of A
A = {'string', 1,2,3,4,5,3,'a',(1,2,3)}
print "A before A.clear()", A
A.clear()
print "A after A.clear()", A
# A.copy() copies the contents of A into another set B so clearing A wont affect the contents of B
A = {1,2,3,3,4,4,7,8,-1,3,4} # the set keeps only one copy of each repeated element
B = A.copy()
print "A is ", A
print "B after B=A.copy() is", B
A.clear()
print "A after A.clear()", A
print "B after A.clear()", B
# B = A makes a "pointer" from A to B so modifying A will also modify B
A = {1,2,3,3,4,4,7,8,-1,3,4} # the set keeps only one copy of each repeated element
B = A
print "A is ", A
print "B after B=A.copy() is", B
A.clear()
print "A after A.clear()", A
print "B after A.clear()", B
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id='dataReading'></a>
Step2: Simple positional match using ra/dec
Step3: apply standard cuts as in old catalog
Step4: now match to Gaia DR2...
Step5: Select good matches and compare both catalogs to Gaia DR2
Step6: Final Figures for the Paper
Step7: Final Gmag-based recalibration
Step8: Recalibrate Dec residuals
Step9: Now save correction arrays, then apply to original file, and then test
Step10: paper plot showing the jump in Gaia Gmag
Step11: TEMP code for color corrections to go from 3.1 to 3.2 and 3.3
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from astropy.table import Table
from astropy.coordinates import SkyCoord
from astropy import units as u
from astropy.table import hstack
import matplotlib.pyplot as plt
import numpy as np
from astroML.plotting import hist
# for astroML installation see https://www.astroml.org/user_guide/installation.html
## automatically reload any modules read below that might have changed (e.g. plots)
%load_ext autoreload
%autoreload 2
# importing ZI and KT tools:
import ZItools as zit
import KTtools as ktt
ZIdataDir = "/Users/ivezic/Work/Science/CalibrationV2/SDSS_SSC/Data"
# the original SDSS catalog from 2007
sdssOldCat = ZIdataDir + "/" + "stripe82calibStars_v2.6.dat"
# INPUT: Karun's new catalog from 2020
sdssNewCatIn = ZIdataDir + "/" + "N2020_stripe82calibStars.dat"
readFormat = 'csv'
# OUTPUT: with Gmag-based gray corrections
sdssNewCatOut = ZIdataDir + "/" + "stripe82calibStars_v3.1.dat"
# Gaia DR2
GaiaDR2Cat = ZIdataDir + "/" + "Stripe82_GaiaDR2.dat"
# Gaia DR2 with BP and RP data
GaiaDR2CatBR = ZIdataDir + "/" + "Stripe82_GaiaDR2_BPRP.dat"
# both new and old files use identical data structure
colnamesSDSS = ['calib_fla', 'ra', 'dec', 'raRMS', 'decRMS', 'nEpochs', 'AR_val',
'u_Nobs', 'u_mMed', 'u_mMean', 'u_mErr', 'u_rms_scatt', 'u_chi2',
'g_Nobs', 'g_mMed', 'g_mMean', 'g_mErr', 'g_rms_scatt', 'g_chi2',
'r_Nobs', 'r_mMed', 'r_mMean', 'r_mErr', 'r_rms_scatt', 'r_chi2',
'i_Nobs', 'i_mMed', 'i_mMean', 'i_mErr', 'i_rms_scatt', 'i_chi2',
'z_Nobs', 'z_mMed', 'z_mMean', 'z_mErr', 'z_rms_scatt', 'z_chi2']
%%time
# old
sdssOld = Table.read(sdssOldCat, format='ascii', names=colnamesSDSS)
np.size(sdssOld)
%%time
# new
sdssNew = Table.read(sdssNewCatIn, format=readFormat, names=colnamesSDSS)
np.size(sdssNew)
sdssOld_coords = SkyCoord(ra = sdssOld['ra']*u.degree, dec= sdssOld['dec']*u.degree)
sdssNew_coords = SkyCoord(ra = sdssNew['ra']*u.degree, dec= sdssNew['dec']*u.degree)
# this is matching sdssNew to sdssOld, so that indices are into sdssNew catalog
# makes sense in this case since the sdssOld catalog is (a little bit) bigger
# than sdssNew (1006849 vs 1005470)
idx, d2d, d3d = sdssNew_coords.match_to_catalog_sky(sdssOld_coords)
# object separation is an object with units,
# I add that as a column so that one can
# select based on separation to the nearest matching object
new_old = hstack([sdssNew, sdssOld[idx]], table_names = ['new', 'old'])
new_old['sep_2d_arcsec'] = d2d.arcsec
# good matches between the old and new catalogs
MAX_DISTANCE_ARCSEC = 0.5
sdss = new_old[(new_old['sep_2d_arcsec'] < MAX_DISTANCE_ARCSEC)]
print(np.size(sdss))
mOK3 = sdss[sdss['ra_new']<1]
mOK3 = zit.selectCatalog(sdss, mOK3)
print(996147/1006849)
print(993774/1006849)
print(991472/1006849)
colnamesGaia = ['ra', 'dec', 'nObs', 'Gmag', 'flux', 'fluxErr', 'pmra', 'pmdec']
colnamesGaia = colnamesGaia + ['BPmag', 'BPeI', 'RPmag', 'RPeI', 'BRef']
gaia = Table.read(GaiaDR2CatBR, format='ascii', names=colnamesGaia)
gaia['raG'] = gaia['ra']
gaia['decG'] = gaia['dec']
gaia['GmagErr'] = gaia['fluxErr'] / gaia['flux']
gaia['BR'] = gaia['BPmag'] - gaia['RPmag']
gaia['GBP'] = gaia['Gmag'] - gaia['BPmag']
gaia['GRP'] = gaia['Gmag'] - gaia['RPmag']
sdss_coords = SkyCoord(ra = sdss['ra_old']*u.degree, dec= sdss['dec_old']*u.degree)
gaia_coords = SkyCoord(ra = gaia['raG']*u.degree, dec= gaia['decG']*u.degree)
# this is matching gaia to sdss, so that indices are into sdss catalog
# makes sense in this case since the sdss catalog is bigger than gaia
idxG, d2dG, d3dG = gaia_coords.match_to_catalog_sky(sdss_coords)
# object separation is an object with units,
# I add that as a column so that one can
# select based on separation to the nearest matching object
gaia_sdss = hstack([gaia, sdss[idxG]], table_names = ['gaia', 'sdss'])
gaia_sdss['sepSG_2d_arcsec'] = d2dG.arcsec
### code for generating new quantities, such as dra, ddec, colors, differences in mags, etc
def derivedColumns(matches):
matches['dra'] = (matches['ra_new']-matches['ra_old'])*3600
matches['ddec'] = (matches['dec_new']-matches['dec_old'])*3600
matches['ra'] = matches['ra_old']
ra = matches['ra']
matches['raW'] = np.where(ra > 180, ra-360, ra)
matches['dec'] = matches['dec_old']
matches['u'] = matches['u_mMed_old']
matches['g'] = matches['g_mMed_old']
matches['r'] = matches['r_mMed_old']
matches['i'] = matches['i_mMed_old']
matches['z'] = matches['z_mMed_old']
matches['ug'] = matches['u_mMed_old'] - matches['g_mMed_old']
matches['gr'] = matches['g_mMed_old'] - matches['r_mMed_old']
matches['ri'] = matches['r_mMed_old'] - matches['i_mMed_old']
matches['gi'] = matches['g_mMed_old'] - matches['i_mMed_old']
matches['du'] = matches['u_mMed_old'] - matches['u_mMed_new']
matches['dg'] = matches['g_mMed_old'] - matches['g_mMed_new']
matches['dr'] = matches['r_mMed_old'] - matches['r_mMed_new']
matches['di'] = matches['i_mMed_old'] - matches['i_mMed_new']
matches['dz'] = matches['z_mMed_old'] - matches['z_mMed_new']
# Gaia
matches['draGold'] = -3600*(matches['ra_old'] - matches['raG'])
matches['draGnew'] = -3600*(matches['ra_new'] - matches['raG'])
matches['ddecGold'] = -3600*(matches['dec_old'] - matches['decG'])
matches['ddecGnew'] = -3600*(matches['dec_new'] - matches['decG'])
# photometric
matches['gGr_old'] = matches['Gmag'] - matches['r_mMed_old']
matches['gGr_new'] = matches['Gmag'] - matches['r_mMed_new']
matches['gRPr_new'] = matches['RPmag'] - matches['r_mMed_new']
return
derivedColumns(gaia_sdss)
# doGaiaAll(mOK)
def doGaiaGmagCorrection(d, Cstr, Gmax=20.0, yMax=0.03):
# Cstr = 'gGr_old' or 'gGr_new'
gi = d['gi']
Gr = d[Cstr]
Gmag = d['Gmag']
zit.qpBM(d, 'gi', -1, 4.5, Cstr, -2, 1.0, 56)
xBin, nPts, medianBin, sigGbin = zit.fitMedians(gi, Gr, -0.7, 4.0, 47, 0)
data = np.array([xBin, medianBin, sigGbin])
Ndata = xBin.size
### HERE WE ARE FITTING 7-th ORDER POLYNOMIAL TO Gmag-rSDSS vs. g-i ###
# get best-fit parameters
thetaCloc = zit.best_theta(data,7)
# generate best fit lines on a fine grid
xfit = np.linspace(-1.1, 4.3, 1000)
yfit = zit.polynomial_fit(thetaCloc, xfit)
## added "Poly" because switched to piecewise linear interpolation below
d['gGrFitPoly'] = zit.polynomial_fit(thetaCloc, gi)
d['dgGrPoly'] = d[Cstr] - d['gGrFitPoly']
### PIECEWISE LINEAR INTERPOLATION (AS FOR ALL OTHER COLORS AND SURVEYS)
d['gGrFit'] = np.interp(gi, xBin, medianBin)
d['dgGr'] = d[Cstr] - d['gGrFit']
# SELECT FOR RECALIBRATION wrt RA and Dec
giMin = 0.4
giMax = 3.0
Dc = d[(d['gi']>giMin)&(d['gi']<giMax)]
print('N before and after color cut:', np.size(d), np.size(Dc))
DcB = Dc[(Dc['Gmag']>14.5)&(Dc['Gmag']<Gmax)]
DcB['GrResid'] = DcB['dgGr'] - np.median(DcB['dgGr'])
zit.printStats(DcB['dgGr'])
DcBok = DcB[np.abs(DcB['dgGr'])<0.1]
print(np.size(DcB), np.size(DcBok))
zit.qpBM(DcBok, 'Gmag', 14.5, Gmax, 'GrResid', -1*yMax, yMax, 56)
zit.qpBM(DcBok, 'dec', -1.3, 1.3, 'GrResid', -1*yMax, yMax, 126)
zit.qpBM(DcBok, 'raW', -51.5, 60, 'GrResid', -1*yMax, yMax, 112)
return thetaCloc, DcBok
## first limit astrometric distance and
## require at least 4 epochs as in the old catalog
MAX_DISTANCE_ARCSEC = 0.5
m1 = gaia_sdss[(gaia_sdss['sepSG_2d_arcsec'] < MAX_DISTANCE_ARCSEC)]
a1 = m1['g_Nobs_new']
a2 = m1['r_Nobs_new']
a3 = m1['i_Nobs_new']
mOK = m1[(a1>3)&(a2>3)&(a3>3)]
print(len(new_old))
print(len(m1))
print(len(mOK))
def plotAstro2Ddiagrams(d):
### plots
plotNameRoot = 'astroVSpm_RA_pm'
plotName = plotNameRoot + '.png'
kw = {"Xstr":'pmra', "Xmin":-40, "Xmax":40, "Xlabel":'R.A. proper motion (mas/yr)', \
"Ystr":'draGnew', "Ymin":-0.5, "Ymax":0.5, "Ylabel":'raw SDSS R.A. - Gaia R.A. (arcsec)', \
"XminBin":-35, "XmaxBin":35, "nBin":70, \
"plotName":plotName, "Nsigma":0, "offset":-0.1, "symbSize":0.05}
kw["nBinX"] = 90
kw["nBinY"] = 40
kw["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(d, kw)
print('made plot', plotName)
# need to fit draGnew vs. pmra and correct for the mean trend, then plot vs. r mag
pmra = d['pmra']
draGnew = d['draGnew']
xBin, nPts, medianBin, sigGbin = zit.fitMedians(pmra, draGnew, -60, 60, 120, 0)
### PIECEWISE LINEAR INTERPOLATION
d['draGnewFit'] = np.interp(d['pmra'], xBin, medianBin)
draCorr = d['draGnew'] - d['draGnewFit']
draCorrOK = np.where(np.abs(draCorr) < 0.25, draCorr, 0)
d['draGnewCorr'] = draCorrOK
plotNameRoot = 'astroVSpm_RA_r'
plotName = plotNameRoot + '.png'
kw = {"Xstr":'r_mMed_new', "Xmin":14, "Xmax":21, "Xlabel":'SDSS r magnitude', \
"Ystr":'draGnewCorr', "Ymin":-0.12, "Ymax":0.12, "Ylabel":'corr. SDSS R.A. - Gaia R.A. (arcsec)', \
"XminBin":14, "XmaxBin":21, "nBin":30, \
"plotName":plotName, "Nsigma":0, "offset":0.050, "symbSize":0.05}
kw["nBinX"] = 30
kw["nBinY"] = 24
kw["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(d, kw)
print('made plot', plotName)
plotNameRoot = 'astroVSpm_Dec_pm'
plotName = plotNameRoot + '.png'
kw = {"Xstr":'pmdec', "Xmin":-40, "Xmax":40, "Xlabel":'Dec. proper motion (mas/yr)', \
"Ystr":'ddecGnew', "Ymin":-0.5, "Ymax":0.5, "Ylabel":'raw SDSS Dec. - Gaia Dec. (arcsec)', \
"XminBin":-35, "XmaxBin":35, "nBin":70, \
"plotName":plotName, "Nsigma":0, "offset":-0.1, "symbSize":0.05}
kw["nBinX"] = 90
kw["nBinY"] = 40
kw["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(d, kw)
print('made plot', plotName)
### produce astrometric plots showing correlation with proper motions
plotAstro2Ddiagrams(mOK)
# print(np.std(mOK['draGnew']), np.std(mOK['ddecGnew']))
#mOK
x = mOK['draGnewCorr']
xOK = x[np.abs(x)<0.25]
print(np.std(xOK), zit.sigG(xOK))
zit.qpBM(mOK, 'pmra', -50, 50, 'draGnew', -0.6, 0.6, 50)
zit.qpBM(mOK, 'pmdec', -50, 50, 'ddecGnew', -0.6, 0.6, 50)
theta, mOKc = doGaiaGmagCorrection(mOK, 'gGr_new')
thetaLoc = theta
## for zero point calibration, in addition to color cut in doGaiaAll, take 16 < G < 19.5
mOKcB = mOKc[(mOKc['Gmag']>16)&(mOKc['Gmag']<19.5)]
mOKcB['GrResid'] = mOKcB['dgGr'] - np.median(mOKcB['dgGr'])
mOKcBok = mOKcB[np.abs(mOKcB['dgGr'])<0.1]
print(np.size(mOKc), np.size(mOKcB), np.size(mOKcBok))
print(np.std(mOKcBok['GrResid']), zit.sigG(mOKcBok['GrResid']))
zit.qpBM(mOKcBok, 'dec', -1.3, 1.3, 'GrResid', -0.03, 0.03, 260)
zit.qpBM(mOKcBok, 'raW', -51.5, 60, 'GrResid', -0.03, 0.03, 112)
def plotGmag2Ddiagrams(d):
### plots
plotNameRoot = 'GrVSgi'
plotName = plotNameRoot + '.png'
kw = {"Xstr":'gi', "Xmin":0.0, "Xmax":3.5, "Xlabel":'SDSS g-i', \
"Ystr":'gGr_new', "Ymin":-1.25, "Ymax":0.25, "Ylabel":'Gaia Gmag - SDSS r', \
"XminBin":-0.5, "XmaxBin":4.0, "nBin":90, \
"plotName":plotName, "Nsigma":3, "offset":0.0, "symbSize":0.05}
kw["nBinX"] = 90
kw["nBinY"] = 40
kw["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(d, kw)
print('made plot', plotName)
def plotGmag2DdiagramsX(d, kw):
# Gaia G
print('-----------')
print(' stats for SDSS r binning medians:')
plotName = plotNameRoot + '_Gmag.png'
kwOC = {"Xstr":'Gmag', "Xmin":14.3, "Xmax":21.01, "Xlabel":'Gaia G (mag)', \
"Ystr":kw['Ystr'], "Ymin":-0.06, "Ymax":0.06, "Ylabel":Ylabel, \
"XminBin":14.5, "XmaxBin":21.0, "nBin":130, \
"plotName":plotName, "Nsigma":3, "offset":0.01, "symbSize":kw['symbSize']}
zit.plotdelMag(goodC, kwOC)
plotName = plotNameRoot + '_Gmag_Hess.png'
kwOC["plotName"] = plotName
kwOC["nBinX"] = 130
kwOC["nBinY"] = 50
kwOC["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(goodC, kwOC)
print('made plot', plotName)
print('------------------------------------------------------------------')
def plotGmagCorrections(d, kw):
### REDEFINE residuals to correspond to "SDSS-others", as other cases
d['redef'] = -1*d[kw['Ystr']]
kw['Ystr'] = 'redef'
goodC = d[np.abs(d['redef'])<0.1]
### plots
plotNameRoot = kw['plotNameRoot']
# RA
print(' stats for RA binning medians:')
plotName = plotNameRoot + '_RA.png'
Ylabel = 'residuals for (Gmag$_{SDSS}$ - Gmag$_{GaiaDR2}$) '
kwOC = {"Xstr":'raW', "Xmin":-52, "Xmax":60.5, "Xlabel":'R.A. (deg)', \
"Ystr":kw['Ystr'], "Ymin":-0.07, "Ymax":0.07, "Ylabel":Ylabel, \
"XminBin":-51.5, "XmaxBin":60, "nBin":112, \
"plotName":plotName, "Nsigma":3, "offset":0.01, "symbSize":kw['symbSize']}
zit.plotdelMag(goodC, kwOC)
plotName = plotNameRoot + '_RA_Hess.png'
kwOC["plotName"] = plotName
kwOC["nBinX"] = 112
kwOC["nBinY"] = 50
kwOC["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(goodC, kwOC)
print('made plot', plotName)
# Dec
print('-----------')
print(' stats for Dec binning medians:')
plotName = plotNameRoot + '_Dec.png'
kwOC = {"Xstr":'dec', "Xmin":-1.3, "Xmax":1.3, "Xlabel":'Declination (deg)', \
"Ystr":kw['Ystr'], "Ymin":-0.07, "Ymax":0.07, "Ylabel":Ylabel, \
"XminBin":-1.266, "XmaxBin":1.264, "nBin":252, \
"plotName":plotName, "Nsigma":3, "offset":0.01, "symbSize":kw['symbSize']}
zit.plotdelMag(goodC, kwOC)
plotName = plotNameRoot + '_Dec_Hess.png'
kwOC["plotName"] = plotName
kwOC["nBinX"] = 252
kwOC["nBinY"] = 50
kwOC["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(goodC, kwOC)
print('made plot', plotName)
# Gaia G
print('-----------')
print(' stats for SDSS r binning medians:')
plotName = plotNameRoot + '_Gmag.png'
kwOC = {"Xstr":'Gmag', "Xmin":14.3, "Xmax":21.01, "Xlabel":'Gaia G (mag)', \
"Ystr":kw['Ystr'], "Ymin":-0.06, "Ymax":0.06, "Ylabel":Ylabel, \
"XminBin":14.5, "XmaxBin":21.0, "nBin":130, \
"plotName":plotName, "Nsigma":3, "offset":0.01, "symbSize":kw['symbSize']}
zit.plotdelMag(goodC, kwOC)
plotName = plotNameRoot + '_Gmag_Hess.png'
kwOC["plotName"] = plotName
kwOC["nBinX"] = 130
kwOC["nBinY"] = 50
kwOC["cmap"] = 'plasma'
ktt.plotdelMagBW_KT(goodC, kwOC)
print('made plot', plotName)
print('------------------------------------------------------------------')
mOK['GrResid'] = mOK['dgGr'] - np.median(mOK['dgGr']) + 0.006
mOKok = mOK[np.abs(mOK['dgGr'])<0.1]
print(np.size(mOK), np.size(mOKok))
keywords = {"Ystr":'GrResid', "plotNameRoot":'GmagCorrection', "symbSize":0.05}
plotGmagCorrections(mOKok, keywords)
!cp GmagCorrection_Gmag_Hess.png GmagCorrectionTest_Gmag_Hess.png
mOKokX = mOKok[(mOKok['Gmag']>15)&(mOKok['Gmag']<15.5)]
print(np.median(mOKokX['GrResid']))
mOKokX = mOKok[(mOKok['Gmag']>16)&(mOKok['Gmag']<16.2)]
print(np.median(mOKokX['GrResid']))
keywords = {"Ystr":'GrResid', "plotNameRoot":'GmagCorrection', "symbSize":0.05}
plotGmagCorrections(mOKcBok, keywords)
# for calibration: giMin = 0.4 & giMax = 3.0
mOKB = mOK[(mOK['Gmag']>16)&(mOK['Gmag']<19.5)]
plotGmag2Ddiagrams(mOKB)
mOKB
RAbin, RAnPts, RAmedianBin, RAsigGbin = zit.fitMedians(mOKcBok['raW'], mOKcBok['GrResid'], -51.5, 60.0, 112, 1)
decOK = mOKcBok['dec_new']
GrResid = mOKcBok['GrResid']
fig,ax = plt.subplots(1,1,figsize=(8,6))
ax.scatter(decOK, GrResid, s=0.01, c='blue')
ax.set_xlim(-1.3,1.3)
ax.set_ylim(-0.06,0.06)
ax.set_ylim(-0.04,0.04)
ax.set_xlabel('Declination (deg)')
ax.set_ylabel('Gaia G - SDSS G')
xBin, nPts, medianBin, sigGbin = zit.fitMedians(decOK, GrResid, -1.266, 1.264, 252, 0)
ax.scatter(xBin, medianBin, s=30.0, c='black', alpha=0.9)
ax.scatter(xBin, medianBin, s=15.0, c='yellow', alpha=0.5)
TwoSigP = medianBin + 2*sigGbin
TwoSigM = medianBin - 2*sigGbin
ax.plot(xBin, TwoSigP, c='yellow')
ax.plot(xBin, TwoSigM, c='yellow')
xL = np.linspace(-100,100)
ax.plot(xL, 0*xL+0.00, c='yellow')
ax.plot(xL, 0*xL+0.01, c='red')
ax.plot(xL, 0*xL-0.01, c='red')
dCleft = -1.3
ax.plot(0*xL+dCleft, xL, c='red')
alltheta = []
for i in range(0,12):
decCol = -1.2655 + (i+1)*0.2109
ax.plot(0*xL+decCol, xL, c='red')
xR = xBin[(xBin>dCleft)&(xBin<decCol)]
yR = medianBin[(xBin>dCleft)&(xBin<decCol)]
dyR = sigGbin[(xBin>dCleft)&(xBin<decCol)]
data = np.array([xR, yR, dyR])
theta2 = zit.best_theta(data,5)
alltheta.append(theta2)
yfit = zit.polynomial_fit(theta2, xR)
ax.plot(xR, yfit, c='cyan', lw=2)
dCleft = decCol
rrr = yR - yfit
# print(i, np.median(rrr), np.std(rrr)) # 2 milimag scatter
# print(i, theta2)
plt.savefig('GmagDecCorrections.png')
# let's now correct all mags with this correction
thetaRecalib = alltheta
decLeft = -1.3
for i in range(0,12):
decRight = -1.2655 + (i+1)*0.2109
decArr = np.linspace(decLeft, decRight, 100)
thetaBin = thetaRecalib[i]
ZPfit = zit.polynomial_fit(thetaBin, decArr)
if (i==0):
decCorrGrid = decArr
ZPcorr = ZPfit
else:
decCorrGrid = np.concatenate([decCorrGrid, decArr])
ZPcorr = np.concatenate([ZPcorr, ZPfit])
decLeft = decRight
mOKtest = mOK[mOK['r_Nobs_new']>3]
# Dec correction
decGrid2correct = mOKtest['dec_new']
ZPcorrectionsDec = np.interp(decGrid2correct, decCorrGrid, ZPcorr)
# RA correction
raWGrid2correct = mOKtest['raW']
ZPcorrectionsRA = np.interp(raWGrid2correct, RAbin, RAmedianBin)
print(np.std(ZPcorrectionsDec), np.std(ZPcorrectionsRA))
fig,ax = plt.subplots(1,1,figsize=(8,6))
ax.scatter(decGrid2correct, ZPcorrectionsDec, s=0.01, c='blue')
ax.plot(decCorrGrid, ZPcorr, c='red')
ax.set_xlim(-1.3,1.3)
ax.set_ylim(-0.02,0.02)
ax.set_xlabel('Declination (deg)')
ax.set_ylabel('Correction')
fig,ax = plt.subplots(1,1,figsize=(8,6))
ax.scatter(raWGrid2correct, ZPcorrectionsRA, s=0.01, c='blue')
ax.plot(RAbin, RAmedianBin, c='red')
ax.set_xlim(-52,61)
ax.set_ylim(-0.05,0.05)
ax.set_xlabel('RA (deg)')
ax.set_ylabel('Correction')
np.min(ZPcorrectionsRA)
mOKtest['ZPcorrectionsRA'] = ZPcorrectionsRA
mOKtest['ZPcorrectionsDec'] = ZPcorrectionsDec
mOKtest['r_mMed_new'] = mOKtest['r_mMed_new'] + mOKtest['ZPcorrectionsRA'] + mOKtest['ZPcorrectionsDec']
mOKtest['gGr_new'] = mOKtest['Gmag'] - mOKtest['r_mMed_new']
mOKtest['gGrFit'] = zit.polynomial_fit(thetaCloc, mOKtest['gi'])
mOKtest['dgGr'] = mOKtest['gGr_new'] - mOKtest['gGrFit']
d = mOKtest
gi = d['gi']
Gr = d['gGr_new']
Gmag = d['Gmag']
zit.qpBM(d, 'gi', -1, 4.5, 'gGr_new', -2, 1.0, 56)
thetaCtest, DcBokTest_new = doGaiaGmagCorrection(mOKtest, 'gGr_new')
keywords = {"Ystr":'gGr_new', "plotNameRoot":'GmagCorrectionTest', "symbSize":0.05}
mOKtest2 = mOKtest[(mOKtest['gi']>0.4)&(mOKtest['gi']<3.0)]
x = mOKtest2[(mOKtest2['Gmag']>14.5)&(mOKtest2['Gmag']<15.5)]
mOKtest2['gGr_new'] = mOKtest2['gGr_new'] - np.median(x['gGr_new'])
plotGmagCorrections(mOKtest2, keywords)
# final refers to the July 2020 analysis, before the paper submission
#np.savetxt('ZPcorrectionsRA_v3.1_final.dat', (RAbin, RAmedianBin))
#np.savetxt('ZPcorrectionsDec_v3.1_final.dat', (decCorrGrid, ZPcorr))
sdssOut = sdss[sdss['ra_new']<1]
sdssOut = zit.selectCatalog(sdss, sdssOut)
sdssOut.sort('calib_fla_new')
# read back gray zero point recalibration files
zpRAgrid, zpRA = np.loadtxt('ZPcorrectionsRA_v3.1_final.dat')
zpDecgrid, zpDec = np.loadtxt('ZPcorrectionsDec_v3.1_final.dat')
sdssOut
# Dec correction
decGrid2correct = sdssOut['dec_new']
ZPcorrectionsDec = np.interp(decGrid2correct, zpDecgrid, zpDec)
# RA correction
ra = sdssOut['ra_new']
raWGrid2correct = np.where(ra > 180, ra-360, ra)  # wrap RA > 180 deg to negative values so the Stripe 82 footprint is contiguous
ZPcorrectionsRA = np.interp(raWGrid2correct, zpRAgrid, zpRA)
print('gray std RA/Dec:', np.std(ZPcorrectionsRA), np.std(ZPcorrectionsDec))
for b in ('u', 'g', 'r', 'i', 'z'):
for mtype in ('_mMed_new', '_mMean_new'):
mstr = b + mtype
# applying here gray corrections
sdssOut[mstr] = sdssOut[mstr] + ZPcorrectionsRA + ZPcorrectionsDec
SSCindexRoot = 'CALIBSTARS_'
outFile = ZIdataDir + "/" + "stripe82calibStars_v3.1_noheader_final.dat"
newSSC = open(outFile,'w')
df = sdssOut
Ngood = 0
for i in range(0, np.size(df)):
Ngood += 1
NoldCat = df['calib_fla_new'][i]
strNo = f'{Ngood:07}'
SSCindex = SSCindexRoot + strNo
SSCrow = zit.getSSCentry(df, i)
zit.SSCentryToOutFileRow(SSCrow, SSCindex, newSSC)
newSSC.close()
print(Ngood, 'rows in file', outFile)
np.size(zpDec)
### need to figure out where ZPcorrections2_rz_Dec.dat etc. were produced ...
## color corrections
for mtype in ('_mMed', '_mMean'):
## u band from u-r color
color = 'ur'
zpcFilename = 'ZPcorrections_' + color + '_RA.dat'
zpcRAgrid, zpcRA = np.loadtxt(zpcFilename)
zpcFilename = 'ZPcorrections_' + color + '_Dec.dat'
zpcDecgrid, zpcDec = np.loadtxt(zpcFilename)
ZPcorrectionsRA = np.interp(raWGrid2correct, zpcRAgrid, zpcRA)
ZPcorrectionsDec = np.interp(decGrid2correct, zpcDecgrid, zpcDec)
print('u-r std RA/Dec:', np.std(ZPcorrectionsRA), np.std(ZPcorrectionsDec))
mstr = 'u' + mtype
sdssOut[mstr] = sdssOut[mstr] - ZPcorrectionsRA - ZPcorrectionsDec
## g band from g-r color
color = 'gr'
zpcFilename = 'ZPcorrections_' + color + '_RA.dat'
zpcRAgrid, zpcRA = np.loadtxt(zpcFilename)
zpcFilename = 'ZPcorrections_' + color + '_Dec.dat'
zpcDecgrid, zpcDec = np.loadtxt(zpcFilename)
ZPcorrectionsRA = np.interp(raWGrid2correct, zpcRAgrid, zpcRA)
ZPcorrectionsDec = np.interp(decGrid2correct, zpcDecgrid, zpcDec)
print('g-r std RA/Dec:', np.std(ZPcorrectionsRA), np.std(ZPcorrectionsDec))
mstr = 'g' + mtype
sdssOut[mstr] = sdssOut[mstr] - ZPcorrectionsRA - ZPcorrectionsDec
## i band from r-i color
color = 'ri'
zpcFilename = 'ZPcorrections_' + color + '_RA.dat'
zpcRAgrid, zpcRA = np.loadtxt(zpcFilename)
zpcFilename = 'ZPcorrections_' + color + '_Dec.dat'
zpcDecgrid, zpcDec = np.loadtxt(zpcFilename)
ZPcorrectionsRA = np.interp(raWGrid2correct, zpcRAgrid, zpcRA)
ZPcorrectionsDec = np.interp(decGrid2correct, zpcDecgrid, zpcDec)
mstr = 'i' + mtype
print('r-i std RA/Dec:', np.std(ZPcorrectionsRA), np.std(ZPcorrectionsDec))
sdssOut[mstr] = sdssOut[mstr] + ZPcorrectionsRA + ZPcorrectionsDec
## z band from r-z color
color = 'rz'
zpcFilename = 'ZPcorrections_' + color + '_RA.dat'
zpcRAgrid, zpcRA = np.loadtxt(zpcFilename)
zpcFilename = 'ZPcorrections_' + color + '_Dec.dat'
zpcDecgrid, zpcDec = np.loadtxt(zpcFilename)
ZPcorrectionsRA = np.interp(raWGrid2correct, zpcRAgrid, zpcRA)
ZPcorrectionsDec = np.interp(decGrid2correct, zpcDecgrid, zpcDec)
mstr = 'z' + mtype
print('r-z std RA/Dec:', np.std(ZPcorrectionsRA), np.std(ZPcorrectionsDec))
sdssOut[mstr] = sdssOut[mstr] + ZPcorrectionsRA + ZPcorrectionsDec
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fetching the MNIST dataset
Step2: It's 70000 examples of handwritten digits of size 28x28 pixels, labeled from 0 to 9.
Step3: Pick the first 15 images for visualization
Step4: Creating a support vector classifier
Step5: Apply learning on the first half of the digits
Step6: Predictions
Step7: Scikit-learn's SVM doesn't seem to be the fastest implementation; we need to check the parameters.
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
%matplotlib notebook
from sklearn import datasets, svm, metrics, utils
from sklearn.datasets import fetch_mldata
# Note: fetch_mldata relies on the now-defunct mldata.org and is removed in
# recent scikit-learn; fetch_openml('mnist_784') is the modern replacement.
mnist = fetch_mldata('MNIST original', data_home='./data')
mnist.data, mnist.target = utils.shuffle(mnist.data, mnist.target)
n_samples = len(mnist.data)
fig = plt.figure()
for i in range(15):
img = mnist.data[i].reshape(28, 28)
ax = fig.add_subplot(3, 5, i+1)
ax.axis('off')
ax.imshow(img, cmap=plt.cm.gray, interpolation='nearest')
ax.set_title('# {}'.format(i))
# gamma = 0.001, which was in the example for 8x8 images, seems to heavily
# overfit 28x28 MNIST data with N=10000 samples, and then predicts only one class
#classifier = svm.SVC(gamma=0.001)
# TODO: There must be a bug!
classifier = svm.SVC()
N = n_samples//2
N = 10000
classifier.fit(mnist.data[:N], mnist.target[:N])
expected = mnist.target[N:2*N]
predicted = classifier.predict(mnist.data[N:2*N])
print("Classification report for classifier %s:\n%s\n"
% (classifier, metrics.classification_report(expected, predicted)))
print("Confusion matrix:\n%s" % metrics.confusion_matrix(expected, predicted))
expected, predicted
np.unique(predicted)
expected = mnist.target[:N]
predicted = classifier.predict(mnist.data[:N])
mnist.data[N:2*N].shape
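# The TODO above flags that SVC behaves poorly here. A likely culprit (an
# assumption, not a verified fix for this exact run) is the unscaled 0-255
# pixel values, which interact badly with the RBF kernel's default gamma.
# A minimal sketch: scale the pixels to [0, 1] and try the much faster LinearSVC.
from sklearn.svm import LinearSVC
X_scaled = mnist.data / 255.0
fast_clf = LinearSVC()
fast_clf.fit(X_scaled[:N], mnist.target[:N])
predicted_fast = fast_clf.predict(X_scaled[N:2*N])
print(metrics.classification_report(mnist.target[N:2*N], predicted_fast))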
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step7: Class EF
Step8: Now define properties so that the individual components can be accessed like named attributes.
Step14: Class MemberLoad
Step15: Load Type PL
Step16: Load Type PLA
Step17: Load Type UDL
Step19: Load Type LVL
Step20: Load Type CM
Step21: makeMemberLoad() factory function
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import sys
from salib import extend
class EF(object):
    """Class EF represents the 6 end forces acting on a 2-D, planar, beam element."""
def __init__(self,c0=0.,v1=0.,m2=0.,c3=0.,v4=0.,m5=0.):
        """Initialize an instance with the 6 end forces. If the first
        argument is a 6-element array, initialize from a copy of that
        array and ignore any other arguments."""
if np.isscalar(c0):
self.fefs = np.matrix([c0,v1,m2,c3,v4,m5],dtype=np.float64).T
else:
self.fefs = c0.copy()
def __getitem__(self,ix):
        """Retrieve one of the forces by number. This allows unpacking
        of all 6 end forces into 6 variables using something like:
        c0,v1,m2,c3,v4,m5 = self"""
return self.fefs[ix,0]
def __add__(self,other):
        """Add this set of end forces to another, returning the sum."""
assert type(self) is type(other)
new = self.__class__(self.fefs+other.fefs)
return new
def __sub__(self,other):
        """Subtract the other from this set of forces, returning the difference."""
assert type(self) is type(other)
new = self.__class__(self.fefs-other.fefs)
return new
def __mul__(self,scale):
        """Multiply this set of forces by the scalar value, returning the product."""
if scale == 1.0:
return self
return self.__class__(self.fefs*scale)
__rmul__ = __mul__
def __repr__(self):
return '{}({},{},{},{},{},{})'.format(self.__class__.__name__,*list(np.array(self.fefs.T)[0]))
##test:
f = EF(1,2,0,4,1,6)
f
##test:
g = f+f+f
g
##test:
f[1]
##test:
f[np.ix_([3,0,1])]
##test:
g[(3,0,1)]
##test:
f0,f1,f2,f3,f4,f5 = g
f3
##test:
g, g*5, 5*g
@extend
class EF:
@property
def c0(self):
return self.fefs[0,0]
@c0.setter
def c0(self,v):
self.fefs[0,0] = v
@property
def v1(self):
return self.fefs[1,0]
@v1.setter
def v1(self,v):
self.fefs[1,0] = v
@property
def m2(self):
return self.fefs[2,0]
@m2.setter
def m2(self,v):
self.fefs[2,0] = v
@property
def c3(self):
return self.fefs[3,0]
@c3.setter
def c3(self,v):
self.fefs[3,0] = v
@property
def v4(self):
return self.fefs[4,0]
@v4.setter
def v4(self,v):
self.fefs[4,0] = v
@property
def m5(self):
return self.fefs[5,0]
@m5.setter
def m5(self,v):
self.fefs[5,0] = v
##test:
f = EF(10.,11,12,13,15,15)
f, f.c0, f.v1, f.m2, f.c3, f.v4, f.m5
##test:
f.c0 *= 2
f.v1 *= 3
f.m2 *= 4
f.c3 *= 5
f.v4 *= 6
f.m5 *= 7
f
class MemberLoad(object):
TABLE_MAP = {} # map from load parameter names to column names in table
def fefs(self):
        """Return the complete set of 6 fixed end forces produced by the load."""
raise NotImplementedError()
def shear(self,x):
        """Return the shear force that is in equilibrium with that
        produced by the portion of the load to the left of the point at
        distance 'x'. 'x' may be a scalar or a 1-dimensional array
        of values."""
raise NotImplementedError()
def moment(self,x):
        """Return the bending moment that is in equilibrium with that
        produced by the portion of the load to the left of the point at
        distance 'x'. 'x' may be a scalar or a 1-dimensional array
        of values."""
raise NotImplementedError()
@extend
class MemberLoad:
@property
def vpts(self):
        """Return a descriptor of the points at which the shear force must
        be evaluated in order to draw a proper shear force diagram for this
        load. The descriptor is a 3-tuple of the form: (l,r,d) where 'l'
        is the leftmost point, 'r' is the rightmost point and 'd' is the
        degree of the curve between. One of 'r', 'l' may be None."""
raise NotImplementedError()
@property
def mpts(self):
        """Return a descriptor of the points at which the moment must be
        evaluated in order to draw a proper bending moment diagram for this
        load. The descriptor is a 3-tuple of the form: (l,r,d) where 'l'
        is the leftmost point, 'r' is the rightmost point and 'd' is the
        degree of the curve between. One of 'r', 'l' may be None."""
raise NotImplementedError()
class PL(MemberLoad):
TABLE_MAP = {'P':'W1','a':'A'}
def __init__(self,L,P,a):
self.L = L
self.P = P
self.a = a
def fefs(self):
P = self.P
L = self.L
a = self.a
b = L-a
m2 = -P*a*b*b/(L*L)
m5 = P*a*a*b/(L*L)
v1 = (m2 + m5 - P*b)/L
v4 = -(m2 + m5 + P*a)/L
return EF(0.,v1,m2,0.,v4,m5)
def shear(self,x):
return -self.P*(x>self.a)
def moment(self,x):
return self.P*(x-self.a)*(x>self.a)
def __repr__(self):
return '{}(L={},P={},a={})'.format(self.__class__.__name__,self.L,self.P,self.a)
##test:
p = PL(1000.,300.,400.)
p, p.fefs()
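##test:
# sanity check against the classic fixed-end result for a central point load,
# where the end moments are P*L/8 (here 8*10/8 = 10)
pc = PL(L=10., P=8., a=5.)
pc, pc.fefs()  # expect m2 = -10.0 and m5 = +10.0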
@extend
class MemberLoad:
EPSILON = 1.0E-6
@extend
class PL:
@property
def vpts(self):
return (self.a-self.EPSILON,self.a+self.EPSILON,0)
@property
def mpts(self):
return (self.a,None,1)
##test:
p = PL(1000.,300.,400.)
p.vpts
##test:
p.mpts
class PLA(MemberLoad):
TABLE_MAP = {'P':'W1','a':'A'}
def __init__(self,L,P,a):
self.L = L
self.P = P
self.a = a
def fefs(self):
P = self.P
L = self.L
a = self.a
c0 = -P*(L-a)/L
c3 = -P*a/L
return EF(c0=c0,c3=c3)
def shear(self,x):
return 0.
def moment(self,x):
return 0.
def __repr__(self):
return '{}(L={},P={},a={})'.format(self.__class__.__name__,self.L,self.P,self.a)
##test:
p = PLA(10.,P=100.,a=4.)
p.fefs()
@extend
class PLA:
@property
def vpts(self):
return (0.,self.L,0)
@property
def mpts(self):
return (0.,self.L,0)
class UDL(MemberLoad):
TABLE_MAP = {'w':'W1'}
def __init__(self,L,w):
self.L = L
self.w = w
def __repr__(self):
return '{}(L={},w={})'.format(self.__class__.__name__,self.L,self.w)
def fefs(self):
L = self.L
w = self.w
return EF(0.,-w*L/2., -w*L*L/12., 0., -w*L/2., w*L*L/12.)
def shear(self,x):
l = x*(x>0.)*(x<=self.L) + self.L*(x>self.L) # length of loaded portion
return -(l*self.w)
def moment(self,x):
l = x*(x>0.)*(x<=self.L) + self.L*(x>self.L) # length of loaded portion
d = (x-self.L)*(x>self.L) # distance from loaded portion to x: 0 if x <= L else x-L
return self.w*l*(l/2.+d)
@property
def vpts(self):
return (0.,self.L,1)
@property
def mpts(self):
return (0.,self.L,2)
##test:
w = UDL(12,10)
w,w.fefs()
class LVL(MemberLoad):
TABLE_MAP = {'w1':'W1','w2':'W2','a':'A','b':'B','c':'C'}
def __init__(self,L,w1,w2=None,a=None,b=None,c=None):
if a is not None and b is not None and c is not None and L != (a+b+c):
raise Exception('Cannot specify all of a, b & c')
if a is None:
if b is not None and c is not None:
a = L - (b+c)
else:
a = 0.
if c is None:
if b is not None:
c = L - (a+b)
else:
c = 0.
if b is None:
b = L - (a+c)
if w2 is None:
w2 = w1
self.L = L
self.w1 = w1
self.w2 = w2
self.a = a
self.b = b
self.c = c
def fefs(self):
        """This mess was generated via sympy. See:
        ../../examples/cive3203-notebooks/FEM-2-Partial-lvl.ipynb"""
L = float(self.L)
a = self.a
b = self.b
c = self.c
w1 = self.w1
w2 = self.w2
m2 = -b*(15*a*b**2*w1 + 5*a*b**2*w2 + 40*a*b*c*w1 + 20*a*b*c*w2 + 30*a*c**2*w1 + 30*a*c**2*w2 + 3*b**3*w1 + 2*b**3*w2 + 10*b**2*c*w1 + 10*b**2*c*w2 + 10*b*c**2*w1 + 20*b*c**2*w2)/(60.*(a + b + c)**2)
m5 = b*(20*a**2*b*w1 + 10*a**2*b*w2 + 30*a**2*c*w1 + 30*a**2*c*w2 + 10*a*b**2*w1 + 10*a*b**2*w2 + 20*a*b*c*w1 + 40*a*b*c*w2 + 2*b**3*w1 + 3*b**3*w2 + 5*b**2*c*w1 + 15*b**2*c*w2)/(60.*(a + b + c)**2)
v4 = -(b*w1*(a + b/2.) + b*(a + 2*b/3.)*(-w1 + w2)/2. + m2 + m5)/L
v1 = -b*(w1 + w2)/2. - v4
return EF(0.,v1,m2,0.,v4,m5)
def __repr__(self):
return '{}(L={},w1={},w2={},a={},b={},c={})'\
.format(self.__class__.__name__,self.L,self.w1,self.w2,self.a,self.b,self.c)
def shear(self,x):
c = (x>self.a+self.b) # 1 if x > A+B else 0
l = (x-self.a)*(x>self.a)*(1.-c) + self.b*c # length of load portion to the left of x
return -(self.w1 + (self.w2-self.w1)*(l/self.b)/2.)*l
def moment(self,x):
c = (x>self.a+self.b) # 1 if x > A+B else 0
# note: ~c doesn't work if x is scalar, thus we use 1-c
l = (x-self.a)*(x>self.a)*(1.-c) + self.b*c # length of load portion to the left of x
d = (x-(self.a+self.b))*c # distance from right end of load portion to x
return ((self.w1*(d+l/2.)) + (self.w2-self.w1)*(l/self.b)*(d+l/3.)/2.)*l
@property
def vpts(self):
return (self.a,self.a+self.b,1 if self.w1==self.w2 else 2)
@property
def mpts(self):
return (self.a,self.a+self.b,2 if self.w1==self.w2 else 3)
class CM(MemberLoad):
TABLE_MAP = {'M':'W1','a':'A'}
def __init__(self,L,M,a):
self.L = L
self.M = M
self.a = a
def fefs(self):
L = float(self.L)
A = self.a
B = L - A
M = self.M
m2 = B*(2.*A - B)*M/L**2
m5 = A*(2.*B - A)*M/L**2
v1 = (M + m2 + m5)/L
v4 = -v1
return EF(0,v1,m2,0,v4,m5)
def shear(self,x):
return x*0.
def moment(self,x):
        return -self.M*(x>self.a)  # note: __init__ stores the position as self.a
@property
def vpts(self):
return (None,None,0)
@property
def mpts(self):
        return (self.a-self.EPSILON,self.a+self.EPSILON,1)
def __repr__(self):
return '{}(L={},M={},a={})'.format(self.__class__.__name__,self.L,self.M,self.a)
def makeMemberLoad(L,data,ltype=None):
def all_subclasses(cls):
_all_subclasses = []
for subclass in cls.__subclasses__():
_all_subclasses.append(subclass)
_all_subclasses.extend(all_subclasses(subclass))
return _all_subclasses
if ltype is None:
ltype = data.get('TYPE',None)
for c in all_subclasses(MemberLoad):
if c.__name__ == ltype and hasattr(c,'TABLE_MAP'):
MAP = c.TABLE_MAP
argv = {k:data[MAP[k]] for k in MAP.keys()}
return c(L,**argv)
raise Exception('Invalid load type: {}'.format(ltype))
##test:
ml = makeMemberLoad(12,{'TYPE':'UDL', 'W1':10})
ml, ml.fefs()
def unmakeMemberLoad(load):
type = load.__class__.__name__
ans = {'TYPE':type}
for a,col in load.TABLE_MAP.items():
ans[col] = getattr(load,a)
return ans
##test:
unmakeMemberLoad(ml)
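##test:
# makeMemberLoad and unmakeMemberLoad should round-trip a load definition
ml2 = makeMemberLoad(12, unmakeMemberLoad(ml))
ml2, ml2.fefs()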
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Boston Housing Dataset
Step2: Fit A Linear Regression
Step3: View Intercept Term
Step4: View Coefficients
|
<ASSISTANT_TASK:>
Python Code:
# Load libraries
from sklearn.linear_model import LinearRegression
from sklearn.datasets import load_boston
import warnings
# Suppress Warning
warnings.filterwarnings(action="ignore", module="scipy", message="^internal gelsd")
# Load data
boston = load_boston()
X = boston.data
y = boston.target
# Create linear regression
regr = LinearRegression()
# Fit the linear regression
model = regr.fit(X, y)
# View the intercept
model.intercept_
# View the feature coefficients
model.coef_
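# A quick usage sketch: predict prices for the first three observations
# and compare them with the recorded targets
model.predict(X[:3]), y[:3]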
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: And how you will edit text files (it will often ask you to edit messages and other information, and thus wants to know how you like to edit your files)
Step2: Stage 0
Step3: Then paste the key into your Gitlab or github account
Step4: We are going to create a test repo for git to play
Step5: Let's look at what git did
Step6: The folder is empty now.
Step7: Now you can see that there is a hidden folder .git (notice the dot that marks it as a hidden folder), which is the GIT repo
Step8: Now let's edit our first file in the test directory with a text editor... I'm doing it programmatically here for automation purposes, but you'd normally be editing by hand
Step9: ls lists the contents of the current working directory
Step10: git add
Step11: We can now ask git about what happened with status
Step12: git commit
Step13: In the commit above, we used the -m flag to specify a message at the command line. If we don't do that, git will open the editor we specified in our configuration above and require that we enter a message. By default, git refuses to record changes that don't have a message to go along with them (though you can obviously 'cheat' by using an empty or meaningless string).
Step14: git diff
Step15: And now we can ask git what is different
Step16: The cycle of git virtue
Step17: git log revisited
Step18: Sometimes it's handy to see a very summarized version of the log
Step19: Git supports aliases
Step20: git mv and rm
Step21: Note that these changes must be committed too, to become permanent! In git's world, until something has been committed, it isn't permanently recorded anywhere.
Step22: And git rm works in a similar fashion removing the file from your repo.
Step23: We are now going to try two different routes of development
Step24: As you can see, there is no emojis file in master yet
Step25: Using remotes as a single user
Step26: Since the above cell didn't produce any output after the git remote -v call, it means we have no remote repositories configured. We will now proceed to do so. Once logged into GitHub or GitLab, go to the new repository page and make a repository called test. Do not check the box that says Initialize this repository with a README, since we already have an existing repository here. That option is useful when you're starting first at GitHub and don't have a repo made already on a local computer.
Step27: And now we go back to the master branch, where we change the same file
Step28: So now let's see what happens if we try to merge the trouble branch into master
Step29: Let's see what git has put into our file
|
<ASSISTANT_TASK:>
Python Code:
!git config --global user.name "joaquin" # replace "joaquin" with your name
!git config --global user.email "user@gmail.com" # replace "user@gmail.com" with your email
# Put here your preferred editor. If this is not set, git will honor the $EDITOR environment variable
# On Windows: Notepad works, Notepad++, sublime or atom
# On mac/linux: vim, sublime or atom as a basic option
!git config --global core.editor nvim # my lightweight neovim editor
# And while we're at it, we also turn on the use of color, which is very useful
!git config --global color.ui "auto"
#! ssh-keygen -f ~/.ssh/id_rsa -t rsa -N ''
#! less ~/.ssh/id_rsa.pub
!git
! mkdir playground_repo
cd playground_repo
ls -la
! git init
ls -la
ls -l .git
!echo "My first bit of text in the repo" > README.md
ls
!git add README.md
!git status
!git commit -a -m "First commit"
!git log
!echo "And now we add a second line..." >> README.md
!git diff
!git commit -a -m "added second line."
!git log
!git log --oneline --topo-order --graph
# We create our alias (this saves it in git's permanent configuration file):
!git config --global alias.slog "log --oneline --topo-order --graph"
# And now we can use it
!git slog
!git mv README.md README.markdown
!git status
!git commit -a -m "I like this new name better"
!echo "Let's look at the log again:"
!git slog
!git status
!ls
!git branch emojis
!git checkout emojis
!echo "Some emojis :smile:, :horse:, :cat:" > emojis.md
!git add emojis.md
!git commit -a -m "Adding some emojis"
!git slog
!git checkout master
!git slog
!ls
!echo "All the while, more work goes on in master..." >> README.markdown
!git commit -a -m "The mainline keeps moving"
!git slog
ls
!git merge emojis -m 'merge emojis'
!git slog
!git remote -v
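# A sketch of the remote setup described above (left commented out -- the URL
# is a placeholder, substitute your own username and the 'test' repository you
# just created on GitHub/GitLab):
# !git remote add origin git@github.com:YOUR_USERNAME/test.git
# !git remote -v
# !git push -u origin master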
!git branch trouble
!git checkout trouble
!echo "This is going to be a problem..." >> README.markdown
!git commit -a -m "Adding a file for trouble"
!git checkout master
!echo "At the same time master keeps working on same line will cause a MERGE CONFLICT ..." >> README.markdown
!git commit -a -m "Keep working on the experiment"
!git checkout master
!git merge trouble
#!git commit -a -m "Completed merge of trouble, fixing conflicts along the way"
#!git slog
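# The last description step asks what git put into our file; the conflict
# markers (<<<<<<<, =======, >>>>>>>) should now be visible:
!cat README.markdown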
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Config
Step2: Identifier for storing these features on disk and referring to them later.
Step3: Load Data
Step4: Build features
Step5: Visualize
Step6: Build final features
Step7: Save features
|
<ASSISTANT_TASK:>
Python Code:
from pygoose import *
from collections import defaultdict
import seaborn as sns
import nltk
nltk.download('stopwords')
project = kg.Project.discover()
feature_list_id = 'wm_intersect'
df_train = pd.read_csv(project.data_dir + 'train.csv').fillna('none')
df_test = pd.read_csv(project.data_dir + 'test.csv').fillna('none')
df_all_pairs = pd.concat([
df_train[['question1', 'question2']],
df_test[['question1', 'question2']]
], axis=0).reset_index(drop='index')
stops = set(nltk.corpus.stopwords.words('english'))
def word_match_share(pair):
q1 = str(pair[0]).lower().split()
q2 = str(pair[1]).lower().split()
q1words = {}
q2words = {}
for word in q1:
if word not in stops:
q1words[word] = 1
for word in q2:
if word not in stops:
q2words[word] = 1
if len(q1words) == 0 or len(q2words) == 0:
# The computer-generated chaff includes a few questions that are nothing but stopwords
return 0
shared_words_in_q1 = [w for w in q1words.keys() if w in q2words]
shared_words_in_q2 = [w for w in q2words.keys() if w in q1words]
R = (len(shared_words_in_q1) + len(shared_words_in_q2)) / (len(q1words) + len(q2words))
return R
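# A quick sanity check of the measure on a hypothetical question pair
# (stopwords such as "what" and "is" are ignored)
word_match_share(("What is machine learning?", "What exactly is machine learning?"))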
wms = kg.jobs.map_batch_parallel(
    df_all_pairs[['question1', 'question2']].values,  # .as_matrix() was removed in modern pandas
    item_mapper=word_match_share,
    batch_size=1000,
)
q_dict = defaultdict(dict)
for i in progressbar(range(len(wms))):
q_dict[df_all_pairs.question1[i]][df_all_pairs.question2[i]] = wms[i]
q_dict[df_all_pairs.question2[i]][df_all_pairs.question1[i]] = wms[i]
def q1_q2_intersect(row):
return len(set(q_dict[row['question1']]).intersection(set(q_dict[row['question2']])))
def q1_q2_wm_ratio(row):
q1 = q_dict[row['question1']]
q2 = q_dict[row['question2']]
inter_keys = set(q1.keys()).intersection(set(q2.keys()))
if len(inter_keys) == 0:
return 0
inter_wm = 0
total_wm = 0
for q, wm in q1.items():
if q in inter_keys:
inter_wm += wm
total_wm += wm
for q, wm in q2.items():
if q in inter_keys:
inter_wm += wm
total_wm += wm
if total_wm == 0:
return 0
return inter_wm / total_wm
df_train['q1_q2_wm_ratio'] = df_train.apply(q1_q2_wm_ratio, axis=1, raw=True)
df_test['q1_q2_wm_ratio'] = df_test.apply(q1_q2_wm_ratio, axis=1, raw=True)
df_train['q1_q2_intersect'] = df_train.apply(q1_q2_intersect, axis=1, raw=True)
df_test['q1_q2_intersect'] = df_test.apply(q1_q2_intersect, axis=1, raw=True)
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
intersect_counts = df_train.q1_q2_intersect.value_counts()
sns.barplot(intersect_counts.index[:20], intersect_counts.values[:20])
plt.subplot(1, 2, 2)
df_train['q1_q2_wm_ratio'].plot.hist()
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
sns.violinplot(x='is_duplicate', y='q1_q2_wm_ratio', data=df_train)
plt.subplot(1, 2, 2)
sns.violinplot(x='is_duplicate', y='q1_q2_intersect', data=df_train)
df_train.plot.scatter(x='q1_q2_intersect', y='q1_q2_wm_ratio', figsize=(12, 6))
print(df_train[['q1_q2_intersect', 'q1_q2_wm_ratio']].corr())
columns_to_keep = [
'q1_q2_intersect',
'q1_q2_wm_ratio',
]
X_train = df_train[columns_to_keep].values
X_test = df_test[columns_to_keep].values
feature_names = columns_to_keep
project.save_features(X_train, X_test, feature_names, feature_list_id)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run some basic tests
Step2: Test 2
Step3: 2.2 Wider Gaussian
Step4: Test 3
Step5: As expected, we see that the KL-divergence when comparing to random noise is much higher than for the Gaussian distributions, which are similar to but still different from our test distribution.
Step6: Reshape all the arrays into their two-dimensional postage stamp form. Then add an offset to them so they all have a minimum value just above 0. When doing the relative entropy calculation we also normalize the stamps.
Step7: This looks really promising! There is a clear separation between the peaks for False objects and True sources. There is an area of overlap in the results. To see how well we do when training a classifier to decide True/False from a postage stamp, we'll train a logistic regression next.
Step8: We'll use the LogisticRegression classifier from scikit-learn to learn a boundary on the relative entropy value.
Step9: What do the false positives look like?
Step10: Using "real" false positives
|
<ASSISTANT_TASK:>
Python Code:
central_source = np.zeros((21, 21))
central_source[10,10] = 1.
gaussian_test = ndimage.gaussian_filter(central_source, 1.4)
imshow(gaussian_test)
colorbar()
from scipy.special import rel_entr
kl_div_same_dist = np.sum(rel_entr(gaussian_test, gaussian_test))
print("The KL Divergence of the test Gaussian with itself is: {}".format(kl_div_same_dist))
off_center_source = np.zeros((21, 21))
off_center_source[8,10] = 1.
gaussian_off_cent = ndimage.gaussian_filter(off_center_source, 1.4)
plt.imshow(gaussian_off_cent)
# We add a very small offset from 0 to avoid issues in the KL-div calculation
kl_div_off_center_dist = np.nansum(rel_entr(gaussian_test+1e-12, gaussian_off_cent+1e-12))
print("The KL Divergence of the test Gaussian with the offset Gaussian is: {}".format(kl_div_off_center_dist))
gaussian_doublewide = ndimage.gaussian_filter(central_source, 2.8)
plt.imshow(gaussian_doublewide)
# We add a very small offset from 0 to avoid issues in the KL-div calculation
kl_div_doublewide_dist = np.nansum(rel_entr(gaussian_test+1e-12, gaussian_doublewide+1e-12))
print("The KL Divergence of the test Gaussian with the wider Gaussian is: {}".format(kl_div_doublewide_dist))
poisson_noise = np.random.RandomState(42).poisson(100, size=(21,21))
poisson_noise = poisson_noise/np.sum(poisson_noise)
plt.imshow(poisson_noise)
plt.colorbar()
# We add a very small offset from 0 to avoid issues in the KL-div calculation
kl_div_poisson_dist = np.nansum(rel_entr(gaussian_test+1e-12, poisson_noise+1e-12))
print("The KL Divergence of the test Gaussian with the Poisson noise is: {}".format(kl_div_poisson_dist))
full_false = np.load('/epyc/users/smotherh/kbmod_epyc/notebooks/normed_individual_false.npy')
keep_idx = []
idx = 0
for f_im in full_false:
if np.isnan(f_im[0][0]):
idx += 1
continue
else:
keep_idx.append(idx)
idx += 1
full_false = full_false[keep_idx]
full_true = np.load('/epyc/users/smotherh/kbmod_epyc/notebooks/normed_individual_simulated.npy')
imshow(full_false[0].reshape(21,21))
imshow(full_true[0].reshape(21,21))
false_values = []
for f_im in full_false:
false_stamp = f_im.reshape(21,21)
false_stamp -= np.min(false_stamp)
false_stamp += 1e-12
#false_kl = special.kl_div(gaussian_test, (false_stamp/np.sum(false_stamp)))
false_kl = special.rel_entr(gaussian_test, (false_stamp/np.sum(false_stamp)))
false_values.append(np.nansum(false_kl))
false_values = np.array(false_values)
true_values = []
for t_im in full_true:
true_stamp = t_im.reshape(21,21)
true_stamp -= np.min(true_stamp)
true_stamp += 1e-12
#true_kl = special.kl_div(gaussian_test, (true_stamp/np.sum(true_stamp)))
true_kl = special.rel_entr(gaussian_test, (true_stamp/np.sum(true_stamp)))
true_values.append(np.nansum(true_kl))
true_values = np.array(true_values)
len(full_false)
len(false_values), len(true_values)
fig = plt.figure(figsize=(10,6))
n, bins, _ = hist(false_values, alpha=0.4, label='False', bins=20, range=(0, 5), density=True)
hist(true_values, alpha=0.4, label='True', bins=bins, density=True)
legend()
xlabel('Relative Entropy')
ylabel('Density')
rand_state = np.random.RandomState(24)
false_selection = rand_state.choice(np.arange(len(false_values)), size=40000, replace=False)
train_false, test_false = false_values[false_selection[:30000]], false_values[false_selection[30000:]]
true_selection = rand_state.choice(np.arange(len(true_values)), size=40000, replace=False)
train_true, test_true = true_values[true_selection[:30000]], true_values[true_selection[30000:]]
train_data = np.array(train_false)
train_labels = np.zeros(len(train_false))
train_data = np.append(train_data, train_true)
train_labels = np.append(train_labels, np.ones(len(train_true)))
train_idx = np.arange(len(train_data))
rand_state.shuffle(train_idx)
train_data = train_data[train_idx]
train_labels = train_labels[train_idx]
test_data = np.array(test_false)
test_labels = np.zeros(len(test_false))
test_data = np.append(test_data, test_true)
test_labels = np.append(test_labels, np.ones(len(test_true)))
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(train_data.reshape(-1, 1), train_labels)
fig = plt.figure(figsize=(10,6))
plt.plot(np.linspace(0,5,100), classifier.predict_proba(np.linspace(0,5,100).reshape(-1,1))[:,0], lw=12)
plt.scatter(test_data, np.abs(1-test_labels), c='r', s=8, zorder=10)
plt.ylabel('False Object Probability')
plt.xlabel('KL-Divergence Value')
plt.xlim(0,5)
test_results = classifier.predict(test_data.reshape(-1,1))
test_prob = classifier.predict_proba(test_data.reshape(-1,1))
true_positive = 0
false_positive = 0
true_negative = 0
false_negative = 0
true_positive_idx = []
false_positive_idx = []
true_negative_idx = []
false_negative_idx = []
idx = 0
for test_label, test_result in zip(test_labels, test_results):
if test_label == 1 and test_result == 1:
true_positive += 1
true_positive_idx.append(idx)
elif test_label == 1 and test_result == 0:
false_negative += 1
false_negative_idx.append(idx)
elif test_label == 0 and test_result == 1:
false_positive += 1
false_positive_idx.append(idx)
elif test_label == 0 and test_result == 0:
true_negative += 1
true_negative_idx.append(idx)
else:
print('error')
idx += 1
matrix = [[true_positive, false_positive], [false_negative, true_negative]]
true_positive
matrix
fig = plt.figure(figsize=(10,10))
plt.imshow(matrix)
plt.text(-0.0,0,'True Positive:\n {}'.format(true_positive), horizontalalignment='center', verticalalignment='center', size=18, color='K')
plt.text(1.,1.,'True Negative:\n {}'.format(true_negative), horizontalalignment='center', verticalalignment='center', size=18, color='K')
plt.text(-0.0,1,'False Negative:\n {}'.format(false_negative), horizontalalignment='center', verticalalignment='center', size=18, color='w')
plt.text(1.,0.,'False Positive:\n {}'.format(false_positive), horizontalalignment='center', verticalalignment='center', size=18, color='w')
plt.colorbar()
fig = plt.figure(figsize=(14,14))
for i in range(36):
fig.add_subplot(6,6,i+1)
plt.imshow(full_false[false_selection[30000+false_positive_idx[i]]].reshape(21,21))
plt.title('{:.2f}'.format(test_prob[false_positive_idx[i]][0]))
plt.tight_layout()
plt.suptitle('False Positive')
plt.subplots_adjust(top=0.95)
fig = plt.figure(figsize=(14,14))
for i in range(36):
fig.add_subplot(6,6,i+1)
plt.imshow(full_false[false_selection[30000+true_negative_idx[i]]].reshape(21,21))
plt.title('{:.2f}'.format(test_prob[true_negative_idx[i]][0]))
plt.tight_layout()
plt.suptitle('True Negative')
plt.subplots_adjust(top=0.95)
fig = plt.figure(figsize=(14,14))
for i in range(36):
fig.add_subplot(6,6,i+1)
plt.imshow(full_true[true_selection[30000+false_negative_idx[i]-10000]].reshape(21,21))
plt.title('{:.2f}'.format(test_prob[false_negative_idx[i]][0]))
plt.tight_layout()
plt.suptitle('False Negative')
plt.subplots_adjust(top=0.95)
accuracy = (true_positive + true_negative) / (true_positive + true_negative + false_positive + false_negative)
print("Accuracy = {:.2f}%".format(100*accuracy))
real_fake = np.load('/epyc/users/smotherh/kbmod_epyc/notebooks/normed_individual_real_false_positives.npy')
real_true = np.load('/epyc/users/smotherh/kbmod_epyc/notebooks/normed_individual_real.npy')
real_true_values = []
for t_im in real_true:
true_stamp = t_im.reshape(21,21)
true_stamp -= np.min(true_stamp)
true_stamp += 1e-12
#true_kl = special.kl_div(gaussian_test, (true_stamp/np.sum(true_stamp)))
true_kl = special.rel_entr(gaussian_test, (true_stamp/np.sum(true_stamp)))
real_true_values.append(np.nansum(true_kl))
real_true_values = np.array(real_true_values)
%%time
real_fake_values = []
for t_im in real_fake:
fake_stamp = t_im.reshape(21,21)
fake_stamp -= np.min(fake_stamp)
fake_stamp += 1e-12
#true_kl = special.kl_div(gaussian_test, (true_stamp/np.sum(true_stamp)))
fake_kl = special.rel_entr(gaussian_test, (fake_stamp/np.sum(fake_stamp)))
real_fake_values.append(np.nansum(fake_kl))
real_fake_values = np.array(real_fake_values)
real_true_results = classifier.predict(real_true_values.reshape(-1,1))
real_true_prob = classifier.predict_proba(real_true_values.reshape(-1,1))
%%time
real_fake_results = classifier.predict(real_fake_values.reshape(-1,1))
real_fake_prob = classifier.predict_proba(real_fake_values.reshape(-1,1))
fig = figure(figsize=(10, 6))
hist(real_fake_prob[:,1])
xlabel('True Object Probability', size=16)
ylabel('Count', size=16)
# Fraction of false positives correctly classified at 0.5 level
(len(real_fake_results) - np.sum(real_fake_results))/len(real_fake_results)
# Fraction of True Objects correctly classified at 0.5 level
np.sum(real_true_results)/len(real_true_results)
real_true_results[:20]
fig = figure(figsize(12,12))
for i in range(len(real_true)):
fig.add_subplot(4,4,i+1)
imshow(real_true[i].reshape(21,21))
title('Prob. True: {:.2f}'.format(real_true_prob[i][1]))
plt.tight_layout()
plt.suptitle('True Sources')
plt.subplots_adjust(top=0.94)
fig = figure(figsize(16,16))
for i in range(49):
fig.add_subplot(7,7,i+1)
imshow(real_fake[i].reshape(21,21))
title('Prob. True: {:.2f}'.format(real_fake_prob[i][1]))
plt.tight_layout()
plt.suptitle('False positives')
plt.subplots_adjust(top=0.94)
misclass = np.where(real_fake_prob[:,1] > 0.95)
len(misclass[0])
fig = figure(figsize(16,16))
for i, idx in list(enumerate(misclass[0]))[:49]:
fig.add_subplot(7,7,i+1)
imshow(real_fake[idx].reshape(21,21))
title('Prob. True: {:.2f}'.format(real_fake_prob[idx][1]))
plt.tight_layout()
plt.suptitle('False positives with highest probability of true source')
plt.subplots_adjust(top=0.94)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Defining a function that computes coreness
Step2: Defining a function that computes PageRank
Step3: List of all country codes, to make iterating over the files easier.
Step4: List of all industry codes, to make iterating over the files easier.
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('..')
import socnet as sn
import numpy as np
from scipy.stats.stats import pearsonr
from scipy.optimize import minimize
def cost(c, A, mask):
    # Continuous core-periphery fit in the spirit of Borgatti & Everett:
    # correlate the off-diagonal entries of the ideal pattern C = c^T c with
    # the adjacency matrix A; returning the negative correlation means that
    # minimizing this cost maximizes the fit.
row = np.matrix(c)
C = np.multiply(row.transpose(), row)
correlation, _ = pearsonr(C[~mask].flat, A[~mask].flat)
return -correlation
def calculate_coreness(g):
length = g.number_of_nodes()
c = np.random.rand(length)
A = sn.build_matrix(g)
mask = np.eye(length, dtype=bool)
bounds = [(0, 1) for _ in range(length)]
result = minimize(cost, c, args=(A, mask), bounds=bounds)
norm = 0
for n in g.nodes:
g.nodes[n]['coreness'] = result['x'][n]
norm += g.nodes[n]['coreness'] ** 2
for n in g.nodes:
g.nodes[n]['coreness'] /= norm
def equals(a, b):
return abs(a - b) < 0.000000001
def calculate_pagerank(g):
length = g.number_of_nodes()
k = 10
scale = 0.8
residue = (1 - scale) / length
R = sn.build_matrix(g)
for n in g.nodes:
total = np.sum(R[n,])
if equals(total, 0):
R[n, n] = 1
else:
R[n,] /= total
R = scale * R + residue
Rt = R.transpose()
rank = 1 / length
r = np.full((length, 1), rank)
for _ in range(k):
r = Rt.dot(r)
for n in g.nodes:
g.nodes[n]['pagerank'] = r[n, 0]
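# A minimal usage sketch, assuming sn.build_matrix accepts any networkx-style
# graph with consecutive integer node labels and 'weight' edge attributes;
# the resulting ranks should sum to ~1.
import networkx as nx
tiny = nx.DiGraph()
tiny.add_weighted_edges_from([(0, 1, 1.0), (1, 2, 1.0), (2, 0, 1.0), (0, 2, 2.0)])
calculate_pagerank(tiny)
print(sum(tiny.nodes[n]['pagerank'] for n in tiny.nodes))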
country_codes = ['all-countries',
'AUS', 'AUT', 'BEL', 'BGR', 'BRA', 'CAN', 'CHN', 'CYP',
'CZE', 'DEU', 'DNK', 'ESP', 'EST', 'FIN', 'FRA', 'GBR',
'GRC', 'HUN', 'IDN', 'IND', 'IRL', 'ITA', 'JPN', 'KOR',
'LTU', 'LUX', 'LVA', 'MEX', 'MLT', 'NLD', 'POL', 'PRT',
'ROM', 'RUS', 'SVK', 'SVN', 'SWE', 'TUR', 'TWN', 'USA',
]
industry_codes = ['all-industries',
'c1', 'c2', 'c3', 'c4', 'c5', 'c6', 'c7',
'c8', 'c9', 'c10', 'c11', 'c12', 'c13', 'c14',
'c15', 'c16', 'c17', 'c18', 'c19', 'c20', 'c21',
'c22', 'c23', 'c24', 'c25', 'c26', 'c27', 'c28',
'c29', 'c30', 'c31', 'c32', 'c33', 'c34', 'c35',
]
file = open('in-out.txt', 'w')
for year in range(1995, 2007):
for code in industry_codes:
g = sn.load_graph('grafos/{}-{}.gml'.format(year, code))
for n in g.nodes:
ind = sum([g.edges[m, n]['weight'] for m in g.predecessors(n)])
oud = sum([g.edges[n, m]['weight'] for m in g.successors(n)])
file.write('{}\t{}\t{}\t{}\t{}\n'.format(year, code, g.nodes[n]['code'], ind, oud))
file.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We have a SAT problem to which we want to find solutions using a combination of Grover's algorithm and a SAT oracle. The SAT problem is specified in the DIMACS CNF format. We read one of the sample cnf files to load the problem.
Step2: In order to run an algorithm we need to create a configuration dictionary with the parameters for the algorithm and any other dependent objects it requires. So we first define a dictionary for the algorithm: we name 'Grover' as the algorithm, and since it has no further parameters we are done. Grover needs an oracle, so we configure one. Here we use the SAT oracle, which lets us solve a satisfiability problem by searching the solution space. We configure the oracle with the problem we loaded above. We then combine the dictionaries into the final single params dictionary that is passed to run_algorithm.
Step3: As seen above, a satisfying solution to the specified sample SAT problem is obtained, with the absolute values indicating the variable indices, and the signs the True/False assignments, similar to the DIMACS format.
|
<ASSISTANT_TASK:>
Python Code:
import pylab
from qiskit_aqua import run_algorithm
from qiskit_aqua.input import get_input_instance
from qiskit.tools.visualization import circuit_drawer, plot_histogram
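# The cell below expects 3sat3-5.cnf on disk. If it is missing, this writes a
# hypothetical 3-variable, 5-clause instance in DIMACS CNF format (the
# repository's actual file may differ):
import os
if not os.path.exists('3sat3-5.cnf'):
    with open('3sat3-5.cnf', 'w') as f:
        f.write('c hypothetical 3-SAT instance: 3 variables, 5 clauses\n'
                'p cnf 3 5\n'
                '-1 -2 -3 0\n'
                '1 -2 3 0\n'
                '1 2 -3 0\n'
                '1 -2 -3 0\n'
                '-1 2 3 0\n')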
with open('3sat3-5.cnf', 'r') as f:
sat_cnf = f.read()
print(sat_cnf)
algorithm_cfg = {
'name': 'Grover'
}
oracle_cfg = {
'name': 'SAT',
'cnf': sat_cnf
}
params = {
'problem': {'name': 'search', 'random_seed': 50},
'algorithm': algorithm_cfg,
'oracle': oracle_cfg,
'backend': {'name': 'qasm_simulator'}
}
result = run_algorithm(params)
print(result['result'])
pylab.rcParams['figure.figsize'] = (8, 4)
plot_histogram(result['measurements'])
circuit_drawer(result['circuit'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: State tomography of an entangled Bell-state
Step2: Visualization of the ideal state
Step3: We may visualize the final state using the plot_state function. By default this creates a city plot, which is a 2D-bar plot of real and imaginary matrix elements of a density matrix $\rho$. Here, we instead ask it to plot the Pauli state vector.
Step4: We can compare the ideal entangled state to a non-entangled mixed state $\frac{1}{2}(|00\rangle\langle00| +|11\rangle\langle11|)$
Step5: Building tomography measurement circuits
Step6: Testing experiments on a simulator
Step7: Extracting tomography data from results
Step8: Reconstructing state from count data
Step9: We can compare the reconstructed state to the target state vector. We use the Fidelity function, which for comparing a density matrix $\rho$ to a pure state $|\psi\rangle$ is given by $F = \sqrt{\langle \psi| \rho |\psi\rangle}$. This may be done by the function state_fidelity from the qiskit.tools.qi module.
Step10: Note that since our simulator is perfect, the output state should be exactly the Bell state, so we should obtain F = 1. Why is it not, in our case? Since we can never directly see the final state, we must obtain information about it via measurements. We would only obtain the true probabilities for the state in the limit of infinite measurement shots. Hence, we have statistical error in our reconstruction from imperfect information about the state itself. Try running with a different number of shots on the simulator and see how it affects the fidelity of the reconstruction.
Step11: You can monitor the status of your job, and ask for the result.
Step12: As before, we can extract the tomography data and reconstruct the measured density matrix for the 2-qubit Bell state prepared by our test circuit
Step13: Finally, we compare the fidelity of the experimentally measured state to the ideal state. Notice that the fidelity is lower than for the simulation. This is because in a real experiment, there is not only the statistical error in the reconstruction, but also experimental error in the device when implementing the gates in the preparation and measurement circuits, as well as error in the measurements themselves.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from time import sleep # used for polling jobs
# importing the QISKit
from qiskit import Aer, IBMQ
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
from qiskit import register, execute
# import tomography library
import qiskit.tools.qcvv.tomography as tomo
# useful additional packages
from qiskit.tools.visualization import plot_state, plot_histogram
from qiskit.tools.qi.qi import state_fidelity, concurrence, purity, outer
from qiskit.wrapper.jupyter import *
from qiskit.backends.ibmq import least_busy
# Load saved IBMQ accounts
IBMQ.load_accounts()
# Create a 2-qubit quantum register
qr = QuantumRegister(2)
cr = ClassicalRegister(2)
# quantum circuit to make an entangled Bell state
bell_circ = QuantumCircuit(qr, cr, name='bell')
bell_circ.h(qr[1])
bell_circ.cx(qr[1], qr[0])
backend = Aer.get_backend('statevector_simulator')
job = execute(bell_circ, backend=backend)
bell_psi = job.result().get_statevector(bell_circ)
bell_rho = outer(bell_psi) # construct the density matrix from the state vector
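# Sanity check (illustrative): the ideal state is (|00> + |11>)/sqrt(2),
# so the statevector amplitudes should be approximately [0.707, 0, 0, 0.707].
print(np.round(bell_psi, 3))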
# plot the state
plot_state(bell_rho,'paulivec')
rho_mixed = np.array([[1,0,0,0],[0,0,0,0],[0,0,0,0],[0,0,0,1]])/2
plot_state(rho_mixed, 'paulivec')
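# What the 'paulivec' plots above display (illustrative sketch, up to
# normalization conventions): the coefficients Tr(rho * (P_a kron P_b)) for
# P in {I, X, Y, Z}. The Bell state carries XX, YY and ZZ correlations,
# while the mixed state keeps only the ZZ correlation.
paulis = {'I': np.eye(2), 'X': np.array([[0, 1], [1, 0]]),
          'Y': np.array([[0, -1j], [1j, 0]]), 'Z': np.array([[1, 0], [0, -1]])}
for rho, tag in [(bell_rho, 'bell'), (rho_mixed, 'mixed')]:
    for na, pa in paulis.items():
        for nb, pb in paulis.items():
            c = np.real(np.trace(rho @ np.kron(pa, pb)))
            if abs(c) > 1e-6:
                print(tag, na + nb, round(c, 3))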
# Construct state tomography set for measurement of qubits [0, 1] in the Pauli basis
bell_tomo_set = tomo.state_tomography_set([0, 1])
# Add the state tomography measurement circuits to the Quantum Program
bell_tomo_circuits = tomo.create_tomography_circuits(bell_circ, qr, cr, bell_tomo_set)
print('Created State tomography circuits:')
for circ in bell_tomo_circuits:
print(circ.name)
# Use the local simulator
backend = Aer.get_backend('qasm_simulator')
# Take 5000 shots for each measurement basis
shots = 5000
# Run the simulation
bell_tomo_job = execute(bell_tomo_circuits, backend=backend, shots=shots)
bell_tomo_result = bell_tomo_job.result()
print(bell_tomo_result)
bell_tomo_data = tomo.tomography_data(bell_tomo_result, bell_circ.name, bell_tomo_set)
rho_fit = tomo.fit_tomography_data(bell_tomo_data)
# calculate fidelity, concurrence and purity of fitted state
F_fit = state_fidelity(rho_fit, bell_psi)
con = concurrence(rho_fit)
pur = purity(rho_fit)
# plot
plot_state(rho_fit, 'paulivec')
print('Fidelity =', F_fit)
print('concurrence = ', str(con))
print('purity = ', str(pur))
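# The same fidelity computed directly (illustrative, assuming `rho_fit` is a
# plain NumPy array): F = sqrt(<psi| rho |psi>).
F_manual = np.sqrt(np.real(np.conj(bell_psi) @ rho_fit @ bell_psi))
print('manual F =', F_manual)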
%%qiskit_job_status
# Register API token for online backends
# Use the IBM Quantum Experience (choose least busy device)
backend = least_busy(IBMQ.backends())
# Take 1000 shots for each measurement basis
# Note: reduce this number for larger number of qubits
shots = 1000
# set max credits to spend
max_credits = 8
# Run the experiment
job_on_device = execute(bell_tomo_circuits, backend=backend, shots=shots, max_credits=max_credits)
# Wait for the result and get it when ready
bell_tomo_result_device = job_on_device.result()
bell_tomo_data = tomo.tomography_data(bell_tomo_result_device, 'bell', bell_tomo_set)
# Reconstruct experimentally measured density matrix
rho_fit_real = tomo.fit_tomography_data(bell_tomo_data)
F_fit_real = state_fidelity(rho_fit_real, bell_psi)
plot_state(rho_fit_real, 'paulivec')
print('Fidelity with ideal state')
print('F =', F_fit_real)
# calculate concurrence and purity
con = concurrence(rho_fit_real)
pur = purity(rho_fit_real)
print('concurrence = ', str(con))
print('purity = ', str(pur))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data preprocessing
Step2: Encoding the words
Step3: Encoding the labels
Step4: Okay, a couple of issues here. We seem to have one review with zero length. And the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200 words, we'll pad with 0s; for reviews longer than 200 words, we'll truncate them to the first 200 words.
Step5: Turns out it's the final review that has zero length. But that might not always be the case, so let's make it more general.
Step6: Exercise
Step7: Training, Validation, Test
Step8: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step9: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Step10: Embedding
Step11: LSTM cell
Step12: RNN forward pass
Step13: Output
Step14: Validation accuracy
Step15: Batching
Step16: Training
Step17: Testing
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
with open('../sentiment-network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment-network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
from collections import Counter
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
len(non_zero_idx)
reviews_ints[-1]
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = np.array([labels[ii] for ii in non_zero_idx])
seq_len = 200
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_len]
features[:10,:100]
split_frac = 0.8
split_idx = int(len(features)*0.8)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
n_words = len(vocab_to_int) + 1 # Adding 1 because we use 0's for padding, dictionary started at 1
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
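    # Note: reusing the same wrapped cell object for every layer can break in
    # newer TF 1.x releases; for lstm_layers > 1, build a fresh cell per layer,
    # e.g. MultiRNNCell([make_cell() for _ in range(lstm_layers)]).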
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed,
initial_state=initial_state)
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
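# Quick usage check (illustrative): each batch should have shape
# (batch_size, seq_len) for x and (batch_size,) for y.
demo_x, demo_y = next(get_batches(train_x, train_y, batch_size))
print(demo_x.shape, demo_y.shape)  # e.g. (500, 200) (500,)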
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csiro-bom', 'sandbox-3', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
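# Example (illustrative only -- the actual entries depend on the model being
# documented): pick one or more of the valid choices listed above,
# e.g. DOC.set_value("Sea ice temperature")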
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will need the TensorFlow Constrained Optimization (TFCO) library.
Step2: Constrained Optimization Problem
Step3: Plotting Functions
Step4: Evaluation Metrics
Step5: Create Linear Model
Step6: Formulate Optimization Problem
Step7: Train Model
Step8: Summarize and Plot Results
Step9: Experimental Results
Step10: (2) Pairwise Equal Opportunity
Step11: Cross-group + Within-group Comparisons
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
!pip install git+https://github.com/google-research/tensorflow_constrained_optimization
import tensorflow_constrained_optimization as tfco
def create_dataset(num_queries, num_docs):
# Create a synthetic 2-dimensional training dataset with 1000 queries,
# with 1 positive document and 10 negative document each
# and with two protected groups for each document.
num_posdocs = 1
num_negdocs = num_docs - 1
dimension = 2
num_groups = 2
tot_pairs = num_queries * num_posdocs * num_negdocs
# Conditional distributions are Gaussian: Conditioned on the label and the
# protected group, the feature distribution is a Gaussian.
# Positive documents, Group 0
mu_10 = [1,0]
sigma_10 = np.array([[1, 0], [0, 1]])
# Positive documents, Group 1
mu_11 = [-1.5, 0.75]
sigma_11 = np.array([[1, 0], [0, 1]]) * 0.5
# Negative documents, Group 0
mu_00 = [-1,-1]
sigma_00 = np.array([[1, 0], [0, 1]])
# Negative documents, Group 1
mu_01 = [-2,-1]
sigma_01 = np.array([[1, 0], [0, 1]])
# Generate positive documents
posdocs_groups = (np.random.rand(num_queries, num_posdocs) <= 0.1) * 1
posdocs_mask = np.dstack([posdocs_groups] * dimension)
posdocs0 = np.random.multivariate_normal(mu_10, sigma_10,
size=(num_queries, num_posdocs))
posdocs1 = np.random.multivariate_normal(mu_11, sigma_11,
size=(num_queries, num_posdocs))
posdocs = (1 - posdocs_mask) * posdocs0 + posdocs_mask * posdocs1
# Generate negative documents
negdocs_groups = (np.random.rand(num_queries, num_negdocs) <= 0.1) * 1
negdocs_mask = np.dstack([negdocs_groups] * dimension)
negdocs0 = np.random.multivariate_normal(mu_00, sigma_00,
size=(num_queries, num_negdocs))
negdocs1 = np.random.multivariate_normal(mu_01, sigma_01,
size=(num_queries, num_negdocs))
negdocs = (1 - negdocs_mask) * negdocs0 + negdocs_mask * negdocs1
# Concatenate positive and negative documents for each query
# (along axis 1, where documents are arranged)
features = np.concatenate((posdocs, negdocs), axis=1)
# Concatenate the associated labels:
# (for each query, first num_posdocs documents are positive, remaining negative)
poslabels = np.tile([1], reps=(num_queries, num_posdocs))
neglabels = np.tile([-1], reps=(num_queries, num_negdocs))
labels = np.concatenate((poslabels, neglabels), axis=1)
# Concatenate the protected groups
groups = np.concatenate((posdocs_groups, negdocs_groups), axis=1)
dataset = {
'features': features,
'labels': labels,
'groups': groups,
'num_queries': num_queries,
'num_posdocs': num_posdocs,
'num_negdocs': num_negdocs,
'dimension': dimension,
'num_groups': num_groups,
'tot_pairs': tot_pairs
}
return dataset
def plot_data(dataset, ax=None):
# Plot data set.
features = dataset["features"]
labels = dataset["labels"]
groups = dataset["groups"]
# Create axes if not specified
if not ax:
_, ax = plt.subplots(1, 1, figsize=(4.0, 4.0))
ax.set_xlabel("Feature 0")
ax.set_ylabel("Feature 1")
# Plot positive points in group 0
data = features[(labels==1) & (groups==0), :]
ax.plot(data[:,0], data[:,1], 'bx', label="Pos, Group 0")
# Plot positive points in group 1
data = features[(labels==1) & (groups==1), :]
ax.plot(data[:,0], data[:,1], 'bo', label="Pos, Group 1")
# Plot negative points in group 0
data = features[(labels==-1) & (groups==0), :]
ax.plot(data[:,0], data[:,1], 'rx', label="Neg, Group 0")
# Plot negative points in group 1
data = features[(labels==-1) & (groups==1), :]
ax.plot(data[:,0], data[:,1], 'ro', label="Neg, Group 1")
ax.legend(loc = "upper right")
def plot_model(weights, ax, x_range, y_range, fmt):
# Plot model decision boundary.
ax.plot([x_range[0], x_range[1]],
[-x_range[0] * weights[0] / weights[1],
-x_range[1] * weights[0] / weights[1]],
fmt)
ax.set_ylim(y_range)
# Sample data set.
dataset = create_dataset(num_queries=500, num_docs=10)
plot_data(dataset)
def get_mask(dataset, pos_group, neg_group=None):
# Returns a boolean mask selecting positive-negative document pairs where
# the protected group for the positive document is pos_group and
# the protected group for the negative document (if specified) is neg_group.
groups = dataset['groups']
num_negdocs = dataset['num_negdocs']
# Repeat group membership positive docs as many times as negative docs.
mask_pos = groups[:,0] == pos_group
mask_pos_rep = np.repeat(mask_pos.reshape(-1,1), num_negdocs, axis=1)
if neg_group is None:
return mask_pos_rep
else:
mask_neg = groups[:,1:] == neg_group
return mask_pos_rep & mask_neg
def error_rate(model, dataset):
# Returns error rate of model on dataset.
features = dataset["features"]
num_negdocs = dataset['num_negdocs']
scores = np.matmul(features, model)
pos_scores_rep = np.repeat(scores[:, 0].reshape(-1,1), num_negdocs, axis=1)
neg_scores = scores[:, 1:]
diff = pos_scores_rep - neg_scores
return np.mean(diff.reshape((-1)) <= 0)
def group_error_rate(model, dataset, pos_group, neg_group=None):
# Returns error rate of model on data set, considering only document pairs,
# where the protected group for the positive document is pos_group, and the
# protected group for the negative document (if specified) is neg_group.
features = dataset['features']
num_negdocs = dataset['num_negdocs']
scores = np.matmul(features, model)
pos_scores_rep = np.repeat(scores[:, 0].reshape(-1,1), num_negdocs, axis=1)
neg_scores = scores[:, 1:]
mask = get_mask(dataset, pos_group, neg_group)
diff = pos_scores_rep - neg_scores
diff = diff * mask
return np.sum(diff.reshape((-1)) < 0) * 1.0 / np.sum(mask)
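# Quick sanity check (illustrative): evaluate a fixed linear model on the
# sampled dataset. group_error_rate restricts attention to pairs whose
# positive document belongs to the given protected group.
demo_model = np.array([1.0, 1.0])
print(error_rate(demo_model, dataset))
print(group_error_rate(demo_model, dataset, pos_group=0),
      group_error_rate(demo_model, dataset, pos_group=1))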
def group_tensors(predictions, dataset, pos_group, neg_group=None):
# Returns predictions and labels for document-pairs where the protected group
# for the positive document is pos_group, and the protected group for the
# negative document (if specified) is neg_group.
mask = get_mask(dataset, pos_group, neg_group)
mask = np.reshape(mask, (-1))
group_labels = lambda: tf.constant(np.ones(np.sum(mask)), dtype=tf.float32)
group_predictions = lambda: tf.boolean_mask(predictions(), mask)
return group_predictions, group_labels
def linear_model(dataset):
# Creates a linear ranking model, and returns a nullary function returning
# predictions on the dataset, and the model weights.
# Create variables containing the model parameters.
weights = tf.Variable(tf.ones(dataset["dimension"], dtype=tf.float32),
name="weights")
# Create a constant tensor containing the features.
features_tensor = tf.constant(dataset["features"], dtype=tf.float32)
# Create a nullary function that returns applies the linear model to the
# features and returns the tensor with the predictions.
def predictions():
predicted_scores = tf.tensordot(features_tensor, weights, axes=(2, 0))
# Compute ranking errors and flatten tensor.
pos_scores = tf.slice(predicted_scores, begin=[0,0],
size=[-1, dataset["num_posdocs"]])
neg_scores = tf.slice(predicted_scores, begin=[0, dataset["num_posdocs"]],
size=[-1,-1])
pos_scores_rep = tf.tile(pos_scores, multiples=(1, dataset["num_negdocs"]))
predictions_tensor = tf.reshape(pos_scores_rep - neg_scores, shape=[-1])
return predictions_tensor
return predictions, weights
def formulate_problem(dataset, constraint_groups=[], constraint_slack=None):
# Returns a RateMinimizationProblem object and the linear model weights.
#
# Formulates a constrained problem that optimizes the error rate for a linear
# model on the specified dataset, subject to pairwise fairness constraints
# specified by the constraint_groups and the constraint_slack.
#
# Args:
# dataset: Dataset dictionary returned by create_dataset()
# constraint_groups: List containing tuples of the form
# ((pos_group0, neg_group0), (pos_group1, neg_group1)), specifying the
# group memberships for the document pairs to compare in the constraints.
# constraint_slack: slackness "\epsilon" allowed in the constraints.
# Create linear model: we get back a nullary function returning the
# predictions on the dataset, and the model weights.
predictions, weights = linear_model(dataset)
# Create a nullary function returning a constant tensor with the labels.
labels = lambda: tf.constant(np.ones(dataset["tot_pairs"]), dtype=tf.float32)
# Context for the optimization objective.
context = tfco.rate_context(predictions, labels)
# Constraint set.
constraint_set = []
# Context for the constraints.
for ((pos_group0, neg_group0), (pos_group1, neg_group1)) in constraint_groups:
# Context for group 0.
group0_predictions, group0_labels = group_tensors(
predictions, dataset, pos_group0, neg_group0)
context_group0 = tfco.rate_context(group0_predictions, group0_labels)
# Context for group 1.
group1_predictions, group1_labels = group_tensors(
predictions, dataset, pos_group1, neg_group1)
context_group1 = tfco.rate_context(group1_predictions, group1_labels)
# Add constraints to constraint set.
constraint_set.append(
tfco.false_negative_rate(context_group0) <= (
tfco.false_negative_rate(context_group1) + constraint_slack)
)
constraint_set.append(
tfco.false_negative_rate(context_group1) <= (
tfco.false_negative_rate(context_group0) + constraint_slack)
)
# Formulate constrained minimization problem.
problem = tfco.RateMinimizationProblem(
tfco.error_rate(context), constraint_set)
return problem, weights
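# Example usage (illustrative): a marginal equal opportunity constraint
# comparing positive documents from group 0 and group 1, with slack 0.01.
# Each tuple pair adds two rate constraints (one per direction).
demo_problem, demo_weights = formulate_problem(
    dataset, constraint_groups=[((0, None), (1, None))], constraint_slack=0.01)
print(demo_problem.num_constraints)  # expect 2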
# Pairwise fairness constraint types.
MARGINAL_EQUAL_OPPORTUNITY = 0
CROSS_GROUP_EQUAL_OPPORTUNITY = 1
CROSS_AND_WITHIN_GROUP_EQUAL_OPPORTUNITY = 2
def train_model(train_set, params):
# Trains the model and returns the trained model weights, objective and
# maximum constraint violation values.
# Set up problem and model.
if params['flag_constrained']:
# Constrained optimization.
if params['constraint_type'] == MARGINAL_EQUAL_OPPORTUNITY:
constraint_groups = [((0, None), (1, None))]
elif params['constraint_type'] == CROSS_GROUP_EQUAL_OPPORTUNITY:
constraint_groups = [((0, 1), (1, 0))]
else:
constraint_groups = [((0, 1), (1, 0)), ((0, 0), (1, 1))]
else:
# Unconstrained optimization.
constraint_groups = []
problem, weights = formulate_problem(
train_set, constraint_groups, params["constraint_bound"])
# Set up optimization problem.
optimizer = tfco.ProxyLagrangianOptimizerV2(
tf.keras.optimizers.Adagrad(learning_rate=params["learning_rate"]),
num_constraints=problem.num_constraints)
# List of trainable variables.
var_list = (
[weights] + problem.trainable_variables + optimizer.trainable_variables())
# List of objectives and constraint violations during course of training.
objectives = []
constraints = []
# Run loops * iterations_per_loop full batch iterations.
  for ii in range(params['loops']):
    for _ in range(params['iterations_per_loop']):
      optimizer.minimize(problem, var_list=var_list)
objectives.append(problem.objective())
if params['flag_constrained']:
constraints.append(max(problem.constraints()))
return weights.numpy(), objectives, constraints
def evaluate_results(model, test_set, params):
# Returns overall and group error rates for model on test set.
if params['constraint_type'] == MARGINAL_EQUAL_OPPORTUNITY:
return (error_rate(model, test_set),
[group_error_rate(model, test_set, 0),
group_error_rate(model, test_set, 1)])
else:
return (error_rate(model, test_set),
[[group_error_rate(model, test_set, 0, 0),
group_error_rate(model, test_set, 0, 1)],
[group_error_rate(model, test_set, 1, 0),
group_error_rate(model, test_set, 1, 1)]])
def display_results(model, objectives, constraints, test_set, params, ax):
# Prints evaluation results for model on test data
# and plots its decision boundary
# Evaluate model on test set and print results.
error, group_error = evaluate_results(model, test_set, params)
  if params['constraint_type'] == MARGINAL_EQUAL_OPPORTUNITY:
    if params['flag_constrained']:
      print('Constrained', '\t \t', end='')
    else:
      print('Test Error\t \t', 'Overall', '\t', 'Group 0', '\t', 'Group 1',
            '\t', 'Diff')
      print('Unconstrained', '\t \t', end='')
    print("%.3f" % error + '\t\t' +
          "%.3f" % group_error[0] + '\t\t' + "%.3f" % group_error[1] + '\t\t' +
          "%.3f" % abs(group_error[0] - group_error[1]))
  elif params['constraint_type'] == CROSS_GROUP_EQUAL_OPPORTUNITY:
    if params['flag_constrained']:
      print('Constrained', '\t \t', end='')
    else:
      print('Test Error\t \t', 'Overall', '\t', 'Group 0/1', '\t', 'Group 1/0',
            '\t', 'Diff')
      print('Unconstrained', '\t \t', end='')
    print("%.3f" % error + '\t\t' +
          "%.3f" % group_error[0][1] + '\t\t' +
          "%.3f" % group_error[1][0] + '\t\t' +
          "%.3f" % abs(group_error[0][1] - group_error[1][0]))
  else:
    if params['flag_constrained']:
      print('Constrained', '\t \t', end='')
    else:
      print('Test Error\t \t', 'Overall', '\t', 'Group 0/1', '\t', 'Group 1/0',
            '\t', 'Diff', '\t',
            '\t', 'Group 0/0', '\t', 'Group 1/1', '\t', 'Diff')
      print('Unconstrained', '\t \t', end='')
    print("%.3f" % error + '\t\t' +
          "%.3f" % group_error[0][1] + '\t\t' +
          "%.3f" % group_error[1][0] + '\t\t' +
          "%.3f" % abs(group_error[0][1] - group_error[1][0]) + '\t\t' +
          "%.3f" % group_error[0][0] + '\t\t' +
          "%.3f" % group_error[1][1] + '\t\t' +
          "%.3f" % abs(group_error[0][0] - group_error[1][1]))
# Plot decision boundary and progress of training objective/constraint viol.
if params['flag_constrained']:
ax[0].set_title("Model: Constrained")
else:
ax[0].set_title("Model: Unconstrained")
features = train_set['features']
plot_data(train_set, ax[0])
plot_model(model, ax[0],
[features[:, :, 0].min(), features[:, :, 0].max()],
[features[:, :, 1].min(), features[:, :, 1].max()],
"k--")
if params['flag_constrained']:
ax[1].set_title("Objective (Hinge)")
ax[1].set_xlabel("Number of epochs")
objective_curve, = ax[1].plot(range(params['loops']), objectives)
ax[2].set_title("Constraint Violation")
ax[2].set_xlabel("Number of epochs")
constraint_curve, = ax[2].plot(range(params['loops']), constraints)
np.random.seed(121212)
train_set = create_dataset(num_queries=500, num_docs=10)
test_set = create_dataset(num_queries=500, num_docs=10)
model_params = {
'loops': 25,
'iterations_per_loop': 10,
'learning_rate': 1.0,
'constraint_type': MARGINAL_EQUAL_OPPORTUNITY,
'constraint_bound': 0.01}
# Plot training stats, data and model.
ff, ax = plt.subplots(1, 4, figsize=(16.0, 3.5))
# Unconstrained optimization.
model_params['flag_constrained'] = False
model, objectives, constraints = train_model(train_set, model_params)
display_results(model, objectives, constraints, test_set, model_params, [ax[0]])
# Constrained optimization.
model_params['flag_constrained'] = True
model, objectives, constraints = train_model(train_set, model_params)
display_results(model, objectives, constraints, test_set, model_params, ax[1:])
ff.tight_layout()
model_params = {
'loops': 50,
'iterations_per_loop': 10,
'learning_rate': 1.0,
'constraint_type': CROSS_GROUP_EQUAL_OPPORTUNITY,
'constraint_bound': 0.01}
# Plot training stats, data and model
ff, ax = plt.subplots(1, 4, figsize=(16.0, 3.5))
model_params['flag_constrained'] = False
model, objectives, constraints = train_model(train_set, model_params)
display_results(model, objectives, constraints, test_set, model_params, [ax[0]])
model_params['flag_constrained'] = True
model, objectives, constraints = train_model(train_set, model_params)
display_results(model, objectives, constraints, test_set, model_params, ax[1:])
ff.tight_layout()
model_params = {
'loops': 250,
'iterations_per_loop': 10,
'learning_rate': 0.2,
'constraint_type': CROSS_AND_WITHIN_GROUP_EQUAL_OPPORTUNITY,
'constraint_bound': 0.01
}
# Plot training stats, data and model
ff, ax = plt.subplots(1, 4, figsize=(16.0, 3.5))
model_params['flag_constrained'] = False
model, objectives, constraints = train_model(train_set, model_params)
display_results(model, objectives, constraints, test_set, model_params, [ax[0]])
model_params['flag_constrained'] = True
model, objectives, constraints = train_model(train_set, model_params)
display_results(model, objectives, constraints, test_set, model_params, ax[1:])
ff.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Data
Step11: Show the first 3 training examples and their labels (“0”, “1”, and “2” correspond to “entailment”, “contradiction”, and “neutral”, respectively).
Step12: Model
Step13: The $i$'th word in A computes a weighted average of "relevant" words in B, and vice versa, as follows
Step14: Comparing
Step15: Aggregation
Step18: Putting it altogether
Step30: Training
Step32: Testing
Step33: Examples from training set
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import math
from IPython import display
try:
import torch
except ModuleNotFoundError:
%pip install -qq torch
import torch
from torch import nn
from torch.nn import functional as F
from torch.utils import data
import collections
import re
import random
import os
import requests
import zipfile
import tarfile
import hashlib
import time
np.random.seed(seed=1)
torch.manual_seed(1)
!mkdir figures # for saving plots
# Required functions for downloading data
def download(name, cache_dir=os.path.join("..", "data")):
Download a file inserted into DATA_HUB, return the local filename.
assert name in DATA_HUB, f"{name} does not exist in {DATA_HUB}."
url, sha1_hash = DATA_HUB[name]
os.makedirs(cache_dir, exist_ok=True)
fname = os.path.join(cache_dir, url.split("/")[-1])
if os.path.exists(fname):
sha1 = hashlib.sha1()
with open(fname, "rb") as f:
while True:
data = f.read(1048576)
if not data:
break
sha1.update(data)
if sha1.hexdigest() == sha1_hash:
return fname # Hit cache
print(f"Downloading {fname} from {url}...")
r = requests.get(url, stream=True, verify=True)
with open(fname, "wb") as f:
f.write(r.content)
return fname
def download_extract(name, folder=None):
Download and extract a zip/tar file.
fname = download(name)
base_dir = os.path.dirname(fname)
data_dir, ext = os.path.splitext(fname)
if ext == ".zip":
fp = zipfile.ZipFile(fname, "r")
elif ext in (".tar", ".gz"):
fp = tarfile.open(fname, "r")
else:
assert False, "Only zip/tar files can be extracted."
fp.extractall(base_dir)
return os.path.join(base_dir, folder) if folder else data_dir
DATA_HUB = dict()
DATA_HUB["SNLI"] = ("https://nlp.stanford.edu/projects/snli/snli_1.0.zip", "9fcde07509c7e87ec61c640c1b2753d9041758e4")
data_dir = download_extract("SNLI")
def read_snli(data_dir, is_train):
Read the SNLI dataset into premises, hypotheses, and labels.
def extract_text(s):
# Remove information that will not be used by us
s = re.sub("\\(", "", s)
s = re.sub("\\)", "", s)
# Substitute two or more consecutive whitespace with space
s = re.sub("\\s{2,}", " ", s)
return s.strip()
label_set = {"entailment": 0, "contradiction": 1, "neutral": 2}
file_name = os.path.join(data_dir, "snli_1.0_train.txt" if is_train else "snli_1.0_test.txt")
with open(file_name, "r") as f:
rows = [row.split("\t") for row in f.readlines()[1:]]
premises = [extract_text(row[1]) for row in rows if row[0] in label_set]
hypotheses = [extract_text(row[2]) for row in rows if row[0] in label_set]
labels = [label_set[row[0]] for row in rows if row[0] in label_set]
return premises, hypotheses, labels
train_data = read_snli(data_dir, is_train=True)
for x0, x1, y in zip(train_data[0][:3], train_data[1][:3], train_data[2][:3]):
print("premise:", x0)
print("hypothesis:", x1)
print("label:", y)
test_data = read_snli(data_dir, is_train=False)
for data in [train_data, test_data]:
print([[row for row in data[2]].count(i) for i in range(3)])
def tokenize(lines, token="word"):
Split text lines into word or character tokens.
if token == "word":
return [line.split() for line in lines]
elif token == "char":
return [list(line) for line in lines]
else:
print("ERROR: unknown token type: " + token)
class Vocab:
Vocabulary for text.
def __init__(self, tokens=None, min_freq=0, reserved_tokens=None):
if tokens is None:
tokens = []
if reserved_tokens is None:
reserved_tokens = []
# Sort according to frequencies
counter = count_corpus(tokens)
self.token_freqs = sorted(counter.items(), key=lambda x: x[1], reverse=True)
# The index for the unknown token is 0
self.unk, uniq_tokens = 0, ["<unk>"] + reserved_tokens
uniq_tokens += [token for token, freq in self.token_freqs if freq >= min_freq and token not in uniq_tokens]
self.idx_to_token, self.token_to_idx = [], dict()
for token in uniq_tokens:
self.idx_to_token.append(token)
self.token_to_idx[token] = len(self.idx_to_token) - 1
def __len__(self):
return len(self.idx_to_token)
def __getitem__(self, tokens):
if not isinstance(tokens, (list, tuple)):
return self.token_to_idx.get(tokens, self.unk)
return [self.__getitem__(token) for token in tokens]
def to_tokens(self, indices):
if not isinstance(indices, (list, tuple)):
return self.idx_to_token[indices]
return [self.idx_to_token[index] for index in indices]
def count_corpus(tokens):
Count token frequencies.
# Here `tokens` is a 1D list or 2D list
if len(tokens) == 0 or isinstance(tokens[0], list):
# Flatten a list of token lists into a list of tokens
tokens = [token for line in tokens for token in line]
return collections.Counter(tokens)
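# Quick check (illustrative): ids are assigned by frequency, and unknown
# tokens map to index 0 (<unk>).
demo_vocab = Vocab([["a", "b", "a"]])
print(demo_vocab["a"], demo_vocab["zzz"])  # e.g. 1 and 0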
class SNLIDataset(torch.utils.data.Dataset):
A customized dataset to load the SNLI dataset.
def __init__(self, dataset, num_steps, vocab=None):
self.num_steps = num_steps
all_premise_tokens = tokenize(dataset[0])
all_hypothesis_tokens = tokenize(dataset[1])
if vocab is None:
self.vocab = Vocab(all_premise_tokens + all_hypothesis_tokens, min_freq=5, reserved_tokens=["<pad>"])
else:
self.vocab = vocab
self.premises = self._pad(all_premise_tokens)
self.hypotheses = self._pad(all_hypothesis_tokens)
self.labels = torch.tensor(dataset[2])
print("read " + str(len(self.premises)) + " examples")
def _pad(self, lines):
return torch.tensor([truncate_pad(self.vocab[line], self.num_steps, self.vocab["<pad>"]) for line in lines])
def __getitem__(self, idx):
return (self.premises[idx], self.hypotheses[idx]), self.labels[idx]
def __len__(self):
return len(self.premises)
def load_data_snli(batch_size, num_steps=50):
Download the SNLI dataset and return data iterators and vocabulary.
num_workers = 4
data_dir = download_extract("SNLI")
train_data = read_snli(data_dir, True)
test_data = read_snli(data_dir, False)
train_set = SNLIDataset(train_data, num_steps)
test_set = SNLIDataset(test_data, num_steps, train_set.vocab)
train_iter = torch.utils.data.DataLoader(train_set, batch_size, shuffle=True, num_workers=num_workers)
test_iter = torch.utils.data.DataLoader(test_set, batch_size, shuffle=False, num_workers=num_workers)
return train_iter, test_iter, train_set.vocab
def truncate_pad(line, num_steps, padding_token):
Truncate or pad sequences.
if len(line) > num_steps:
return line[:num_steps] # Truncate
return line + [padding_token] * (num_steps - len(line))
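# Behaviour check (illustrative): short lines are padded, long ones truncated.
print(truncate_pad([1, 2, 3], 5, 0))           # [1, 2, 3, 0, 0]
print(truncate_pad([1, 2, 3, 4, 5, 6], 5, 0))  # [1, 2, 3, 4, 5]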
train_iter, test_iter, vocab = load_data_snli(128, 50)
len(vocab)
for X, Y in train_iter:
print(X[0].shape)
print(X[1].shape)
print(Y.shape)
break
def mlp(num_inputs, num_hiddens, flatten):
net = []
net.append(nn.Dropout(0.2))
net.append(nn.Linear(num_inputs, num_hiddens))
net.append(nn.ReLU())
if flatten:
net.append(nn.Flatten(start_dim=1))
net.append(nn.Dropout(0.2))
net.append(nn.Linear(num_hiddens, num_hiddens))
net.append(nn.ReLU())
if flatten:
net.append(nn.Flatten(start_dim=1))
return nn.Sequential(*net)
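# Shape check (illustrative): with flatten=False the MLP maps
# (batch, seq, num_inputs) -> (batch, seq, num_hiddens).
demo_mlp = mlp(num_inputs=100, num_hiddens=200, flatten=False)
print(demo_mlp(torch.ones(2, 7, 100)).shape)  # torch.Size([2, 7, 200])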
class Attend(nn.Module):
def __init__(self, num_inputs, num_hiddens, **kwargs):
super(Attend, self).__init__(**kwargs)
self.f = mlp(num_inputs, num_hiddens, flatten=False)
def forward(self, A, B):
# Shape of `A`/`B`: (`batch_size`, no. of words in sequence A/B,
# `embed_size`)
# Shape of `f_A`/`f_B`: (`batch_size`, no. of words in sequence A/B,
# `num_hiddens`)
f_A = self.f(A)
f_B = self.f(B)
# Shape of `e`: (`batch_size`, no. of words in sequence A,
# no. of words in sequence B)
e = torch.bmm(f_A, f_B.permute(0, 2, 1))
# Shape of `beta`: (`batch_size`, no. of words in sequence A,
# `embed_size`), where sequence B is softly aligned with each word
# (axis 1 of `beta`) in sequence A
beta = torch.bmm(F.softmax(e, dim=-1), B)
# Shape of `alpha`: (`batch_size`, no. of words in sequence B,
# `embed_size`), where sequence A is softly aligned with each word
# (axis 1 of `alpha`) in sequence B
alpha = torch.bmm(F.softmax(e.permute(0, 2, 1), dim=-1), A)
return beta, alpha
class Compare(nn.Module):
def __init__(self, num_inputs, num_hiddens, **kwargs):
super(Compare, self).__init__(**kwargs)
self.g = mlp(num_inputs, num_hiddens, flatten=False)
def forward(self, A, B, beta, alpha):
V_A = self.g(torch.cat([A, beta], dim=2))
V_B = self.g(torch.cat([B, alpha], dim=2))
return V_A, V_B
class Aggregate(nn.Module):
def __init__(self, num_inputs, num_hiddens, num_outputs, **kwargs):
super(Aggregate, self).__init__(**kwargs)
self.h = mlp(num_inputs, num_hiddens, flatten=True)
self.linear = nn.Linear(num_hiddens, num_outputs)
def forward(self, V_A, V_B):
# Sum up both sets of comparison vectors
V_A = V_A.sum(dim=1)
V_B = V_B.sum(dim=1)
# Feed the concatenation of both summarization results into an MLP
Y_hat = self.linear(self.h(torch.cat([V_A, V_B], dim=1)))
return Y_hat
class DecomposableAttention(nn.Module):
def __init__(
self,
vocab,
embed_size,
num_hiddens,
num_inputs_attend=100,
num_inputs_compare=200,
num_inputs_agg=400,
**kwargs
):
super(DecomposableAttention, self).__init__(**kwargs)
self.embedding = nn.Embedding(len(vocab), embed_size)
self.attend = Attend(num_inputs_attend, num_hiddens)
self.compare = Compare(num_inputs_compare, num_hiddens)
# There are 3 possible outputs: entailment, contradiction, and neutral
self.aggregate = Aggregate(num_inputs_agg, num_hiddens, num_outputs=3)
def forward(self, X):
premises, hypotheses = X
A = self.embedding(premises)
B = self.embedding(hypotheses)
beta, alpha = self.attend(A, B)
V_A, V_B = self.compare(A, B, beta, alpha)
Y_hat = self.aggregate(V_A, V_B)
return Y_hat
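# Illustrative shape check (not part of the original code; assumes `vocab` from above):
# net = DecomposableAttention(vocab, embed_size=100, num_hiddens=200)
# X = (torch.zeros((2, 50), dtype=torch.long), torch.zeros((2, 50), dtype=torch.long))
# net(X).shape  # -> torch.Size([2, 3]): one logit per class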
class TokenEmbedding:
    """Token Embedding."""
def __init__(self, embedding_name):
self.idx_to_token, self.idx_to_vec = self._load_embedding(embedding_name)
self.unknown_idx = 0
self.token_to_idx = {token: idx for idx, token in enumerate(self.idx_to_token)}
def _load_embedding(self, embedding_name):
idx_to_token, idx_to_vec = ["<unk>"], []
data_dir = download_extract(embedding_name)
# GloVe website: https://nlp.stanford.edu/projects/glove/
# fastText website: https://fasttext.cc/
with open(os.path.join(data_dir, "vec.txt"), "r") as f:
for line in f:
elems = line.rstrip().split(" ")
token, elems = elems[0], [float(elem) for elem in elems[1:]]
# Skip header information, such as the top row in fastText
if len(elems) > 1:
idx_to_token.append(token)
idx_to_vec.append(elems)
idx_to_vec = [[0] * len(idx_to_vec[0])] + idx_to_vec
return idx_to_token, torch.tensor(idx_to_vec)
def __getitem__(self, tokens):
indices = [self.token_to_idx.get(token, self.unknown_idx) for token in tokens]
vecs = self.idx_to_vec[torch.tensor(indices)]
return vecs
def __len__(self):
return len(self.idx_to_token)
def try_all_gpus():
    """Return all available GPUs, or [cpu(),] if no GPU exists."""
devices = [torch.device(f"cuda:{i}") for i in range(torch.cuda.device_count())]
return devices if devices else [torch.device("cpu")]
DATA_URL = "http://d2l-data.s3-accelerate.amazonaws.com/glove.6B.100d.zip"
DATA_HUB["glove.6b.100d"] = (DATA_URL, "cd43bfb07e44e6f27cbcc7bc9ae3d80284fdaf5a")
embed_size, num_hiddens, devices = 100, 200, try_all_gpus()
net = DecomposableAttention(vocab, embed_size, num_hiddens)
# Get pre-trained GloVe embeddings of size 100
glove_embedding = TokenEmbedding("glove.6b.100d")
embeds = glove_embedding[vocab.idx_to_token]
net.embedding.weight.data.copy_(embeds);
class Animator:
    """For plotting data in animation."""
def __init__(
self,
xlabel=None,
ylabel=None,
legend=None,
xlim=None,
ylim=None,
xscale="linear",
yscale="linear",
fmts=("-", "m--", "g-.", "r:"),
nrows=1,
ncols=1,
figsize=(3.5, 2.5),
):
# Incrementally plot multiple lines
if legend is None:
legend = []
display.set_matplotlib_formats("svg")
self.fig, self.axes = plt.subplots(nrows, ncols, figsize=figsize)
if nrows * ncols == 1:
self.axes = [
self.axes,
]
# Use a lambda function to capture arguments
self.config_axes = lambda: set_axes(self.axes[0], xlabel, ylabel, xlim, ylim, xscale, yscale, legend)
self.X, self.Y, self.fmts = None, None, fmts
def add(self, x, y):
# Add multiple data points into the figure
if not hasattr(y, "__len__"):
y = [y]
n = len(y)
if not hasattr(x, "__len__"):
x = [x] * n
if not self.X:
self.X = [[] for _ in range(n)]
if not self.Y:
self.Y = [[] for _ in range(n)]
for i, (a, b) in enumerate(zip(x, y)):
if a is not None and b is not None:
self.X[i].append(a)
self.Y[i].append(b)
self.axes[0].cla()
for x, y, fmt in zip(self.X, self.Y, self.fmts):
self.axes[0].plot(x, y, fmt)
self.config_axes()
display.display(self.fig)
display.clear_output(wait=True)
class Timer:
    """Record multiple running times."""
def __init__(self):
self.times = []
self.start()
def start(self):
        """Start the timer."""
self.tik = time.time()
def stop(self):
        """Stop the timer and record the time in a list."""
self.times.append(time.time() - self.tik)
return self.times[-1]
def avg(self):
        """Return the average time."""
return sum(self.times) / len(self.times)
def sum(self):
        """Return the sum of time."""
return sum(self.times)
def cumsum(self):
        """Return the accumulated time."""
return np.array(self.times).cumsum().tolist()
class Accumulator:
    """For accumulating sums over `n` variables."""
def __init__(self, n):
self.data = [0.0] * n
def add(self, *args):
self.data = [a + float(b) for a, b in zip(self.data, args)]
def reset(self):
self.data = [0.0] * len(self.data)
def __getitem__(self, idx):
return self.data[idx]
def set_axes(axes, xlabel, ylabel, xlim, ylim, xscale, yscale, legend):
    """Set the axes for matplotlib."""
axes.set_xlabel(xlabel)
axes.set_ylabel(ylabel)
axes.set_xscale(xscale)
axes.set_yscale(yscale)
axes.set_xlim(xlim)
axes.set_ylim(ylim)
if legend:
axes.legend(legend)
axes.grid()
def accuracy(y_hat, y):
    """Compute the number of correct predictions."""
if len(y_hat.shape) > 1 and y_hat.shape[1] > 1:
y_hat = torch.argmax(y_hat, axis=1)
cmp_ = y_hat.type(y.dtype) == y
return float(cmp_.type(y.dtype).sum())
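# For example: accuracy(torch.tensor([[0.1, 0.9], [0.8, 0.2]]), torch.tensor([1, 1])) -> 1.0
# (only the first row's argmax matches its label)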
def evaluate_accuracy_gpu(net, data_iter, device=None):
    """Compute the accuracy for a model on a dataset using a GPU."""
if isinstance(net, torch.nn.Module):
net.eval() # Set the model to evaluation mode
if not device:
device = next(iter(net.parameters())).device
# No. of correct predictions, no. of predictions
metric = Accumulator(2)
for X, y in data_iter:
if isinstance(X, list):
# Required for BERT Fine-tuning
X = [x.to(device) for x in X]
else:
X = X.to(device)
y = y.to(device)
metric.add(accuracy(net(X), y), y.numel())
return metric[0] / metric[1]
def train_batch(net, X, y, loss, trainer, devices):
if isinstance(X, list):
# Required for BERT Fine-tuning
X = [x.to(devices[0]) for x in X]
else:
X = X.to(devices[0])
y = y.to(devices[0])
net.train()
trainer.zero_grad()
pred = net(X)
l = loss(pred, y)
l.sum().backward()
trainer.step()
train_loss_sum = l.sum()
train_acc_sum = accuracy(pred, y)
return train_loss_sum, train_acc_sum
def train(net, train_iter, test_iter, loss, trainer, num_epochs, devices=try_all_gpus()):
timer, num_batches = Timer(), len(train_iter)
animator = Animator(
xlabel="epoch", xlim=[1, num_epochs], ylim=[0, 1], legend=["train loss", "train acc", "test acc"]
)
net = nn.DataParallel(net, device_ids=devices).to(devices[0])
for epoch in range(num_epochs):
# Store training_loss, training_accuracy, num_examples, num_features
metric = Accumulator(4)
for i, (features, labels) in enumerate(train_iter):
timer.start()
l, acc = train_batch(net, features, labels, loss, trainer, devices)
metric.add(l, acc, labels.shape[0], labels.numel())
timer.stop()
if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:
animator.add(epoch + (i + 1) / num_batches, (metric[0] / metric[2], metric[1] / metric[3], None))
test_acc = evaluate_accuracy_gpu(net, test_iter)
animator.add(epoch + 1, (None, None, test_acc))
print(f"loss {metric[0] / metric[2]:.3f}, train acc " f"{metric[1] / metric[3]:.3f}, test acc {test_acc:.3f}")
print(f"{metric[2] * num_epochs / timer.sum():.1f} examples/sec on " f"{str(devices)}")
lr, num_epochs = 0.001, 4
trainer = torch.optim.Adam(net.parameters(), lr=lr)
loss = nn.CrossEntropyLoss(reduction="none")
train(net, train_iter, test_iter, loss, trainer, num_epochs, devices)
def try_gpu(i=0):
    """Return gpu(i) if exists, otherwise return cpu()."""
if torch.cuda.device_count() >= i + 1:
return torch.device(f"cuda:{i}")
return torch.device("cpu")
def predict_snli(net, vocab, premise, hypothesis):
net.eval()
premise = torch.tensor(vocab[premise], device=try_gpu())
hypothesis = torch.tensor(vocab[hypothesis], device=try_gpu())
label = torch.argmax(net([premise.reshape((1, -1)), hypothesis.reshape((1, -1))]), dim=1)
return "entailment" if label == 0 else "contradiction" if label == 1 else "neutral"
predict_snli(net, vocab, ["he", "is", "good", "."], ["he", "is", "bad", "."])
predict_snli(net, vocab, ["he", "is", "very", "naughty", "."], ["he", "is", "bad", "."])
predict_snli(net, vocab, ["he", "is", "awful", "."], ["he", "is", "bad", "."])
predict_snli(net, vocab, ["he", "is", "handsome", "."], ["he", "is", "bad", "."])
predict_snli(
net,
vocab,
["a", "person", "on", "a", "horse", "jumps", "over", "a", "log" "."],
["a", "person", "is", "outdoors", "on", "a", "horse", "."],
)
predict_snli(
net,
vocab,
["a", "person", "on", "a", "horse", "jumps", "over", "a", "log" "."],
["a", "person", "is", "at", "a", "diner", "ordering", "an", "omelette", "."],
)
predict_snli(
net,
vocab,
["a", "person", "on", "a", "horse", "jumps", "over", "a", "log" "."],
["a", "person", "is", "training", "a", "horse", "for", "a", "competition", "."],
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Below is a sample of the first 5 rows of the seeds dataset.
Step2: Tutorial
Step3: Let's test out this function.
Step4: Running the example, you can see that the code prints out each layer one by one.
Step5: Now, let's see how to use the neuron activation.
Step6: Now that we have the pieces, let's see how they are put together and used.
Step7: Let’s put all of these pieces together and test out the forward propagation of our network.
Step8: Time for some testing
Step9: Running the example propagates the input pattern [1, 0] and produces an output value that is printed.
Step10: Now let's see how this can be used.
Step11: Let's put all of the pieces together and see how it works.
Step12: Let's run some tests to make sure we get what we want
Step13: Running the example prints the network after the backpropagation of errors is complete.
Step14: Now that we know how to update network weights, let's tell the machine how to do it repeatedly.
Step15: We now have all of the pieces to train the network.
Step16: Running the example first prints the sum squared error each training epoch.
Step17: We can put this together with our code above for forward propagating input with our small contrived dataset to test making predictions with a network that has already been trained.
Step18: Running the example prints the expected output for each record in the training dataset, followed by the crisp prediction made by the network.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
seeds_dataset = pd.read_csv('seeds_dataset.csv', header=None)
seeds_dataset[:5]
# Initialize a network:
def initialize_network(n_inputs, n_hidden, n_outputs):
network = list()
hidden_layer = [{'weights':[random() for i in range(n_inputs + 1)]} for i in range(n_hidden)]
network.append(hidden_layer)
output_layer = [{'weights':[random() for i in range(n_hidden + 1)]} for i in range(n_outputs)]
network.append(output_layer)
return network
from random import seed
from random import random
def initialize_network(n_inputs, n_hidden, n_outputs):
network = list()
hidden_layer = [{'weights':[random() for i in range(n_inputs + 1)]} for i in range(n_hidden)]
network.append(hidden_layer)
output_layer = [{'weights':[random() for i in range(n_hidden + 1)]} for i in range(n_outputs)]
network.append(output_layer)
return network
seed(1)
network = initialize_network(2, 1, 2)
for layer in network:
print(layer)
# Calculate neuron activation for an input
def activate(weights, inputs):
activation = weights[-1]
for i in range(len(weights)-1):
activation += weights[i] * inputs[i]
return activation
# Transfer neuron activation:
def transfer(activation):
return 1.0 / (1.0 + exp(-activation))
# Forward propagate input to a network output:
def forward_propagate(network, row):
inputs = row
for layer in network:
new_inputs = []
for neuron in layer:
activation = activate(neuron['weights'], inputs)
neuron['output'] = transfer(activation)
new_inputs.append(neuron['output'])
inputs = new_inputs
return inputs
from math import exp
# Calculate neuron activation for an input:
def activate(weights, inputs):
activation = weights[-1]
for i in range(len(weights)-1):
activation += weights[i] * inputs[i]
return activation
# Transfer neuron activation:
def transfer(activation):
return 1.0 / (1.0 + exp(-activation))
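# Quick check: with weights [0.5, 0.5, 1.0] (bias stored last) and inputs [1, 0],
# activate(...) = 1.0 + 0.5*1 + 0.5*0 = 1.5, and transfer(1.5) = 1/(1 + e^-1.5) ≈ 0.818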
# Forward propagate input to a network output:
def forward_propagate(network, row):
inputs = row
for layer in network:
new_inputs = []
for neuron in layer:
activation = activate(neuron['weights'], inputs)
neuron['output'] = transfer(activation)
new_inputs.append(neuron['output'])
inputs = new_inputs
return inputs
# Test forward propagation:
network = [[{'weights': [0.13436424411240122, 0.8474337369372327, 0.763774618976614]}],
[{'weights': [0.2550690257394217, 0.49543508709194095]},
{'weights': [0.4494910647887381, 0.651592972722763]}]]
row = [1, 0, None]
output = forward_propagate(network, row)
output
# Caculate the derivative of a neuron output:
def transfer_derivative(output):
return output * (1.0 - output)
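# This is the derivative of the sigmoid expressed in terms of its output:
# sigma'(x) = sigma(x) * (1 - sigma(x)), e.g. transfer_derivative(0.5) = 0.25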
# Backpropagate error and store in neurons:
def backward_propagate_error(network, expected):
for i in reversed(range(len(network))):
layer = network[i]
errors = list()
if i != len(network) - 1:
for j in range(len(layer)):
error = 0.0
for neuron in network[i + 1]:
error += (neuron['weights'][j] * neuron['delta'])
errors.append(error)
else:
for j in range(len(layer)):
neuron = layer[j]
errors.append(expected[j] - neuron['output'])
for j in range(len(layer)):
neuron = layer[j]
neuron['delta'] = errors[j] * transfer_derivative(neuron['output'])
# Calculate the derivative of a neuron output:
def transfer_derivative(output):
return output * (1.0 - output)
# Backpropagate error and store in neurons:
def backward_propagate_error(network, expected):
for i in reversed(range(len(network))):
layer = network[i]
errors = list()
if i != len(network) - 1:
for j in range(len(layer)):
error = 0.0
for neuron in network[i + 1]:
error += (neuron['weights'][j] * neuron['delta'])
errors.append(error)
else:
for j in range(len(layer)):
neuron = layer[j]
errors.append(expected[j] - neuron['output'])
for j in range(len(layer)):
neuron = layer[j]
neuron['delta'] = errors[j] * transfer_derivative(neuron['output'])
# Test backpropagation of error:
network = [[{'output': 0.7105668883115941, 'weights': [0.13436424411240122, 0.8474337369372327, 0.763774618976614]}],
[{'output': 0.6213859615555266, 'weights': [0.2550690257394217, 0.49543508709194095]},
{'output': 0.6573693455986976, 'weights': [0.4494910647887381, 0.651592972722763]}]]
expected = [0, 1]
backward_propagate_error(network, expected)
for layer in network:
print(layer)
# Update network weights with error:
def update_weights(network, row, l_rate):
for i in range(len(network)):
inputs = row[:-1]
if i != 0:
inputs = [neuron['output'] for neuron in network[i - 1]]
for neuron in network[i]:
for j in range(len(inputs)):
neuron['weights'][j] += l_rate * neuron['delta'] * inputs[j]
neuron['weights'][-1] += l_rate * neuron['delta']
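# The update above is plain stochastic gradient descent:
#   weight_j <- weight_j + l_rate * delta * input_j   (the bias uses an implicit input of 1)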
# Train an ANN for a fixed number of epochs:
def train_network(network, train, l_rate, n_epoch, n_outputs):
for epoch in range(n_epoch):
sum_error = 0
for row in train:
outputs = forward_propagate(network, row)
expected = [0 for i in range(n_outputs)]
expected[row[-1]] = 1
sum_error += sum([(expected[i] - outputs[i]) ** 2 \
for i in range(len(expected))])
backward_propagate_error(network, expected)
update_weights(network, row, l_rate)
print(">epoch: {}, l_rate: {:.3f}, sum_error: {:.3f}".format(epoch, l_rate, sum_error))
from math import exp
from random import seed
from random import random
# Initialize a network
def initialize_network(n_inputs, n_hidden, n_outputs):
network = list()
hidden_layer = [{'weights':[random() for i in range(n_inputs + 1)]} for i in range(n_hidden)]
network.append(hidden_layer)
output_layer = [{'weights':[random() for i in range(n_hidden + 1)]} for i in range(n_outputs)]
network.append(output_layer)
return network
# Calculate neuron activation for an input
def activate(weights, inputs):
activation = weights[-1]
for i in range(len(weights)-1):
activation += weights[i] * inputs[i]
return activation
# Transfer neuron activation
def transfer(activation):
return 1.0 / (1.0 + exp(-activation))
# Forward propagate input to a network output
def forward_propagate(network, row):
inputs = row
for layer in network:
new_inputs = []
for neuron in layer:
activation = activate(neuron['weights'], inputs)
neuron['output'] = transfer(activation)
new_inputs.append(neuron['output'])
inputs = new_inputs
return inputs
# Calculate the derivative of an neuron output
def transfer_derivative(output):
return output * (1.0 - output)
# Backpropagate error and store in neurons
def backward_propagate_error(network, expected):
for i in reversed(range(len(network))):
layer = network[i]
errors = list()
if i != len(network)-1:
for j in range(len(layer)):
error = 0.0
for neuron in network[i + 1]:
error += (neuron['weights'][j] * neuron['delta'])
errors.append(error)
else:
for j in range(len(layer)):
neuron = layer[j]
errors.append(expected[j] - neuron['output'])
for j in range(len(layer)):
neuron = layer[j]
neuron['delta'] = errors[j] * transfer_derivative(neuron['output'])
# Update network weights with error
def update_weights(network, row, l_rate):
for i in range(len(network)):
inputs = row[:-1]
if i != 0:
inputs = [neuron['output'] for neuron in network[i - 1]]
for neuron in network[i]:
for j in range(len(inputs)):
neuron['weights'][j] += l_rate * neuron['delta'] * inputs[j]
neuron['weights'][-1] += l_rate * neuron['delta']
# Train a network for a fixed number of epochs
def train_network(network, train, l_rate, n_epoch, n_outputs):
for epoch in range(n_epoch):
sum_error = 0
for row in train:
outputs = forward_propagate(network, row)
expected = [0 for i in range(n_outputs)]
expected[row[-1]] = 1
sum_error += sum([(expected[i]-outputs[i])**2 for i in range(len(expected))])
backward_propagate_error(network, expected)
update_weights(network, row, l_rate)
print('>epoch=%d, lrate=%.3f, error=%.3f' % (epoch, l_rate, sum_error))
# Test training backprop algorithm
seed(1)
dataset = [[2.7810836,2.550537003,0],
[1.465489372,2.362125076,0],
[3.396561688,4.400293529,0],
[1.38807019,1.850220317,0],
[3.06407232,3.005305973,0],
[7.627531214,2.759262235,1],
[5.332441248,2.088626775,1],
[6.922596716,1.77106367,1],
[8.675418651,-0.242068655,1],
[7.673756466,3.508563011,1]]
n_inputs = len(dataset[0]) - 1
n_outputs = len(set([row[-1] for row in dataset]))
network = initialize_network(n_inputs, 2, n_outputs)
train_network(network, dataset, 0.5, 20, n_outputs)
for layer in network:
print(layer)
# Make a prediction with the network:
def predict(network, row):
outputs = forward_propagate(network, row)
return outputs.index(max(outputs))
from math import exp
# Calculate neuron activation for an input:
def activate(weights, inputs):
activation = weights[-1]
for i in range(len(weights)-1):
activation += weights[i] * inputs[i]
return activation
# Transfer neuron activation:
def transfer(activation):
return 1.0 / (1.0 + exp(-activation))
# Forward propagate inputs to a network output:
def forward_propagate(network, row):
inputs = row
for layer in network:
new_inputs = []
for neuron in layer:
activation = activate(neuron['weights'], inputs)
neuron['output'] = transfer(activation)
new_inputs.append(neuron['output'])
inputs = new_inputs
return inputs
# Make a prediction with a network:
def predict(network, row):
outputs = forward_propagate(network, row)
return outputs.index(max(outputs))
# Test making predictions with the network:
dataset = [[2.7810836,2.550537003,0],
[1.465489372,2.362125076,0],
[3.396561688,4.400293529,0],
[1.38807019,1.850220317,0],
[3.06407232,3.005305973,0],
[7.627531214,2.759262235,1],
[5.332441248,2.088626775,1],
[6.922596716,1.77106367,1],
[8.675418651,-0.242068655,1],
[7.673756466,3.508563011,1]]
network = [[{'weights': [-1.482313569067226, 1.8308790073202204, 1.078381922048799]},
{'weights': [0.23244990332399884, 0.3621998343835864, 0.40289821191094327]}],
[{'weights': [2.5001872433501404, 0.7887233511355132, -1.1026649757805829]},
{'weights': [-2.429350576245497, 0.8357651039198697, 1.0699217181280656]}]]
for row in dataset:
prediction = predict(network, row)
print('Expected=%d, Actual=%d' % (row[-1], prediction))
# Backpropagation on the Seeds Dataset:
from random import seed
from random import randrange
from random import random
from csv import reader
from math import exp
# Load the CSV file with the seeds dataset:
def load_csv(filename):
dataset = list()
with open(filename, 'r') as file:
csv_reader = reader(file)
for row in csv_reader:
if not row:
continue
dataset.append(row)
return dataset
# Convert string column to float:
def str_column_to_float(dataset, column):
for row in dataset:
row[column] = float(row[column].strip())
# Convert string column to integer:
def str_column_to_int(dataset, column):
class_values = [row[column] for row in dataset]
unique = set(class_values)
lookup = dict()
for i, value in enumerate(unique):
lookup[value] = i
for row in dataset:
row[column] = lookup[row[column]]
return lookup
# Find the min and max values for each column:
def dataset_minmax(dataset):
    minmax = [[min(column), max(column)] for column in zip(*dataset)]
    return minmax
# Rescale dataset columns to the range 0-1:
def normalize_dataset(dataset, minmax):
for row in dataset:
for i in range(len(row)-1):
row[i] = (row[i] - minmax[i][0]) / (minmax[i][1] - minmax[i][0])
# Split a dataset into k folds:
def cross_validation_split(dataset, n_folds):
dataset_split = list()
dataset_copy = list(dataset)
fold_size = int(len(dataset) / n_folds)
for i in range(n_folds):
fold = list()
while len(fold) < fold_size:
index = randrange(len(dataset_copy))
fold.append(dataset_copy.pop(index))
dataset_split.append(fold)
return dataset_split
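# For example, with 10 rows and n_folds=5 this yields 5 folds of 2 rows each,
# sampled without replacement from a copy of the dataset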
# Calculate accuracy percentage:
def accuracy_metric(actual, predicted):
correct = 0
for i in range(len(actual)):
if actual[i] == predicted[i]:
correct += 1
return correct / float(len(actual)) * 100.0
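# For example: accuracy_metric([0, 1, 1], [0, 1, 0]) -> 66.66... (2 of 3 correct)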
# Evaluate the algorithm using a cross validation split:
def evaluate_algorithm(dataset, algorithm, n_folds, *args):
folds = cross_validation_split(dataset, n_folds)
scores = list()
for fold in folds:
train_set = list(folds)
train_set.remove(fold)
train_set = sum(train_set, [])
test_set = list()
for row in fold:
row_copy = list(row)
test_set.append(row_copy)
row_copy[-1] = None
predicted = algorithm(train_set, test_set, *args)
actual = [row[-1] for row in fold]
accuracy = accuracy_metric(actual, predicted)
scores.append(accuracy)
return scores
# Calculate neuron activation for an input:
def activate(weights, inputs):
activation = weights[-1]
for i in range(len(weights)-1):
activation += weights[i] * inputs[i]
return activation
# Transfer neuron activation:
def transfer(activation):
return 1.0 / (1.0 + exp(-activation))
# Forward propagate input to a network input:
def forward_propagate(network, row):
inputs = row
for layer in network:
new_inputs = []
for neuron in layer:
activation = activate(neuron['weights'], inputs)
neuron['output'] = transfer(activation)
new_inputs.append(neuron['output'])
inputs = new_inputs
return inputs
# Calculate the derivative of a neuron's output:
def transfer_derivative(output):
return output * (1.0 - output)
# Backpropagate errors and store the results in the corresponding neurons:
def backward_propagate_error(network, expected):
for i in reversed(range(len(network))):
layer = network[i]
errors = list()
if i != len(network)-1:
for j in range(len(layer)):
error = 0.0
for neuron in network[i + 1]:
error += (neuron['weights'][j] * neuron['delta'])
errors.append(error)
else:
for j in range(len(layer)):
neuron = layer[j]
errors.append(expected[j] - neuron['output'])
for j in range(len(layer)):
neuron = layer[j]
neuron['delta'] = errors[j] * transfer_derivative(neuron['output'])
# Update network weights with the errors:
def update_weights(network, row, l_rate):
for i in range(len(network)):
inputs = row[:-1]
if i != 0:
inputs = [neuron['output'] for neuron in network[i - 1]]
for neuron in network[i]:
for j in range(len(inputs)):
neuron['weights'][j] += l_rate * neuron['delta'] * inputs[j]
neuron['weights'][-1] += l_rate * neuron['delta']
# Train a network for a fixed number of epochs:
def train_network(network, train, l_rate, n_epoch, n_outputs):
for epoch in range(n_epoch):
for row in train:
outputs = forward_propagate(network, row)
expected = [0 for i in range(n_outputs)]
expected[row[-1]] = 1
backward_propagate_error(network, expected)
update_weights(network, row, l_rate)
# Initialize a network:
def initialize_network(n_inputs, n_hidden, n_outputs):
network = list()
hidden_layer = [{'weights':[random() for i in range(n_inputs + 1)]} for i in range(n_hidden)]
network.append(hidden_layer)
output_layer = [{'weights':[random() for i in range(n_hidden + 1)]} for i in range(n_outputs)]
network.append(output_layer)
return network
# Make a prediction with a network:
def predict(network, row):
outputs = forward_propagate(network, row)
return outputs.index(max(outputs))
# Backpropagation Algorithm with Stochastic Gradient Descent:
def back_propagation(train, test, l_rate, n_epoch, n_hidden):
n_inputs = len(train[0])-1
n_outputs = len(set([row[-1] for row in train]))
network = initialize_network(n_inputs, n_hidden, n_outputs)
train_network(network, train, l_rate, n_epoch, n_outputs)
predictions = list()
for row in test:
prediction = predict(network, row)
predictions.append(prediction)
return predictions
# Test backpropagation on seeds dataset:
seed(1)
# Load and prepare data:
filename = 'seeds_dataset.csv'
dataset = load_csv(filename)
for i in range(len(dataset[0])-1):
str_column_to_float(dataset, i)
# Convert class column to integers:
str_column_to_int(dataset, len(dataset[0])-1)
# Normalize the input variables:
minmax = dataset_minmax(dataset)
normalize_dataset(dataset, minmax)
# Evaluate the algorithm:
n_folds = 5
l_rate = 0.3
n_epoch = 500
n_hidden = 5
scores = evaluate_algorithm(dataset, back_propagation, n_folds, l_rate, n_epoch, n_hidden)
print("Scores: \n{}".format(scores))
print("Mean Accuracy: {:.3f}".format(sum(scores)/float(len(scores))))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image captioning with visual attention
Step2: Download and prepare the MS-COCO dataset
Step3: Optional
Step4: Preprocess the images using InceptionV3
Step5: Initialize InceptionV3 and load the pretrained Imagenet weights
Step6: Cache the features extracted from InceptionV3
Step7: Preprocess and tokenize the captions
Step8: Split the data into training and testing sets
Step9: Create a tf.data dataset for training
Step10: Model
Step11: Checkpoint
Step12: Training
Step13: Caption!
Step14: Try it on your own images
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
# You'll generate plots of attention in order to see which parts of an image
# your model focuses on during captioning
import matplotlib.pyplot as plt
import collections
import random
import numpy as np
import os
import time
import json
from PIL import Image
# Download caption annotation files
annotation_folder = '/annotations/'
if not os.path.exists(os.path.abspath('.') + annotation_folder):
annotation_zip = tf.keras.utils.get_file('captions.zip',
cache_subdir=os.path.abspath('.'),
origin='http://images.cocodataset.org/annotations/annotations_trainval2014.zip',
extract=True)
annotation_file = os.path.dirname(annotation_zip)+'/annotations/captions_train2014.json'
os.remove(annotation_zip)
# Download image files
image_folder = '/train2014/'
if not os.path.exists(os.path.abspath('.') + image_folder):
image_zip = tf.keras.utils.get_file('train2014.zip',
cache_subdir=os.path.abspath('.'),
origin='http://images.cocodataset.org/zips/train2014.zip',
extract=True)
PATH = os.path.dirname(image_zip) + image_folder
os.remove(image_zip)
else:
PATH = os.path.abspath('.') + image_folder
with open(annotation_file, 'r') as f:
annotations = json.load(f)
# Group all captions together having the same image ID.
image_path_to_caption = collections.defaultdict(list)
for val in annotations['annotations']:
caption = f"<start> {val['caption']} <end>"
image_path = PATH + 'COCO_train2014_' + '%012d.jpg' % (val['image_id'])
image_path_to_caption[image_path].append(caption)
image_paths = list(image_path_to_caption.keys())
random.shuffle(image_paths)
# Select the first 6000 image_paths from the shuffled set.
# Approximately each image id has 5 captions associated with it, so that will
# lead to 30,000 examples.
train_image_paths = image_paths[:6000]
print(len(train_image_paths))
train_captions = []
img_name_vector = []
for image_path in train_image_paths:
caption_list = image_path_to_caption[image_path]
train_captions.extend(caption_list)
img_name_vector.extend([image_path] * len(caption_list))
print(train_captions[0])
Image.open(img_name_vector[0])
def load_image(image_path):
img = tf.io.read_file(image_path)
img = tf.image.decode_jpeg(img, channels=3)
img = tf.image.resize(img, (299, 299))
img = tf.keras.applications.inception_v3.preprocess_input(img)
return img, image_path
image_model = tf.keras.applications.InceptionV3(include_top=False,
weights='imagenet')
new_input = image_model.input
hidden_layer = image_model.layers[-1].output
image_features_extract_model = tf.keras.Model(new_input, hidden_layer)
# Get unique images
encode_train = sorted(set(img_name_vector))
# Feel free to change batch_size according to your system configuration
image_dataset = tf.data.Dataset.from_tensor_slices(encode_train)
image_dataset = image_dataset.map(
load_image, num_parallel_calls=tf.data.AUTOTUNE).batch(16)
for img, path in image_dataset:
batch_features = image_features_extract_model(img)
batch_features = tf.reshape(batch_features,
(batch_features.shape[0], -1, batch_features.shape[3]))
for bf, p in zip(batch_features, path):
path_of_feature = p.numpy().decode("utf-8")
np.save(path_of_feature, bf.numpy())
# Find the maximum length of any caption in the dataset
def calc_max_length(tensor):
return max(len(t) for t in tensor)
# Choose the top 5000 words from the vocabulary
top_k = 5000
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=top_k,
oov_token="<unk>",
filters='!"#$%&()*+.,-/:;=?@[\]^_`{|}~')
tokenizer.fit_on_texts(train_captions)
tokenizer.word_index['<pad>'] = 0
tokenizer.index_word[0] = '<pad>'
# Create the tokenized vectors
train_seqs = tokenizer.texts_to_sequences(train_captions)
# Pad each vector to the max_length of the captions
# If you do not provide a max_length value, pad_sequences calculates it automatically
cap_vector = tf.keras.preprocessing.sequence.pad_sequences(train_seqs, padding='post')
# Calculates the max_length, which is used to store the attention weights
max_length = calc_max_length(train_seqs)
img_to_cap_vector = collections.defaultdict(list)
for img, cap in zip(img_name_vector, cap_vector):
img_to_cap_vector[img].append(cap)
# Create training and validation sets using an 80-20 split randomly.
img_keys = list(img_to_cap_vector.keys())
random.shuffle(img_keys)
slice_index = int(len(img_keys)*0.8)
img_name_train_keys, img_name_val_keys = img_keys[:slice_index], img_keys[slice_index:]
img_name_train = []
cap_train = []
for imgt in img_name_train_keys:
capt_len = len(img_to_cap_vector[imgt])
img_name_train.extend([imgt] * capt_len)
cap_train.extend(img_to_cap_vector[imgt])
img_name_val = []
cap_val = []
for imgv in img_name_val_keys:
capv_len = len(img_to_cap_vector[imgv])
img_name_val.extend([imgv] * capv_len)
cap_val.extend(img_to_cap_vector[imgv])
len(img_name_train), len(cap_train), len(img_name_val), len(cap_val)
# Feel free to change these parameters according to your system's configuration
BATCH_SIZE = 64
BUFFER_SIZE = 1000
embedding_dim = 256
units = 512
vocab_size = top_k + 1
num_steps = len(img_name_train) // BATCH_SIZE
# Shape of the vector extracted from InceptionV3 is (64, 2048)
# These two variables represent that vector shape
features_shape = 2048
attention_features_shape = 64
# Load the numpy files
def map_func(img_name, cap):
img_tensor = np.load(img_name.decode('utf-8')+'.npy')
return img_tensor, cap
dataset = tf.data.Dataset.from_tensor_slices((img_name_train, cap_train))
# Use map to load the numpy files in parallel
dataset = dataset.map(lambda item1, item2: tf.numpy_function(
map_func, [item1, item2], [tf.float32, tf.int32]),
num_parallel_calls=tf.data.AUTOTUNE)
# Shuffle and batch
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
dataset = dataset.prefetch(buffer_size=tf.data.AUTOTUNE)
class BahdanauAttention(tf.keras.Model):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, features, hidden):
# features(CNN_encoder output) shape == (batch_size, 64, embedding_dim)
# hidden shape == (batch_size, hidden_size)
# hidden_with_time_axis shape == (batch_size, 1, hidden_size)
hidden_with_time_axis = tf.expand_dims(hidden, 1)
# attention_hidden_layer shape == (batch_size, 64, units)
attention_hidden_layer = (tf.nn.tanh(self.W1(features) +
self.W2(hidden_with_time_axis)))
# score shape == (batch_size, 64, 1)
# This gives you an unnormalized score for each image feature.
score = self.V(attention_hidden_layer)
# attention_weights shape == (batch_size, 64, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * features
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
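# In equation form, this is Bahdanau-style additive attention:
#   score_i = v^T tanh(W1 @ features_i + W2 @ hidden)
#   attention_weights = softmax(score); context = sum_i attention_weights_i * features_i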
class CNN_Encoder(tf.keras.Model):
# Since you have already extracted the features and dumped it
# This encoder passes those features through a Fully connected layer
def __init__(self, embedding_dim):
super(CNN_Encoder, self).__init__()
# shape after fc == (batch_size, 64, embedding_dim)
self.fc = tf.keras.layers.Dense(embedding_dim)
def call(self, x):
x = self.fc(x)
x = tf.nn.relu(x)
return x
class RNN_Decoder(tf.keras.Model):
def __init__(self, embedding_dim, units, vocab_size):
super(RNN_Decoder, self).__init__()
self.units = units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
self.fc1 = tf.keras.layers.Dense(self.units)
self.fc2 = tf.keras.layers.Dense(vocab_size)
self.attention = BahdanauAttention(self.units)
def call(self, x, features, hidden):
# defining attention as a separate model
context_vector, attention_weights = self.attention(features, hidden)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# shape == (batch_size, max_length, hidden_size)
x = self.fc1(output)
# x shape == (batch_size * max_length, hidden_size)
x = tf.reshape(x, (-1, x.shape[2]))
# output shape == (batch_size * max_length, vocab)
x = self.fc2(x)
return x, state, attention_weights
def reset_state(self, batch_size):
return tf.zeros((batch_size, self.units))
encoder = CNN_Encoder(embedding_dim)
decoder = RNN_Decoder(embedding_dim, units, vocab_size)
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
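# The mask zeroes out the loss on padding positions (token id 0), so e.g. a target
# row [12, 7, 0, 0] only contributes loss at its first two time steps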
checkpoint_path = "./checkpoints/train"
ckpt = tf.train.Checkpoint(encoder=encoder,
decoder=decoder,
optimizer=optimizer)
ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)
start_epoch = 0
if ckpt_manager.latest_checkpoint:
start_epoch = int(ckpt_manager.latest_checkpoint.split('-')[-1])
# restoring the latest checkpoint in checkpoint_path
ckpt.restore(ckpt_manager.latest_checkpoint)
# adding this in a separate cell because if you run the training cell
# many times, the loss_plot array will be reset
loss_plot = []
@tf.function
def train_step(img_tensor, target):
loss = 0
# initializing the hidden state for each batch
# because the captions are not related from image to image
hidden = decoder.reset_state(batch_size=target.shape[0])
dec_input = tf.expand_dims([tokenizer.word_index['<start>']] * target.shape[0], 1)
with tf.GradientTape() as tape:
features = encoder(img_tensor)
for i in range(1, target.shape[1]):
# passing the features through the decoder
predictions, hidden, _ = decoder(dec_input, features, hidden)
loss += loss_function(target[:, i], predictions)
# using teacher forcing
dec_input = tf.expand_dims(target[:, i], 1)
total_loss = (loss / int(target.shape[1]))
trainable_variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, trainable_variables)
optimizer.apply_gradients(zip(gradients, trainable_variables))
return loss, total_loss
EPOCHS = 20
for epoch in range(start_epoch, EPOCHS):
start = time.time()
total_loss = 0
for (batch, (img_tensor, target)) in enumerate(dataset):
batch_loss, t_loss = train_step(img_tensor, target)
total_loss += t_loss
if batch % 100 == 0:
average_batch_loss = batch_loss.numpy()/int(target.shape[1])
print(f'Epoch {epoch+1} Batch {batch} Loss {average_batch_loss:.4f}')
# storing the epoch end loss value to plot later
loss_plot.append(total_loss / num_steps)
if epoch % 5 == 0:
ckpt_manager.save()
print(f'Epoch {epoch+1} Loss {total_loss/num_steps:.6f}')
print(f'Time taken for 1 epoch {time.time()-start:.2f} sec\n')
plt.plot(loss_plot)
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.title('Loss Plot')
plt.show()
def evaluate(image):
attention_plot = np.zeros((max_length, attention_features_shape))
hidden = decoder.reset_state(batch_size=1)
temp_input = tf.expand_dims(load_image(image)[0], 0)
img_tensor_val = image_features_extract_model(temp_input)
img_tensor_val = tf.reshape(img_tensor_val, (img_tensor_val.shape[0],
-1,
img_tensor_val.shape[3]))
features = encoder(img_tensor_val)
dec_input = tf.expand_dims([tokenizer.word_index['<start>']], 0)
result = []
for i in range(max_length):
predictions, hidden, attention_weights = decoder(dec_input,
features,
hidden)
attention_plot[i] = tf.reshape(attention_weights, (-1, )).numpy()
predicted_id = tf.random.categorical(predictions, 1)[0][0].numpy()
result.append(tokenizer.index_word[predicted_id])
if tokenizer.index_word[predicted_id] == '<end>':
return result, attention_plot
dec_input = tf.expand_dims([predicted_id], 0)
attention_plot = attention_plot[:len(result), :]
return result, attention_plot
def plot_attention(image, result, attention_plot):
temp_image = np.array(Image.open(image))
fig = plt.figure(figsize=(10, 10))
len_result = len(result)
for i in range(len_result):
temp_att = np.resize(attention_plot[i], (8, 8))
grid_size = max(np.ceil(len_result/2), 2)
ax = fig.add_subplot(grid_size, grid_size, i+1)
ax.set_title(result[i])
img = ax.imshow(temp_image)
ax.imshow(temp_att, cmap='gray', alpha=0.6, extent=img.get_extent())
plt.tight_layout()
plt.show()
# captions on the validation set
rid = np.random.randint(0, len(img_name_val))
image = img_name_val[rid]
real_caption = ' '.join([tokenizer.index_word[i]
for i in cap_val[rid] if i not in [0]])
result, attention_plot = evaluate(image)
print('Real Caption:', real_caption)
print('Prediction Caption:', ' '.join(result))
plot_attention(image, result, attention_plot)
image_url = 'https://tensorflow.org/images/surf.jpg'
image_extension = image_url[-4:]
image_path = tf.keras.utils.get_file('image'+image_extension, origin=image_url)
result, attention_plot = evaluate(image_path)
print('Prediction Caption:', ' '.join(result))
plot_attention(image_path, result, attention_plot)
# opening the image
Image.open(image_path)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will produce some plots based on a frequency range to illustrate the concepts
Step2: Back to top
Step3: Back to top
Step4: Back to top
Step5: When Rayleigh damping is used in MDOF systems, the coefficients $\alpha$ and $\beta$ can be computed to give an appropriate damping coefficient over a given frequency range, related to the vibration modes of interest for the dynamic analysis. This is achieved by setting up a simple two-equation system whose solution yields the values of $\alpha$ and $\beta$
Step6: We can check that the Rayleigh damping assumes the required values at the desired frequencies, although it may vary considerably at other frequencies
|
<ASSISTANT_TASK:>
Python Code:
import sys
import math
import numpy as np
import matplotlib as mpl
print('System: {}'.format(sys.version))
for package in (np, mpl):
print('Package: {} {}'.format(package.__name__, package.__version__))
import matplotlib.pyplot as plt
%matplotlib inline
ff = np.linspace(0.01, 6., num=600)
wn = 2.*np.pi*ff
alpha = 0.1
zn_a = alpha/(2.*wn)
plt.plot(wn, zn_a, label='mass proportional')
plt.xlabel('Vibration frequency $\omega_n$ [rad/s]')
plt.ylabel('Damping coefficient $\zeta_n$ [-]')
plt.legend(loc='best')
plt.grid(True)
plt.xlim([0, 35.])
plt.ylim([0, 0.2])
plt.show()
beta = 0.005
zn_b = (beta*wn)/2.
plt.plot(wn, zn_b, label='stiffness proportional')
plt.xlabel('Vibration frequency $\omega_n$ [rad/s]')
plt.ylabel('Damping coefficient $\zeta_n$ [-]')
plt.legend(loc='best')
plt.grid(True)
plt.xlim([0, 35.])
plt.ylim([0, 0.2])
plt.show()
plt.plot(wn, zn_a+zn_b, label='Rayleigh damping')
plt.plot(wn, zn_a, label='mass proportional')
plt.plot(wn, zn_b, label='stiffness proportional')
plt.xlabel('Vibration frequency $\omega_n$ [rad/s]')
plt.ylabel('Damping coefficient $\zeta_n$ [-]')
plt.legend(loc='best')
plt.grid(True)
plt.xlim([0, 35.])
plt.ylim([0, 0.2])
plt.show()
f1, f2 = 1., 4.
z1, z2 = 0.02, 0.05
w1 = 2.*np.pi*f1
w2 = 2.*np.pi*f2
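# Solve the 2x2 linear system zeta_i = alpha/(2*w_i) + (beta*w_i)/2 for i = 1, 2,
# i.e. rows [1/(2*w_i), w_i/2] @ [alpha, beta]^T = zeta_i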
alpha, beta = np.linalg.solve([[1./(2.*w1), w1/2.], [1./(2.*w2), w2/2.]], [z1, z2])
print('Alpha={:.6f}\nBeta={:.6f}'.format(alpha, beta))
zn_a = alpha/(2.*wn)
zn_b = (beta*wn)/2.
plt.plot(wn, zn_a+zn_b, label='Rayleigh damping')
plt.plot(wn, zn_a, label='mass proportional')
plt.plot(wn, zn_b, label='stiffness proportional')
plt.plot(w1, z1, 'o')
plt.plot(w2, z2, 'o')
plt.axvline(w1, ls=':')
plt.axhline(z1, ls=':')
plt.axvline(w2, ls=':')
plt.axhline(z2, ls=':')
plt.xlabel('Vibration frequency $\omega_n$ [rad/s]')
plt.ylabel('Damping coefficient $\zeta_n$ [-]')
plt.legend(loc='best')
plt.xlim([0, 35.])
plt.ylim([0, 0.2])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read all tractor catalogs
Step2: Find matched/unmatched sources (not currently doing anything with these)
Step3: Find matched (nearest-neighbour within 0.5 arcsec) sources
Step4: Now look at the coadd images.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pylab as plt
import numpy as np
from astrometry.util.fits import *
from astrometry.util.plotutils import *
from collections import Counter
from astrometry.libkd.spherematch import *
from astrometry.util.starutil import *
from glob import glob
import fitsio
T4 = merge_tables([fits_table(fn) for fn in glob('bootes/dr4/tractor-*.fits')])
T6 = merge_tables([fits_table(fn) for fn in glob('bootes/dr6/tractor-*.fits')])
print(len(T4), len(T6))
I,J,d = match_radec(T4.ra, T4.dec, T6.ra, T6.dec, 1./3600.)
# unmatched
U4 = np.ones(len(T4), bool)
U6 = np.ones(len(T6), bool)
U4[I] = False
U6[J] = False
U4 = T4[U4]
U6 = T6[U6]
print(len(U4), len(U6), 'unmatched')
I,J,d = match_radec(T4.ra, T4.dec, T6.ra, T6.dec, 0.5/3600., nearest=True)
len(I)
M4 = T4[I]
M6 = T6[J]
plt.hist(d*3600., 20);
plt.xlabel('Distance between matched sources (arcsec)');
cosdec = np.cos(np.deg2rad(36.))
loghist(3600.*(M4.ra - M6.ra)*cosdec, 3600.*(M4.dec - M6.dec));
plt.title('Astrometric offsets DR4 to DR6')
plt.xlabel('delta-RA (arcsec)')
plt.ylabel('delta-Dec (arcsec)')
# Compute mags
with np.errstate(invalid='ignore', divide='ignore'):
M4.g = 2.5*(np.log10(M4.flux_g)-9)
M4.r = 2.5*(np.log10(M4.flux_r)-9)
M4.z = 2.5*(np.log10(M4.flux_z)-9)
M6.g = 2.5*(np.log10(M6.flux_g)-9)
M6.r = 2.5*(np.log10(M6.flux_r)-9)
M6.z = 2.5*(np.log10(M6.flux_z)-9)
loghist(M4.flux_g, M6.flux_g, range=((0, 2.5),(0, 2.5)));
plt.xlabel('DR4 g flux')
plt.ylabel('DR6 g flux');
loghist(M4.flux_r, M6.flux_r, range=((0, 2.5),(0, 2.5)));
plt.xlabel('DR4 r flux')
plt.ylabel('DR6 r flux');
loghist(M4.flux_z, M6.flux_z, range=((0, 2.5),(0, 2.5)));
plt.xlabel('DR4 z flux')
plt.ylabel('DR6 z flux');
# Grab PSF sources
I = np.flatnonzero((M4.type == 'PSF ') * (M6.type == 'PSF '))
P4 = M4[I]
P6 = M6[I]
loghist(P4.flux_z, P6.flux_z, range=((0, 2.5),(0, 2.5)));
plt.xlabel('DR4 PSF flux, z band')
plt.ylabel('DR6 PSF flux, z band');
# Look at flux ratios
I = np.flatnonzero((P4.flux_z > 0) * (P6.flux_z > 0))
loghist(P4.flux_z[I], P6.flux_z[I] / P4.flux_z[I], range=((0, 2.5), (0.8, 1.2)));
plt.xlabel('DR4 PSF flux z')
plt.ylabel('DR6 / DR4 PSF flux z');
loghist(P4.flux_z, P6.flux_z - P4.flux_z, range=((0, 2.5), (-0.2, 0.2)));
plt.axhline(0, color='c')
plt.xlabel('DR4 PSF flux z')
plt.ylabel('DR6 - DR4 PSF flux z');
plt.savefig('fluxdiff.png')
im4 = fitsio.read('bootes/dr4/legacysurvey-2176p345-image-z.fits')
im6 = fitsio.read('bootes/dr6/legacysurvey-2176p345-image-z.fits.fz')
iv4 = fitsio.read('bootes/dr4/legacysurvey-2176p345-invvar-z.fits')
iv6 = fitsio.read('bootes/dr6/legacysurvey-2176p345-invvar-z.fits.fz')
ha=dict(bins=100, histtype='step', range=(-0.05,0.05)) #, log=True)
plt.hist(im4[iv4>0].ravel(), color='k', label='DR4', **ha);
plt.hist(im6[iv6>0].ravel(), color='r', label='DR6', **ha);
plt.legend()
plt.axvline(0, color='b');
plt.xlabel('Coadd image pixel values, z band, one brick');
plt.title('Background estimate, DR4 vs DR6');
plt.savefig('sky1.png')
ha=dict(bins=100, histtype='step', range=(-0.1,0.3), log=True)
plt.hist(im4[iv4>0].ravel(), color='k', **ha);
plt.hist(im6[iv6>0].ravel(), color='r', **ha);
plt.axvline(0, color='b');
ha=dict(bins=100, histtype='step', range=(-0.01,0.01), log=True)
plt.hist(im4[iv4>0].ravel(), color='k', label='DR4', **ha);
plt.hist(im6[iv6>0].ravel(), color='r', label='DR6', **ha);
plt.axvline(0, color='b');
plt.legend();
plt.ylim(6e4, 9e4);
plt.xlabel('Coadd image pixel values, z band, one brick');
plt.title('Background estimate, DR4 vs DR6');
plt.savefig('sky2.png')
print(np.median(im4[iv4>0]))
print(np.median(im6[iv6>0]))
plt.hist((im6-im4)[(iv4>0)*(iv6>0)].ravel(), 100, histtype='step', color='k', range=(-0.003,0.003));
plt.title('Pixel-by-pixel difference DR6 - DR4')
plt.xlabel('Pixel difference, z band (nanomaggies)');
plt.axvline(0, color='b')
plt.savefig('sky3.png');
plt.imshow(im4, interpolation='nearest', origin='lower', vmin=-1, vmax=1);
plt.imshow(iv4, interpolation='nearest', origin='lower');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id="load"></a> 1. Load, inspect, calibrate data
Step2: Centering the dataset
Step3: Affine transformation
Step4: Scale calibration
Step5: <a id="pre"></a> 2. Image processing
Step6: <a id="sim"></a> 3. Simulate template library
Step7: Here we simulate the actual library using diffsims.
Step8: <a id="index"></a> 4. Index results
Step9: <a id="1p1t"></a> Single pattern and single template
Step10: <a id="1pmt"></a> Single pattern, multiple templates
Step11: In the following function we only calculate the match between a radially integrated image and radially integrated templates.
Step12: To find out the (in-plane) angle of best fit, we must do a full correlation matching.
Step13: Returned are the indexes of the retained templates, the angles of best fit, the correlations of the templates, as well as the angle of best fit for the mirrored template and the correlation for this mirrored template.
Step14: It is easy to find the maximum of the returned correlations for each template to index the pattern, but there is also the shorthand function get_n_best_matches to find the n best matches ordered in descending order of correlation index.
Step15: Returned are the indices of the best matching templates, the best-fit in-plane angles, the best correlations, and the "signs", which refer to whether the normal pattern (1) or the mirrored pattern (-1) best fits the experiment.
Step16: We can also verify what the best matching pattern actually looks like as shown below
Step17: <a id="mpmt"></a> Many patterns and many templates
Step18: The structure of the result
Step19: To visualize the data, we plot IPFs. Below is a somewhat ad-hoc solution; a more general solution for visualizing IPFs is under development in orix.
Step20: We can now also use the result to check individual solutions
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
import hyperspy.api as hs
experimental_data = hs.load("data/sample_with_g.hspy", lazy=True)
experimental_data
experimental_data.center_direct_beam(method="blur", half_square_width=50, sigma=1.5)
# Define the affine transform matrices obtained from manual iteration to see what fits templates best
# An automated way to optimize the affine transformation matrix is under construction
matrix_transfo = np.array([[ 0.93356802, -0.04315628, 0. ],
[-0.02749365, 0.96883687, 0. ],
[ 0. , 0. , 1. ]])
experimental_data = experimental_data.apply_affine_transformation(matrix_transfo)
experimental_data.set_diffraction_calibration(1)
from hyperspy.roi import Line2DROI
# a line profile accross known diffraction spots
x1 = -7.47
x2 = 8.63
y1 = -74.63
y2 = 70.83
line = Line2DROI(x1=x1, y1=y1, x2=x2, y2=y2, linewidth=5)
image1 = experimental_data.inav[0, 0]
image1.plot(cmap="viridis", vmax=0.02)
plt.plot([x1, x2], [y1, y2], c = "red")
plt.close()
trace = line(image1).as_signal1D(0)
trace.plot(norm="log")
plt.close()
trace = np.log(trace)
import scipy.signal as sps
sps.find_peaks(trace.data)
# the number of pixels center to 022 peak
dist_002_pixels = (135 - 15)/2
# the distance in 1/A of the 022 peak
g_dist = np.sqrt(8)/3.59
diffraction_calibration = g_dist/dist_002_pixels
print(diffraction_calibration)
experimental_data.set_diffraction_calibration(diffraction_calibration)
from skimage import filters
# Set values lower than a specific value to 0 in the image
def crop_minimum(image, minimum=0.0005):
copied = image.copy()
copied[copied <= minimum] = 0.
return copied
# difference of gaussians serves to remove the background intensity
procdat = experimental_data.subtract_diffraction_background(method="difference of gaussians",
min_sigma=3,
max_sigma=8, )
# smooth out the output
procdat = procdat.map(filters.gaussian, sigma=0.5, inplace=False)
# remove low intensities
procdat = procdat.map(crop_minimum, minimum = 0.0005, inplace=False)
# to get access to the functions of electron diffraction patterns
procdat.set_signal_type(signal_type="electron_diffraction")
plt.imshow(procdat.inav[0,0].data, vmin=0, vmax=0.03)
plt.close()
experimental_data = procdat
from diffsims.generators.rotation_list_generators import get_beam_directions_grid
resolution = 1.5 # maximum angle in degrees between nearest templates. Pretty rough grid for speed.
grid_cub = get_beam_directions_grid("cubic", resolution, mesh="spherified_cube_edge")
print("Number of patterns: ", grid_cub.shape[0])
# this cell serves to visualize the grid of orientations in stereographic projection
# a helper function for turning the euler angle grid to a grid of points in the stereographic projection
def grid_to_xy(grid):
from orix.quaternion.rotation import Rotation
from orix.vector.vector3d import Vector3d
from orix.projections import StereographicProjection
s = StereographicProjection(pole=-1)
rotations_regular = Rotation.from_euler(np.deg2rad(grid))
rot_reg_test = rotations_regular*Vector3d.zvector()
x, y = s.vector2xy(rot_reg_test)
return x, y
fig, ax1 = plt.subplots(1, figsize=(5, 5))
ax1.scatter(*grid_to_xy(grid_cub), s = 1)
ax1.set_xlim(-0.1, 0.5)
ax1.set_ylim(-0.1, 0.5)
plt.close()
import diffpy
from diffsims.libraries.structure_library import StructureLibrary
from diffsims.generators.diffraction_generator import DiffractionGenerator
from diffsims.generators.library_generator import DiffractionLibraryGenerator
# Parameters necessary for simulating a template library
# half size of the images
half_shape = (experimental_data.data.shape[-2]//2, experimental_data.data.shape[-1]//2)
# maximum radius in reciprocal space to calculate spot intensities for
reciprocal_radius = np.sqrt(half_shape[0]**2 + half_shape[1]**2)*diffraction_calibration
# importing the structure
structure_matrix = diffpy.structure.loadStructure("data/austeniteLP.cif")
# "The microscope = the parameters that determine how the templates are calculated"
diff_gen = DiffractionGenerator(accelerating_voltage=200,
precession_angle=0,
scattering_params=None,
shape_factor_model="linear",
minimum_intensity=0.1,
)
lib_gen = DiffractionLibraryGenerator(diff_gen)
# Generating a library
# "Library of structures and orientations"
library_phases_fe = StructureLibrary(["fe"], [structure_matrix], [grid_cub])
# Calculate the actual library
diff_lib_fe = lib_gen.get_diffraction_library(library_phases_fe,
calibration=diffraction_calibration,
reciprocal_radius=reciprocal_radius,
half_shape=half_shape,
with_direct_beam=False,
max_excitation_error=0.07)
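# Each phase entry in the library maps to its simulated patterns; for example,
# diff_lib_fe["fe"]["simulations"] holds one simulated pattern per orientation in the grid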
from pyxem.utils import indexation_utils as iutls
from pyxem.utils import plotting_utils as putls
from pyxem.utils import polar_transform_utils as ptutls
from pyxem.utils import expt_utils as eutls
# pull out a random image and simulation
image = experimental_data.inav[0,0].data
simulation_test = diff_lib_fe["fe"]["simulations"][-1]
# for completeness in the illustration, all keyword arguments are given and explained
# an array of angles and corresponding correlation values are returned
a, c = iutls.get_in_plane_rotation_correlation(
image,
simulation_test,
intensity_transform_function=None, # a function applied both to the image and template intensities before calculating the correlation
delta_r = 1, # sampling in the radial direction
delta_theta = 0.1, # sampling in the azimuthal direction
max_r = None, # maximum radius to consider, by default the distance from the center to the corner
find_direct_beam = True, # convenience, if the pattern was not centered, this will perform a rough centering
direct_beam_position = None, # manually provide the coordinates of the direct beam
normalize_image=True, # divide the correlation by the norm of the image
normalize_template=True, # divide the correlation by the norm of the template
)
fig, ax = plt.subplots()
ax.plot(a, c)
ax.set_xlim(0, 360)
ax.set_xlabel("Angular shift (Degrees)")
ax.set_ylabel("Correlation")
plt.close()
# a convenience plotting function, used here as a sanity check
putls.plot_template_over_pattern(image,
simulation_test,
in_plane_angle=a[np.argmax(c)],
coordinate_system = "cartesian",
size_factor = 10,
vmax=0.02,
max_r = 200,
find_direct_beam=True,
cmap = "inferno"
)
plt.close()
simulations = diff_lib_fe["fe"]["simulations"]
# these arguments are the same as before, for convenience we define them here
delta_r = 1
delta_theta = 1
max_r = None
intensity_transform_function = None
find_direct_beam = True
direct_beam_position = None
normalize_image = True
normalize_templates = True
fast_cors = iutls.correlate_library_to_pattern_fast(
image, simulations, delta_r, delta_theta, max_r, intensity_transform_function, find_direct_beam,
direct_beam_position, normalize_image, normalize_templates,
)
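# fast_cors holds one correlation value per template; loosely speaking, this step
# ignores the in-plane rotation, which is what makes it "fast"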
frac_keep = 0.8
n_keep = None
# if frac_keep < 1 (or 1 < n_keep < number of templates), the full indexation is
# only performed on the templates in "indexes" with the highest "fast" correlation
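# e.g. with frac_keep = 0.8, roughly int(0.8 * grid_cub.shape[0]) templates
# survive the fast pre-filter (our reading of the pyxem behaviour)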
indexes, angles, corrs, angles_m, corrs_m = iutls.correlate_library_to_pattern(
image, simulations, frac_keep, n_keep, delta_r, delta_theta, max_r,
intensity_transform_function, find_direct_beam, direct_beam_position,
normalize_image, normalize_templates,
)
fig, ax = plt.subplots(ncols = 3, nrows = 2, figsize = (10, 6))
x, y = grid_to_xy(grid_cub)
ax = ax.ravel()
for i in ax:
i.set_xlim(-0.05, 0.45)
i.set_ylim(-0.05, 0.4)
i.axis("off")
i.set_aspect("equal")
s = 10
# to ensure the same color scale on mirror and non-mirror
all_cors = np.stack([corrs, corrs_m])
vmin = all_cors.min()
vmax = all_cors.max()
ax[0].scatter(x[indexes], y[indexes], s = s, c = corrs, cmap = "inferno", vmin = vmin, vmax = vmax)
ax[0].set_title("Correlations")
ax[1].scatter(x[indexes], y[indexes], s = s, c = corrs_m, cmap = "inferno", vmin = vmin, vmax = vmax)
ax[1].set_title("Mirrored correlations")
ax[2].scatter(x, y, s = s, c = fast_cors, cmap = "inferno")
ax[2].set_title("Fast correlation")
ax[3].scatter(x[indexes], y[indexes], s = s, c = angles, cmap = "hsv")
ax[3].set_title("Angles")
ax[4].scatter(x[indexes], y[indexes], s = s, c = angles_m, cmap = "hsv")
ax[4].set_title("Mirrored angles")
plt.close()
# The number of best answers to return, not to be confused with n_keep!
n_best = 5
indices_n, angles_n, correlations_n, signs_n = iutls.get_n_best_matches(image,
simulations,
n_best,
frac_keep,
n_keep,
delta_r,
delta_theta,
max_r,
intensity_transform_function,
find_direct_beam,
direct_beam_position,
normalize_image,
normalize_templates,
)
fig, ax = plt.subplots(figsize = (5, 5))
ax.set_aspect("equal")
ax.scatter(x[indexes], y[indexes], s = 50, c = corrs, cmap = "Greys_r", vmin = vmin, vmax = vmax)
for j, i in enumerate(indices_n):
    ax.scatter(x[i], y[i], s = s*5, label = f"{j}: {correlations_n[j]:.3f}")
ax.legend()
plt.close()
# signs will be 1 for the upper triangle, -1 for the lower triangle
mirrored = signs_n[0] == -1
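# (the template grid only samples a reduced zone of orientation space; matching the
# mirrored template covers the symmetry-equivalent orientations — our reading)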
putls.plot_template_over_pattern(image,
simulations[indices_n[0]],
in_plane_angle=angles_n[0],
coordinate_system = "cartesian",
size_factor = 10,
vmax=0.01,
max_r = 200,
mirrored_template=mirrored,
find_direct_beam=True,
cmap = "inferno"
)
plt.close()
# let's not throw away any templates prematurely and perform a full calculation on all
frac_keep = 1
result, phasedict = iutls.index_dataset_with_template_rotation(experimental_data,
diff_lib_fe,
phases = ["fe"], # if we have multiple phases we can also specify which ones we want to consider. If it's not specified, all phases are used.
n_best = n_best,
frac_keep = frac_keep,
n_keep = n_keep,
delta_r = delta_r,
delta_theta = delta_theta,
max_r = 115,
intensity_transform_function=intensity_transform_function,
normalize_images = normalize_image,
normalize_templates=normalize_templates,
)
result.keys()
result["phase_index"].shape
solution = result["orientation"]
import matplotlib.colors as mcolors
from orix.projections import StereographicProjection
# map a vector onto the fundamental zone of the cube
def to_fundamental(data_sol):
data_sol = np.abs(data_sol)
data_sol = np.sort(data_sol, axis=-1)
column = data_sol[...,0].copy()
data_sol[..., 0] = data_sol[...,1]
data_sol[..., 1] = column
return data_sol
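# quick illustrative check (not from the original notebook): permutation- and
# sign-related axes should collapse onto the same fundamental-zone representative
assert np.allclose(to_fundamental(np.array([-1., 0., 0.])),
                   to_fundamental(np.array([0., 1., 0.])))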
def get_ipf_color(vectors):
    # the following column vectors should map onto red [100], green [010], and blue
    # [001] respectively; the inverse of this matrix therefore maps beam directions
    # onto the corresponding RGB color vectors
color_corners = np.array([[0, 1, 1],
[0, 0, 1],
[1, 1, 1]])
color_mapper = np.linalg.inv(color_corners)
# a bit of wrangling
data_sol = to_fundamental(vectors.data)
    flattened = data_sol.reshape(np.prod(data_sol.shape[:-1]), 3).T  # np.prod; np.product is deprecated
rgb_mapped = np.dot(color_mapper, flattened)
rgb_mapped = np.abs(rgb_mapped / rgb_mapped.max(axis=0)).T
rgb_mapped = rgb_mapped.reshape(data_sol.shape)
return rgb_mapped
from orix.quaternion.rotation import Rotation
from orix.vector.vector3d import Vector3d
# draw IPF-Z (row 1), IPF-Y (row 2), IPF-X (row 3)
fig, ax = plt.subplots(ncols = solution.shape[2], nrows = 3, figsize = (10, 6))
for i in range(solution.shape[2]):
solution_vectors_z = Rotation.from_euler(np.deg2rad(solution[:,:,i,:]))*Vector3d.zvector()
solution_vectors_y = Rotation.from_euler(np.deg2rad(solution[:,:,i,:]))*Vector3d.yvector()
solution_vectors_x = Rotation.from_euler(np.deg2rad(solution[:,:,i,:]))*Vector3d.xvector()
ax[0, i].set_title(f"Solution {i}")
ax[0, i].imshow(get_ipf_color(solution_vectors_z))
ax[1, i].imshow(get_ipf_color(solution_vectors_y))
ax[2, i].imshow(get_ipf_color(solution_vectors_x))
ax[0,0].set_ylabel("IPF-Z")
ax[1,0].set_ylabel("IPF-Y")
ax[2,0].set_ylabel("IPF-X")
fig.tight_layout()
plt.close()
# scan coordinate to check
px = 20
py = 15
# which solution to plot
n_sol = 0
# query the necessary info from the solution
sim_sol_index = result["template_index"][py, px, n_sol]
mirrored_sol = result["mirrored_template"][py, px, n_sol]
in_plane_angle = result["orientation"][py, px, n_sol, 0]  # NOTE: the first Euler angle is the in-plane angle!
# query the appropriate template
sim_sol = simulations[sim_sol_index]
fig, ax = plt.subplots(ncols=2, figsize=(8, 4))
# calculating and plotting the IPF
solution_vectors_y = Rotation.from_euler(np.deg2rad(solution[:,:,n_sol,...]))*Vector3d.yvector()
ax[0].imshow(get_ipf_color(solution_vectors_y))
# plotting the point we chose
ax[0].scatter([px], [py], c="black", s=50)
# plotting the diffraction pattern and template
putls.plot_template_over_pattern(experimental_data.inav[px, py].data,
sim_sol,
ax=ax[1],
in_plane_angle=in_plane_angle,
coordinate_system = "cartesian",
size_factor = 10,
vmax=0.02,
max_r = 200,
mirrored_template=mirrored_sol,
find_direct_beam=True,
cmap = "inferno",
marker_color = "green"
)
for i in ax:
i.axis("off")
plt.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <center><img src="images/tf_logo.jpg" width="70%"/></center>
Step2: Munge data
Step3: RTFM - https
Step4: <center><img src="images/dense.png" width="55%"/></center>
Step5: <center><img src="images/waiting.jpg" width="55%"/></center>
Step6: Keras' Other Features
|
<ASSISTANT_TASK:>
Python Code:
reset -fs
import keras
# What is the backend / execution engine?
keras.backend.backend()
# Import data
from keras.datasets import mnist
# Setup train and test splits
(x_train, y_train), (x_test, y_test) = mnist.load_data()
from random import randint
from matplotlib import pyplot
%matplotlib inline
pyplot.imshow(x_train[randint(0, x_train.shape[0])], cmap='gray_r');
# Munge Data
# Transform from matrix to vector, cast, and normalize
image_size = 784 # 28 x 28
x_train = x_train.reshape(x_train.shape[0], image_size) # Transform from matrix to vector
x_train = x_train.astype('float32') # Cast as 32-bit floats
x_train /= 255 # Normalize inputs from 0-255 to 0.0-1.0
x_test = x_test.reshape(x_test.shape[0], image_size) # Transform from matrix to vector
x_test = x_test.astype('float32') # Cast as 32-bit floats
x_test /= 255 # Normalize inputs from 0-255 to 0.0-1.0
# Convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
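# e.g. the label 3 becomes the one-hot vector [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]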
# Import the most common type of neural network
from keras.models import Sequential
# Define model instance
model = Sequential()
# Import the most common type of network layer, fully interconnected
from keras.layers import Dense
# Define input layer
layer_input = Dense(units=512, # Number of nodes
activation='sigmoid', # The nonlinearity
input_shape=(image_size,))
model.add(layer_input)
# Define another layer
model.add(Dense(units=512, activation='sigmoid'))
# Define output layers
layer_output = Dense(units=10, # Number of digits (0-9)
activation='softmax') # Convert neural activation to probability of category
model.add(layer_output)
# Print summary
model.summary()
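# sanity check on the summary: (784*512 + 512) + (512*512 + 512) + (512*10 + 10)
# = 669,706 trainable parameters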
# Yes - we compile the model to run it
model.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
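# note: categorical_crossentropy expects one-hot labels (hence to_categorical
# above); sparse_categorical_crossentropy would accept the raw integer labels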
# Train the model
training = model.fit(x_train,
y_train,
epochs=5, # Number of passes over complete dataset
verbose=True,
validation_split=0.1)
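# the returned History object records per-epoch metrics, e.g.
# training.history['loss'] (exact key names vary across Keras versions)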
# Let's see how well our model performs
loss, accuracy = model.evaluate(x_test,
y_test,
verbose=True)
print(f"Test loss: {loss:.3}")
print(f"Test accuracy: {accuracy:.3%}")
# reset -fs
# from keras import *
# whos
# from keras.datasets import fashion_mnist
# # Setup train and test splits
# (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
# from random import randint
# from matplotlib import pyplot
# %matplotlib inline
# pyplot.imshow(x_train[randint(0, x_train.shape[0])], cmap='gray_r');
# # Define CNN model
# # Redefine input dimensions to make sure conv works
# img_rows, img_cols = 28, 28
# x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
# x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
# input_shape = (img_rows, img_cols, 1)
# import keras
# # Convert class vectors to binary class matrices
# y_train = keras.utils.to_categorical(y_train, 10)
# y_test = keras.utils.to_categorical(y_test, 10)
# from keras.layers import Conv2D, Dense, Flatten, MaxPooling2D
# # Define model
# model = Sequential()
# model.add(Conv2D(32,
# kernel_size=(3, 3),
# activation='sigmoid',
# input_shape=input_shape))
# model.add(Conv2D(64, (3, 3), activation='sigmoid'))
# model.add(MaxPooling2D(pool_size=(2, 2)))
# model.add(Flatten())
# model.add(Dense(128, activation='sigmoid'))
# model.add(Dense(10, activation='softmax'))
# model.compile(loss='categorical_crossentropy',
# optimizer='adam',
# metrics=['accuracy'])
# # Define training
# training = model.fit(x_train,
# y_train,
# epochs=5,
# verbose=True,
# validation_split=0.1)
# loss, accuracy = model.evaluate(x_test,
# y_test,
# verbose=True)
# print(f"Test loss: {loss:.3}")
# print(f"Test accuracy: {accuracy:.3%}")
<END_TASK>
|