| Unnamed: 0 | cleaned_code | cleaned_prompt |
|---|---|---|
6,500
|
<ASSISTANT_TASK:>
Python Code:
import sys
try:
import docplex.cp
except ImportError:
if hasattr(sys, 'real_prefix'):
#we are in a virtual env.
!pip install docplex
else:
!pip install --user docplex
from docplex.cp.model import CpoModel
from sys import stdout
NB_QUEEN = 8
mdl = CpoModel(name="NQueen")
# Create column index of each queen
x = mdl.integer_var_list(NB_QUEEN, 0, NB_QUEEN - 1, "X")
# One queen per row
mdl.add(mdl.all_diff(x))
# One queen per diagonal: xi + i != xj + j
mdl.add(mdl.all_diff(x[i] + i for i in range(NB_QUEEN)))
# One queen per diagonal: xi - i != xj - j
mdl.add(mdl.all_diff(x[i] - i for i in range(NB_QUEEN)))
print("\nSolving model....")
msol = mdl.solve(TimeLimit=10)
try:
import numpy as np
import matplotlib.pyplot as plt
VISU_ENABLED = True
except ImportError:
VISU_ENABLED = False
def display(sol):
%matplotlib inline
chess_board = np.zeros((NB_QUEEN, NB_QUEEN, 3))
black = 0.5
white = 1
for l in range(NB_QUEEN):
for c in range(NB_QUEEN):
if (l%2 == c%2):
col = white
else:
col = black
chess_board[l,c,::]=col
fig, ax = plt.subplots(figsize=(NB_QUEEN / 2, NB_QUEEN / 2))
ax.imshow(chess_board, interpolation='none')
# wq_im_file = "./n_queen_utils/WQueen.png"
# bq_im_file = "./n_queen_utils/BQueen.png"
wq_im_file = "https://github.com/IBMDecisionOptimization/docplex-examples/blob/master/examples/cp/jupyter/n_queen_utils/WQueen.png?raw=true"
bq_im_file = "https://github.com/IBMDecisionOptimization/docplex-examples/blob/master/examples/cp/jupyter/n_queen_utils/BQueen.png?raw=true"
wq = plt.imread(wq_im_file)
bq = plt.imread(bq_im_file)
for y, x in enumerate(sol):
if (x%2 == y%2):
queen = bq
else:
queen = wq
ax.imshow(queen, extent=[x-0.4, x + 0.4, y - 0.4, y + 0.4])
ax.set(xticks=[], yticks=[])
ax.axis('image')
plt.show()
if msol:
stdout.write("Solution:")
sol = [msol[v] for v in x]
for v in range(NB_QUEEN):
stdout.write(" " + str(sol[v]))
stdout.write("\n")
stdout.write("Solve time: " + str(msol.get_solve_time()) + "\n")
if VISU_ENABLED:
display(sol)
else:
stdout.write("No solution found\n")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note that the broader package *docplex* contains another subpackage, *docplex.mp*, that is dedicated to Mathematical Programming, another branch of optimization (a short docplex.mp sketch follows this step list).
Step 2
Step2: Set model parameters
Step3: Step 3
Step4: Define the decision variables
Step5: Express the business constraints
Step6: Solve the model
Step7: Step 4
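For contrast with the constraint-programming model above, here is a minimal *docplex.mp* sketch; the tiny linear program and its variable names are made up for illustration, only the import path and the Model API come from the docplex package.
from docplex.mp.model import Model
mdl_lp = Model(name="tiny_lp")
x1 = mdl_lp.continuous_var(name="x1", ub=4)
x2 = mdl_lp.continuous_var(name="x2", ub=3)
mdl_lp.add_constraint(x1 + 2 * x2 <= 8)
mdl_lp.maximize(3 * x1 + 2 * x2)
sol = mdl_lp.solve()
print(sol.objective_value)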
|
6,501
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides[:24*10].plot(x='dteday', y='cnt')
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
#self.activation_function = lambda x : 0 # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
def sigmoid(x):
return 1 / (1 + np.exp(-x))
self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(self.weights_hidden_to_output, error)
# TODO: Backpropagated error terms - Replace these values with your calculations.
output_error_term = error * 1
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:,None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:,None]
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
import sys
### Set the hyperparameters here ###
iterations = 3500
learning_rate = 0.9
hidden_nodes = 9
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare the data
Step2: Checking out the data
Step3: Dummy variables
Step4: Scaling target variables
Step5: Splitting the data into training, testing, and validation sets
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network (the gradient expressions implemented in the train method are written out after this step list)
Step8: Unit tests
Step9: Training the network
Step10: Check out your predictions
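For reference, a sketch of the gradient steps the train method implements, assuming (as the code does) sigmoid hidden units and an identity output activation; this restates the code in math rather than deriving anything new:

$$\delta_o = y - \hat{y}, \qquad \delta_h = (W_{ho}\,\delta_o) \odot h \odot (1 - h)$$

$$W_{ho} \mathrel{+}= \frac{\eta}{N} \sum h\,\delta_o^\top, \qquad W_{ih} \mathrel{+}= \frac{\eta}{N} \sum x\,\delta_h^\top$$

where $h$ is the hidden activation vector, $\eta$ the learning rate, and $N$ the number of records in the batch.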
|
6,502
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
with open('sounds/wavToTag.txt') as f:
labels = array([l.replace('\n', '') for l in f.readlines()])
print set(labels)
import librosa
mfccs = {}
for i in range(len(labels)):
y, sr = librosa.load('sounds/{}.wav'.format(i))
mfcc = librosa.feature.mfcc(y, sr, n_mfcc=13)
mfccs[i] = mfcc.T
def generate_train_test_set(P):
train = []
test = []
for s in set(labels):
idx = find(labels == s)
shuffle(idx)
train += idx[:-P].tolist()
test += idx[-P:].tolist()
return train, test
from dtw import dtw
# Memoise pairwise DTW distances so repeated cross-validation runs are cheap
D = ones((len(labels), len(labels))) * -1
def cross_validation(train, test):
score = 0.0
for i in test:
x = mfccs[i]
dmin, jmin = inf, -1
for j in train:
y = mfccs[j]
d = D[i, j]
if d == -1:
d, _, _, _ = dtw(x, y, dist=lambda x, y: norm(x - y, ord=1))
D[i, j] = d
if d < dmin:
dmin = d
jmin = j
score += 1.0 if (labels[i] == labels[jmin]) else 0.0
return score / len(test)
train, test = generate_train_test_set(P=1)
rec_rate = cross_validation(train, test)
print 'Recognition rate {}%'.format(100. * rec_rate)
P = arange(1, 10)
N = 5
rec = []
for p in P:
r = [cross_validation(*generate_train_test_set(p)) for _ in range(N)]
rec.append(r)
rec = array(rec)
rec = rec.reshape((N, -1))
errorbar(P - 0.5, mean(rec, axis=0), yerr=std(rec, axis=0))
xticks(P - 0.5, P)
ylim(0, 1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Precompute all MFCCs
Step2: Leave P Out Cross Validation
Step3: The next plot may take a while to compute!
|
6,503
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
plt.imshow(mnist.train.images[0].reshape([28, 28]), cmap=plt.get_cmap('gray_r'))  # index 0 chosen arbitrarily
mnist.train.labels[0]
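# The placeholders x, y and drop_prob are used below but never defined in this
# dump; a minimal sketch of the missing definitions (shapes inferred from the
# 784-pixel MNIST images and 10 one-hot labels; names taken from later usage).
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])
drop_prob = tf.placeholder(tf.float32)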
x_image = tf.reshape(x, [-1, 28, 28, 1])  # 784-pixel rows back to 28x28x1 images
x_image
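# conv1/pool1/conv2/pool2, y_hat, loss and train are referenced below but their
# definitions are missing from this dump. A minimal sketch following the step
# description (5x5 kernels, 32 then 64 filters, 'same' padding, ReLU, 2x2 max
# pooling); the dense width (1024) and the optimizer are assumptions.
# The first conv2d gets the default scope name 'conv2d', matching the
# variable lookup further below.
conv1 = tf.layers.conv2d(x_image, 32, [5, 5], padding='same', activation=tf.nn.relu)
pool1 = tf.layers.max_pooling2d(conv1, [2, 2], strides=2)
conv2 = tf.layers.conv2d(pool1, 64, [5, 5], padding='same', activation=tf.nn.relu)
pool2 = tf.layers.max_pooling2d(conv2, [2, 2], strides=2)
flat = tf.reshape(pool2, [-1, 7 * 7 * 64])  # merge dims 2-4 into one
dense = tf.layers.dense(flat, 1024, activation=tf.nn.relu)
dropped = tf.nn.dropout(dense, keep_prob=drop_prob)  # drop_prob=1.0 disables dropout
y_hat = tf.layers.dense(dropped, 10)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=y_hat))
train = tf.train.AdamOptimizer(1e-4).minimize(loss)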
print(x_image.get_shape())
print(conv1.get_shape())
print(pool1.get_shape())
print(conv2.get_shape())
print(pool2.get_shape())
sess = tf.Session()
sess.run(tf.global_variables_initializer())
costs = []
for i in range(5000):
x_data, y_data = mnist.train.next_batch(100)
_, cost = sess.run([train, loss],
feed_dict={x: x_data, y: y_data, drop_prob: 0.5})
costs.append(cost)
plt.plot(costs)
for i in range(5):
plt.imshow(mnist.test.images[i].reshape([28, 28]), cmap=plt.get_cmap('gray_r'))
plt.show()
print(sess.run(tf.argmax(y_hat,1), feed_dict={x: mnist.test.images[i].reshape([1,784]),
drop_prob: 1.0}))
[x.name for x in tf.global_variables()]
with tf.variable_scope('conv2d', reuse=True):
kernel = tf.get_variable('kernel')
weight = sess.run(kernel)
weight.shape
fig, axes = plt.subplots(4, 8, figsize=(10, 10))
for i in range(4):
for j in range(8):
axes[i][j].imshow(weight[:, :, :, i*8+j].reshape([5, 5]), cmap=plt.get_cmap('gray_r'))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: mnist.train.images contains the training images and mnist.test.images contains the test images. Check the size of this data.
Step2: mnist.train.labels holds the target values y. Check one of the 55,000 labels to confirm that they were loaded as one-hot vectors.
Step3: With 55,000 examples, the training data is too large to process at once, so we use mini-batch gradient descent. To use mini-batches, we repeatedly slice off part of the training data and feed it into the TensorFlow model.
Step4: To apply a convolution we use the tf.layers.conv2d function. The kernel size is 5x5 and we use 32 kernels. The stride is 1x1 with 'same' padding, and ReLU is used as the activation function.
Step5: The second convolution uses 64 kernels of size 5x5. The stride is 1x1 with 'same' padding, and ReLU is again the activation function.
Step6: To connect to the dense network, we flatten the result of the second pooling layer with the reshape command, merging dimensions 2 through 4 into one regardless of the first dimension.
Step7: Iterating 5,000 times, we draw 100 examples at a time from the training data (mnist.train.next_batch) and feed them to the model together with the dropout ratio. To feed a placeholder, pass a dictionary mapping the placeholder to its value via the feed_dict parameter.
Step8: Plot the costs list as a graph.
Step9: Run the accuracy node we built to compute the accuracy. The input here is mnist.test, data that was not used during training. When computing accuracy, set the dropout ratio to 1 so that all neurons are used.
Step10: Print all the trained variables. There are weights and biases for the two convolution layers and weights and biases for the two dense layers.
Step11: Extract the values of the weight tensor of the first convolution layer. As defined above, these weights are 32 kernels of size 5x5.
Step12: Let's display these weights one image at a time. Can you see with your own eyes what the first convolution layer has learned?
|
6,504
|
<ASSISTANT_TASK:>
Python Code:
def doubler(input_number):
return input_number*2
doubler(45)
temp_fn = lambda arg : arg*2
temp_fn(55)
type(temp_fn)
l1 = [1,2,3,4,5,6,7]
#to double elements in this list using list comp
l1_double = [i*2 for i in l1]
l1_double
#double using map and a function
l1_double_2 = list(map(doubler, l1))
l1_double_2
#double using map and a lambda function
l1_double_3 = list(map(lambda arg:arg*2, l1))
l1_double_3
#find only the odd numbers --> list comp way
l1_odd = [i for i in l1 if i%2 > 0]
l1_odd
# find only odd numbers --> filter with lambda way
l1_odd_2 = list(filter(lambda arg:arg%2>0, l1))
l1_odd_2
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: It looks silly now, but lambdas work great with map and other productivity functions. You can have other methods and functions that do the heavy lifting and call them in a particular order from a lambda; a sketch chaining map and filter follows this step list.
Step2: filter function
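A quick sketch chaining the two (a hypothetical one-liner, reusing the l1 list from above) to double only the odd numbers:
list(map(lambda arg: arg * 2, filter(lambda arg: arg % 2 > 0, l1)))
# [2, 6, 10, 14]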
|
6,505
|
<ASSISTANT_TASK:>
Python Code:
%tensorflow_version 1.x
!curl -Lo deepchem_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import deepchem_installer
%time deepchem_installer.install(version='2.3.0')
import deepchem as dc
import numpy as np
import matplotlib.pyplot as plot
tasks, datasets, transformers = dc.molnet.load_sampl(reload=False)
train_dataset, valid_dataset, test_dataset = datasets
model = dc.models.MultitaskRegressor(len(tasks), 1024, uncertainty=True)
model.fit(train_dataset, nb_epoch=200)
y_pred, y_std = model.predict_uncertainty(test_dataset)
# Generate some fake data and plot a regression line.
x = np.linspace(0, 5, 10)
y = 0.15*x + np.random.random(10)
plot.scatter(x, y)
fit = np.polyfit(x, y, 1)
line_x = np.linspace(-1, 6, 2)
plot.plot(line_x, np.poly1d(fit)(line_x))
plot.show()
plot.figure(figsize=(12, 3))
line_x = np.linspace(0, 5, 50)
for i in range(3):
plot.subplot(1, 3, i+1)
plot.scatter(x, y)
fit = np.polyfit(np.concatenate([x, [3]]), np.concatenate([y, [i]]), 10)
plot.plot(line_x, np.poly1d(fit)(line_x))
plot.show()
abs_error = np.abs(y_pred.flatten()-test_dataset.y.flatten())
plot.scatter(y_std.flatten(), abs_error)
plot.xlabel('Standard Deviation')
plot.ylabel('Absolute Error')
plot.show()
plot.hist(abs_error/y_std.flatten(), 20)
plot.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We'll use the SAMPL dataset from the MoleculeNet suite to run our experiments in this tutorial. Let's load up our dataset for our experiments, and then make some uncertainty predictions.
Step2: All of this looks exactly like any other example, with just two differences. First, we add the option uncertainty=True when creating the model. This instructs it to add features to the model that are needed for estimating uncertainty. Second, we call predict_uncertainty() instead of predict() to produce the output. y_pred is the predicted outputs. y_std is another array of the same shape, where each element is an estimate of the uncertainty (standard deviation) of the corresponding element in y_pred. And that's all there is to it! Simple, right?
Step3: The line clearly does not do a great job of fitting the data. There are many possible reasons for this. Perhaps the measuring device used to capture the data was not very accurate. Perhaps y depends on some other factor in addition to x, and if we knew the value of that factor for each data point we could predict y more accurately. Maybe the relationship between x and y simply isn't linear, and we need a more complicated model to capture it. Regardless of the cause, the model clearly does a poor job of predicting the training data, and we need to keep that in mind. We cannot expect it to be any more accurate on test data than on training data. This is known as aleatoric uncertainty.
Step4: Each of them perfectly interpolates the data points, yet they clearly are different models. (In fact, there are infinitely many 10th degree polynomials that exactly interpolate any ten data points.) They make identical predictions for the data we fit them to, but for any other value of x they produce different predictions. This is called epistemic uncertainty. It means the data does not fully constrain the model. Given the training data, there are many different models we could have found, and those models make different predictions (a sketch of how the two kinds of uncertainty combine follows this step list).
Step5: The first thing we notice is that the axes have similar ranges. The model clearly has learned the overall magnitude of errors in the predictions. There is also a clear correlation between the axes: values with larger uncertainties tend, on average, to have larger errors.
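A common way to combine the two kinds of uncertainty discussed above is to add their variances; whether DeepChem's predict_uncertainty() uses exactly this formula internally is an assumption on my part:

$$\sigma_{total}^2 = \sigma_{aleatoric}^2 + \sigma_{epistemic}^2$$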
|
6,506
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
# Load PyMT model(s)
import pymt.models
ec = pymt.models.ECSimpleSnow()
#Call setup to get default config and data files.
defaults = ec.setup('.')
print(defaults)
cfg_filename = defaults[0]
%cat $cfg_filename
# Initialize the model with the defaults.
ec.initialize('snow_model.cfg')
ec.set_value('snow_class',2)
ec.set_value('open_area_or_not', 1)
# List input and output variable names.
print(ec.get_output_var_names())
print(ec.get_input_var_names())
plt.figure(figsize=[4,9])
h0 = plt.subplot(3,1,1)
h1 = plt.subplot(3,1,2)
h2 = plt.subplot(3,1,3)
h0.title.set_text('Snow Depth')
h1.title.set_text('Snow Density')
h2.title.set_text('Air Temperature')
print('Air Temperature Unit:', ec.get_var_units('land_surface_air__temperature'))
print('Snow Depth Unit:' , ec.get_var_units('snowpack__depth'))
print('Snow Density Unit:' , ec.get_var_units('snowpack__mass-per-volume_density'))
for i in np.arange(365):
ec.update()
tair = ec.get_value('land_surface_air__temperature')
snd = ec.get_value('snowpack__depth', units='m')
rsn = ec.get_value('snowpack__mass-per-volume_density')
units = ec.get_var_units('snowpack__depth')
h0.scatter(ec.time, snd, c='k')
h1.scatter(ec.time, rsn, c='k')
h2.scatter(ec.time,tair, c='k')
# ec.finalize()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load example configuration and inputs
Step2: Initialize by using default example data
Step3: Implement the simple snow model for the first year as an example
|
6,507
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division, print_function
import itertools
import numpy as np
import sympy as sp
import matplotlib.pyplot as plt
#from pyodesys.native.gsl import NativeGSLSys as SymbolicSys
from pyodesys.native.cvode import NativeCvodeSys as SymbolicSys
sp.init_printing()
%matplotlib inline
print(sp.__version__)
vdp1 = lambda x, y, p: [y[1], -y[0] + p[0]*y[1]*(1 - y[0]**2)]
mu_val = 2.5
y0_1 = [0.0, 1.0]
y0_1, (y0_1[0], vdp1(0, y0_1, [mu_val])[0])
transf = lambda y, dydx, p: [y[0], y[0] - y[0]**3/3 - dydx[0]/p[0]]
x, mu = sp.symbols('x mu', real=True)
y = [yi(x) for yi in sp.symbols('y:2', cls=sp.Function)]
dydx = [yi.diff(x) for yi in y]
[sp.Eq(yi, expr, evaluate=False) for yi, expr in zip(y, transf(y, dydx, [mu]))] # Just for displaying
vdp2 = lambda x, y, p: [p[0]*(y[0] - y[0]**3/3 - y[1]), y[0]/p[0]]
calc_y0_2 = lambda y0, mu: transf(y0, vdp1(0, y0, [mu]), [mu])
y0_2 = calc_y0_2(y0_1, mu_val)
(y0_2, y0_2[0], vdp2(0, y0_2, [mu_val])[0])
def solve_and_plot(odesys, y0, tout, mu, indices=None, integrator='native', **kwargs):
plt.figure(figsize=(16, 4))
xout, yout, info = odesys.integrate(tout, y0, [mu], integrator=integrator, **kwargs)
plt.subplot(1, 2, 1)
odesys.plot_result(indices=indices, ls=('-',), c=('k', 'r'))
plt.legend(loc='best')
plt.subplot(1, 2, 2)
odesys.plot_phase_plane()
info.pop('internal_xout') # too much output
info.pop('internal_yout')
return len(xout), info
tend = 25
odesys1 = SymbolicSys.from_callback(vdp1, 2, 1, names='y0 y1'.split())
odesys1.exprs
for mu in [0, 3, 9]:
solve_and_plot(odesys1, y0_1, np.linspace(0, tend, 500), mu)
tau = lambda mu: 1.6137056388801094*mu + 4.676*mu**(-1./3)
for mu in [20, 40, 60]:
solve_and_plot(odesys1, y0_1, np.linspace(0, 5*tau(mu), 500), mu)
odesys2 = SymbolicSys.from_callback(vdp2, 2, 1, names='y0 y1'.split())
odesys2.exprs
solve_and_plot(odesys2, y0_2, tend, mu_val, nsteps=2000)
ls = itertools.cycle(('-', '--', ':'))
for mu in [84, 160, 320]:
y0_2 = calc_y0_2(y0_1, mu)
print(y0_2)
solve_and_plot(odesys2, y0_2, np.linspace(0, 5*tau(mu), 500), mu)
solve_and_plot(odesys2, calc_y0_2(y0_1, mu_val), tend, mu_val, nsteps=2000)
J = odesys2.get_jac()
J
odesys2._NativeCode._written_files
symbs = odesys2.dep + tuple(odesys2.params)
symbs
Jeig = J.eigenvals().keys()
eig_cbs = [sp.lambdify(symbs, eig, modules='numpy') for eig in Jeig]
Jeig
eigvals = np.array([(eig_cbs[0](*(tuple(yvals)+(mu_val + 0j,))),
eig_cbs[1](*(tuple(yvals)+(mu_val + 0j,)))) for yvals in odesys2._internal[1]])
plt.plot(odesys2._internal[0], odesys2.stiffness(), label='from SVD')
plt.plot(odesys2._internal[0], np.abs(eigvals[:,0])/np.abs(eigvals[:,1]), label='analytic')
plt.legend()
def arr_to_wav(arr, rate=44100):
from IPython.display import Audio
from scipy.io.wavfile import write
scaled = np.int16(arr/np.max(np.abs(arr)) * 32767)
write('test.wav', rate, scaled)
return Audio('test.wav')
xout, yout, info = odesys2.integrate(np.linspace(0, 500*tau(40.0), 2*44100), y0_1, [40.0], integrator='native')
arr_to_wav(yout[:, 0])
def overlay(tend_mu, odesys=odesys2, time=3, rate=44100, plot=False):
yout_tot = None
for tend, mu in tend_mu:
xout, yout, info = odesys.integrate(np.linspace(0, tend*tau(mu[0]), time*rate), y0_1, mu, integrator='native')
print(tend, mu, tend*tau(mu[0]))
if yout_tot is None:
yout_tot = yout[:, 0]
else:
yout_tot += yout[:, 0]
if plot:
plt.figure(figsize=(16,4))
plt.plot(yout_tot[slice(None) if plot is True else slice(0, plot)])
return arr_to_wav(yout_tot, rate=rate)
overlay([
(400, [2.0]),
(410, [2.1]),
], plot=10000)
vdp_forced = lambda x, y, p: [y[1], p[1]*sp.sin(p[2]*x) - y[0] + p[0]*y[1]*(1 - y[0]**2)]
odesys_forced = SymbolicSys.from_callback(vdp_forced, 2, 3)
overlay([(700, [8, 1, 0.5])], odesys_forced, plot=5000) # Non-chaotic behavior
overlay([(700, [8, 1.2, 0.6])], odesys_forced, plot=5000) # Chaotic behavior
vdp_transient = lambda x, y, p: [y[1], - y[0] + p[0]*sp.exp(-p[1]*x)*y[1]*(1 - y[0]**2)]
odesys_transient = SymbolicSys.from_callback(vdp_transient, 2, 2)
odesys_transient.exprs
overlay([
(440, [0.1, 1/2500.]),
(445, [0.5, 1/1000.]),
(890, [0.1, 2/2500.]),
(896, [0.5, 2/1000.]),
], odesys_transient, plot=-1)
odesys2._native._written_files
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note that we imported NativeCvodeSys as SymbolicSys; this speeds up integration by more than an order of magnitude by using compiled C++ code for our mathematical expressions.
Step2: An alternative would be to use use the Liรฉnard transformation
Step3: which gives us (we could generate this result using SymPy)
Step4: As we see, the period ($\tau$) varies with $\mu$; in 1952 Mary Cartwright derived an approximate formula for $\tau$ (valid for large $\mu$), written out after this step list.
Step5: For larger values of $\mu$ we run into trouble (the numerical solver fails).
Step6: This looks much better. Let's see if the solver has an easier time dealing with this formulation of y2 for large values of $\mu$.
Step7: Indeed it has.
Step8: For this simple system we can afford calculating the eigenvalues analytically
Step9: Audio
Step10: Forced van der Pol oscillator
Step11: Transient $\mu$
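For reference, the Cartwright approximation coded as tau above, written as a formula; matching 1.6137056388801094 to $3 - 2\ln 2$ is exact, while reading 4.676 as roughly twice the first zero of the Airy function is my interpretation of the constant:

$$\tau(\mu) \approx (3 - 2\ln 2)\,\mu + \frac{4.676}{\mu^{1/3}}$$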
|
6,508
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
# mathematical routines are expecting 'array'
x = array([-10, -9, -8, -7, -6, -5, -4, -3, 0]);
y = array([2.65, 2.10, 1.90, 1.40, 1.00, 0.80, 0.60, 0.30, 0.00]);
ey = array([0.1, 0.1, 0.1, 0.1, 0.05, 0.05, 0.05, 0.05, 0.2]);
# Plot the data with error bars
errorbar(x,y,ey,linestyle = '',marker = 'o') # no connecting line, circle
# Donโt forget axes labels
xlabel('x (mm)')
ylabel('y (mm)')
axis([-12,0.5,-0.5,3])
grid(True)
def myfun(x,a,b,c):
ans = -sqrt(a**2-(x-b)**2)+c # this is y, "the function to be fit"
return ans
p0 = [15, 0, 15]
from scipy.optimize import curve_fit # import the curve fitting function
plsq, pcov = curve_fit(myfun, x, y, p0, ey) # curve fit returns p and covariance matrix
# these give the parameters and the uncertainties
print('a = %.3f +/- %.3f' % (plsq[0], sqrt(pcov[0,0])))
print('b = %.3f +/- %.3f' % (plsq[1], sqrt(pcov[1,1])))
print('c = %.3f +/- %.3f' % (plsq[2], sqrt(pcov[2,2])))
xlots = linspace(-11,0.5) # need lots of data points for smooth curve
yfit = myfun(xlots,plsq[0],plsq[1],plsq[2]) # use fit results for a, b, c
errorbar(x,y,ey,linestyle = '',marker = 'o')
xlabel('x (mm)')
ylabel('y (mm)')
plot(xlots,yfit)
title('Least-squares fit to data')
legend(['data','Fit'])
axis([-12,0.5,-0.5,3])
grid(True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fitting function to the data (the functional form is written out after this step list)
Step2: Here are the initial guesses for the parameters $a$, $b$, and $c$ to pass to the fitting function.
Step3: The 'curve_fit' function finds the parameter values 'p' that make the function best fit the data.
Step4: Now we use the fitted parameters in our function to compare with the data.
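For reference, the fitting function myfun above traces the lower half of a circle of radius $a$ centered at $(b, c)$:

$$y = c - \sqrt{a^2 - (x - b)^2} \quad\Longleftrightarrow\quad (x - b)^2 + (y - c)^2 = a^2,\; y \le c$$

which is why the initial guesses amount to a radius and a center position.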
|
6,509
|
<ASSISTANT_TASK:>
Python Code:
# from __future__ import exam_success
from __future__ import absolute_import
from __future__ import print_function
%matplotlib inline
import sklearn
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import random
import pandas as pd
import scipy.stats as stats
# Sk cheats
from sklearn.cross_validation import cross_val_score # cross val
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.preprocessing import Imputer # get rid of nan
%%time
filename = "data/reduced_train_100000.csv"
#filename = "data/reduced_test_100000.csv"
raw = pd.read_csv(filename)
raw = raw.set_index('Id')
raw['Expected'].describe()
# Considering that the gauge may concentrate the rainfall, we set the cap to 1000
# Comment this line to analyse the complete dataset
l = len(raw)
raw = raw[raw['Expected'] < 1000]
print("Dropped %d (%0.2f%%)"%(l-len(raw),(l-len(raw))/float(l)*100))
raw.head(5)
l = float(len(raw["minutes_past"]))
comp = [[1-raw[i].isnull().sum()/l , i] for i in raw.columns]
comp.sort(key=lambda x: x[0], reverse=True)
sns.barplot(zip(*comp)[0],zip(*comp)[1],palette=sns.cubehelix_palette(len(comp), start=.5, rot=-.75))
plt.title("Percentage of non NaN data")
plt.show()
# We select all features except for the minutes past,
# because we ignore the time repartition of the sequence for now
features_columns = list([u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
def getXy(raw):
selected_columns = list([ u'radardist_km', u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
data = raw[selected_columns]
docX, docY = [], []
for i in data.index.unique():
if isinstance(data.loc[i],pd.core.series.Series):
m = [data.loc[i].as_matrix()]
docX.append(m)
docY.append(float(raw.loc[i]["Expected"]))
else:
m = data.loc[i].as_matrix()
docX.append(m)
docY.append(float(raw.loc[i][:1]["Expected"]))
X , y = np.array(docX) , np.array(docY)
return X,y
raw.index.unique()
raw.isnull().sum()
X,y=getXy(raw)
tmp = []
for i in X:
tmp.append(len(i))
tmp = np.array(tmp)
sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
plt.title("Number of ID per number of observations\n(On complete dataset)")
plt.plot()
print("Average gauge observation in mm: %0.2f"%y.mean())
pd.DataFrame(y).describe()
#noAnyNan = raw.loc[raw[features_columns].dropna(how='any').index.unique()]
noAnyNan = raw.dropna()
noAnyNan.isnull().sum()
X,y=getXy(noAnyNan)
tmp = []
for i in X:
tmp.append(len(i))
tmp = np.array(tmp)
sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
plt.title("Number of ID per number of observations\n(On fully filled dataset)")
plt.plot()
print("Average gauge observation in mm: %0.2f"%y.mean())
pd.DataFrame(y).describe()
noFullNan = raw.loc[raw[features_columns].dropna(how='all').index.unique()]
noFullNan.isnull().sum()
X,y=getXy(noFullNan)
tmp = []
for i in X:
tmp.append(len(i))
tmp = np.array(tmp)
sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
plt.title("Number of ID per number of observations\n(On partly filled dataset)")
plt.plot()
print("Average gauge observation in mm: %0.2f"%y.mean())
pd.DataFrame(y).describe()
fullNan = raw.drop(raw[features_columns].dropna(how='all').index)
fullNan.isnull().sum()
X,y=getXy(fullNan)
tmp = []
for i in X:
tmp.append(len(i))
tmp = np.array(tmp)
sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
plt.title("Number of ID per number of observations\n(On fully empty dataset)")
plt.plot()
print("Average gauge observation in mm: %0.2f"%y.mean())
pd.DataFrame(y).describe()
print("%d observations" %(len(raw)))
#print("%d fully filled, %d partly filled, %d fully empty"
# %(len(noAnyNan),len(noFullNan),len(raw)-len(noFullNan)))
print("%0.1f%% fully filled, %0.1f%% partly filled, %0.1f%% fully empty"
%(len(noAnyNan)/float(len(raw))*100,
len(noFullNan)/float(len(raw))*100,
(len(raw)-len(noFullNan))/float(len(raw))*100))
etreg = ExtraTreesRegressor(n_estimators=100, max_depth=None, min_samples_split=1, random_state=0)
X,y=getXy(noAnyNan)
XX = [np.array(t).mean(0) for t in X]
split = 0.2
ps = int(len(XX) * (1-split))
X_train = XX[:ps]
y_train = y[:ps]
X_test = XX[ps:]
y_test = y[ps:]
%%time
etreg.fit(X_train,y_train)
%%time
et_score = cross_val_score(etreg, XX, y, cv=5)
print("Score: %s\tMean: %.03f"%(et_score,et_score.mean()))
err = (etreg.predict(X_test)-y_test)**2
err.sum()/len(err)
r = random.randrange(len(X_train))
print(r)
print(etreg.predict(X_train[r]))
print(y_train[r])
r = random.randrange(len(X_test))
print(r)
print(etreg.predict(X_test[r]))
print(y_test[r])
filename = "data/reduced_test_5000.csv"
test = pd.read_csv(filename)
test = test.set_index('Id')
features_columns = list([u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
def getX(raw):
selected_columns = list([ u'radardist_km', u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
data = raw[selected_columns]
docX= []
for i in data.index.unique():
if isinstance(data.loc[i],pd.core.series.Series):
m = [data.loc[i].as_matrix()]
docX.append(m)
else:
m = data.loc[i].as_matrix()
docX.append(m)
X = np.array(docX)
return X
X=getX(test)
tmp = []
for i in X:
tmp.append(len(i))
tmp = np.array(tmp)
sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
plt.title("Number of ID per number of observations\n(On test dataset)")
plt.plot()
#print("Average gauge observation in mm: %0.2f"%y.mean())
etreg.predict(X_test)
testFull = test.dropna()
X=getX(testFull)
XX = [np.array(t).mean(0) for t in X]
pd.DataFrame(etreg.predict(XX)).describe()
predFull = zip(testFull.index.unique(),etreg.predict(XX))
# Leftover scratch lines (they reference an undefined `a`), kept commented out:
# b = np.empty(len(a))
# b.fill(3.14)
# zip(a, b)
predFull[:10]
testNan = test.drop(test[features_columns].dropna(how='all').index)
tmp = np.empty(len(testNan))
tmp.fill(0.445000) # 50th percentile of full Nan dataset
predNan = zip(testNan.index.unique(),tmp)
predNan[:10]
testLeft = test.drop(testNan.index.unique()).drop(testFull.index.unique())
tmp = np.empty(len(testLeft))
tmp.fill(1.27) # median value used for the remaining (partly filled) IDs
predLeft = zip(testLeft.index.unique(),tmp)
len(testFull.index.unique())
len(testNan.index.unique())
len(testLeft.index.unique())
pred = predFull + predNan + predLeft
pred.sort(key=lambda x: x[0], reverse=False)
submission = pd.DataFrame(pred)
submission.columns = ["Id","Expected"]
submission.head()
submission.to_csv("first_submit.csv",index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 13.765.202 lines in train.csv
Step2: Per wikipedia, a value of more than 421 mm/h is considered "Extreme/large hail"
Step3: Quick analysis for the sparsity by column
Step4: We see that except for the fixed features minutes_past, radardist_km and Expected the dataset is mainly sparse.
Step5: How many observations are there for each ID?
Step6: We see there are many IDs with 6 or 12 observations, i.e. one observation every 10 or 5 minutes on average, respectively.
Step7: Now let's do the analysis on different subsets
Step8: Strangely, we notice that the fewer observations there are, the more it rains on average
Step9: Predictions
Step10:
|
6,510
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from scipy.stats import binom
# Binomial probability mass function
yvals = range(10+1)
plt.plot(yvals, binom.pmf(yvals, 10, 0.5), 'ro')
# Binomial likelhood function
pvals = np.linspace(0, 1)
y = 4
plt.plot(pvals, binom.pmf(y, 10, pvals));
cancer = pd.read_csv('../data/cancer.csv')
cancer
ytotal, ntotal = cancer.sum().astype(float)
p_hat = ytotal/ntotal
p_hat
cancer.y.var()
p_hat*(1.-p_hat)*ntotal
K_x = np.linspace(0, 10)
K_prior = lambda K: 1./(1. + K)**2
plt.plot(K_x, K_prior(K_x))
eta_x = np.linspace(0, 1)
eta_prior = lambda eta: 1./(eta*(1.-eta))
plt.plot(eta_x, eta_prior(eta_x))
from scipy.special import betaln
def betabin_post(params, n, y):
K, eta = params
post = betaln(K*eta + y, K*(1.-eta) + n - y).sum()
post -= len(y)*betaln(K*eta, K*(1.-eta))
post -= np.log(eta*(1.-eta))
post -= 2.*np.log(1.+K)
return post
betabin_post((15000, 0.003), cancer.n, cancer.y)
# Create grid
K_x = np.linspace(1, 20000)
eta_x = np.linspace(0.0001, 0.003)
# Calculate posterior on grid
z = np.array([[betabin_post((K, eta), cancer.n, cancer.y)
for eta in eta_x] for K in K_x])
# Plot posterior
x, y = np.meshgrid(eta_x, K_x)
cplot = plt.contour(x, y, z-z.max(), [-0.5, -1, -2, -3, -4], cmap=plt.cm.RdBu)
plt.ylabel('K');plt.xlabel('$\eta$');
def betabin_trans(theta, n, y):
K = np.exp(theta[0])
eta = 1./(1. + np.exp(-theta[1]))
# Jacobians for transformation
J = theta[0] + theta[1]
return betabin_post((K, eta), n, y) + J
betabin_trans((10, -7.5), cancer.n, cancer.y)
# Create grid
log_K_x = np.linspace(0, 20)
logit_eta_x = np.linspace(-8, -5)
# Calculate posterior on grid
z = np.array([[betabin_trans((t1, t2), cancer.n, cancer.y)
for t2 in logit_eta_x] for t1 in log_K_x])
# Plot posterior
x, y = np.meshgrid(logit_eta_x, log_K_x)
cplot = plt.contour(x, y, z - z.max(), [-0.5, -1, -2, -4, -8], cmap=plt.cm.RdBu)
plt.clabel(cplot, inline=1, fontsize=10, fmt='%1.1f')
plt.ylabel('log(K)');plt.xlabel('logit($\eta$)');
from scipy.optimize import fmin_bfgs
betabin_trans_min = lambda *args: -betabin_trans(*args)
init_value = (10, -7.5)
opt = fmin_bfgs(betabin_trans_min, init_value,
args=(cancer.n, cancer.y), full_output=True)
mode, var = opt[0], opt[3]
mode, var
det = np.linalg.det
inv = np.linalg.inv
def lmvn(value, mu, Sigma):
# Log kernel of multivariate normal
delta = np.array(value) - mu
return -0.5 * (np.log(det(Sigma)) + np.dot(delta.T, np.dot(inv(Sigma), delta)))
z = np.array([[lmvn((t1, t2), mode, var)
for t2 in logit_eta_x] for t1 in log_K_x])
x, y = np.meshgrid(logit_eta_x, log_K_x)
cplot = plt.contour(x, y, z - z.max(), cmap=plt.cm.RdBu)
plt.ylabel('log(K)');plt.xlabel('logit($\eta$)');
from scipy.stats.distributions import norm
se = np.sqrt(np.diag(var))
mode[0] + norm.ppf(0.025)*se[0], mode[0] + norm.ppf(0.975)*se[0]
mode[1] + norm.ppf(0.025)*se[1], mode[1] + norm.ppf(0.975)*se[1]
def rtriangle(low, high, mode):
alpha = -1
# Run until accepted
while np.random.random() > alpha:
u = np.random.uniform(low, high)
if u < mode:
alpha = (u - low) / (mode - low)
else:
alpha = (high - u) / (high - mode)
return(u)
_ = plt.hist([rtriangle(0, 7, 2) for t in range(10000)], bins=100)
chi2 = np.random.chisquare
mvn = np.random.multivariate_normal
# Multivariate t draws: X = mu + Z * sqrt(nu / W), with W ~ chi-squared(nu)
rmvt = lambda nu, S, mu=0, size=1: (np.sqrt(nu) * (mvn(np.zeros(len(S)), S, size).T
/ np.sqrt(chi2(nu, size)))).T + mu
from scipy.special import gammaln
def mvt(x, nu, S, mu=0):
d = len(S)
n = len(x)
X = np.atleast_2d(x) - mu
Q = X.dot(np.linalg.inv(S)).dot(X.T).sum()
log_det = np.log(np.linalg.det(S))
log_pdf = gammaln((nu + d)/2.) - 0.5 * (d*np.log(np.pi*nu) + log_det) - gammaln(nu/2.)
log_pdf -= 0.5*(nu + d)*np.log(1 + Q/nu)
return(np.exp(log_pdf))
def calc_diff(theta, n, y, nu, S, mu):
return betabin_trans(theta, n, y) - np.log(mvt(theta, nu, S, mu))
calc_diff_min = lambda *args: -calc_diff(*args)
opt = fmin_bfgs(calc_diff_min,
(12, -7),
args=(cancer.n, cancer.y, 4, 2*var, mode),
full_output=True)
c = opt[1]
c
def reject(post, nu, S, mu, n, data, c):
k = len(mode)
# Draw samples from g(theta)
theta = rmvt(nu, S, mu, size=n)
# Calculate probability under g(theta)
gvals = np.array([np.log(mvt(t, nu, S, mu)) for t in theta])
# Calculate probability under f(theta)
fvals = np.array([post(t, data.n, data.y) for t in theta])
# Calculate acceptance probability
p = np.exp(fvals - gvals + c)
return theta[np.random.random(n) < p]
nsamples = 1000
sample = reject(betabin_trans, 4, var, mode, nsamples, cancer, c)
z = np.array([[betabin_trans((t1, t2), cancer.n, cancer.y)
for t2 in logit_eta_x] for t1 in log_K_x])
x, y = np.meshgrid(logit_eta_x, log_K_x)
cplot = plt.contour(x, y, z - z.max(), [-0.5, -1, -2, -4, -8], cmap=plt.cm.RdBu)
plt.clabel(cplot, inline=1, fontsize=10, fmt='%1.1f')
plt.ylabel('log(K)');plt.xlabel('logit($\eta$)')
plt.scatter(*sample.T[[1,0]])
float(sample.size)/nsamples
theta = rmvt(4, var, mode, size=1000)
f_theta = np.array([betabin_trans(t, cancer.n, cancer.y) for t in theta])
q_theta = np.array([mvt(t, 4, var, mode) for t in theta])
w = np.exp(f_theta - q_theta - max(f_theta - q_theta))
theta_si = [(w*t).sum()/w.sum() for t in theta.T]
theta_si
se = [np.sqrt((((theta.T[i] - theta_si[i])* w)**2).sum()/w.sum()) for i in (0,1)]
se
p_sir = w/w.sum()
theta_sir = theta[np.random.choice(range(len(theta)), size=10000, p=p_sir)]
fig, axes = plt.subplots(2)
_ = axes[0].hist(theta_sir.T[0], bins=30)
_ = axes[1].hist(theta_sir.T[1], bins=30)
logK_sample = theta_sir[:,0]
logK_sample.sort()
logK_sample[[250, 9750]]
# Write your answer here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bayesian Computation
Step2: If we use a simple binomial model, which assumes independent samples from a binomial distribution with probability of mortality $p$, we can use MLE to obtain an estimate of this probability.
Step3: However, if we compare the variation of $y$ under this model, it is to small relative to the observed variation
Step4: $$Var(y) = np(1-p)$$
Step5: Hence, the data are strongly overdispersed relative to what is predicted under a model with a fixed probability of death. A more realistic model would allow for these probabilities to vary among the cities.
Step6: Now, by multiplying these quantities together, we can obtain a non-normalized posterior.
Step7: An easy (though computationally expensive) way of getting the joint posterior distribution of the parameters is to evaluate betabin_post on a grid of parameter values.
Step8: This is fine, but the precision parameter $K$ is heavily skewed.
Step9: Normal Approximation
Step10: Thus, our approximated mode is $\log(K)=7.6$, $\text{logit}(\eta)=-6.8$. We can plug this value, along with the variance-covariance matrix, into a function that returns the kernel of a multivariate normal distribution, and use this to plot the approximate posterior
Step11: Along with this, we can estimate a 95% probability interval for the estimated mode
Step12: Of course, this approximation is only reasonable for posteriors that are not strongly skewed, bimodal, or leptokurtic (heavy-tailed).
Step13: This approach is useful, for example, in estimating the normalizing constant for posterior distributions.
Step14: Finally, we need an implementation of the multivariate T probability distribution function, which is as follows
Step15: The next step is to find the constant $c$ that ensures the candidate density, suitably scaled, dominates the posterior everywhere
Step16: We can calculate an appropriate value of $c'$ by simply using the approximation method described above on calc_diff (tweaked to produce a negative value for minimization)
Step17: Now we can execute a rejection sampling algorithm
Step18: Notice that the efficiency of rejection sampling is not very high for this problem.
Step19: Rejection sampling is usually subject to declining performance as the dimension of the parameter space increases. Further improvement is gained by using optimized algorithms such as importance sampling which, as the name implies, samples more frequently from important areas of the distribution.
Step20: We can obtain the probability of these values under the posterior density
Step21: and under the T distribution
Step22: This allows us to calculate the importance weights (the weighted estimator these feed into is written out after this step list)
Step23: notice that we have subtracted the maximum value of the differences, which normalizes the weights.
Step24: Finally, the standard error of the estimates
Step25: Sampling Importance Resampling
Step26: The choice function in numpy.random can be used to generate a random sample from an arbitrary 1-D array.
Step27: One advantage of this approach is that one can easily extract a posterior probability interval for each parameter, simply by extracting quantiles from the resampled values.
Step28: Exercise
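For reference, the importance-sampling estimator that the weights in the code implement (theta_si and se above), followed by the resampling step:

$$\hat{E}[h(\theta)] = \frac{\sum_i w_i\,h(\theta_i)}{\sum_i w_i}, \qquad w_i \propto \frac{f(\theta_i)}{g(\theta_i)}$$

Sampling importance resampling then simply redraws the $\theta_i$ with probabilities proportional to $w_i$, as done with np.random.choice above.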
|
6,511
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title Upgrade to TensorFlow 2.1+
!pip install --upgrade tensorflow
#@title Install TF Quant Finance
!pip install tf-quant-finance
#@title Imports
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
tf.compat.v1.enable_eager_execution()
import tf_quant_finance as tff
option_price = tff.black_scholes.option_price
implied_vol = tff.black_scholes.implied_vol
from IPython.core.pylabtools import figsize
figsize(21, 14) # better graph size for Colab
# Calculate discount factors (e^-rT)
rate = 0.05
expiries = np.array([0.5, 1.0, 2.0, 1.3])
discount_factors = np.exp(-rate * expiries)
# Current value of assets.
spots = np.array([0.9, 1.0, 1.1, 0.9])
# Forward value of assets at expiry.
forwards = spots / discount_factors
# Strike prices given by:
strikes = np.array([1.0, 2.0, 1.0, 0.5])
# Indicate whether options are call (True) or put (False)
is_call_options = np.array([True, True, False, False])
# The volatilites at which the options are to be priced.
volatilities = np.array([0.7, 1.1, 2.0, 0.5])
# Calculate the prices given the volatilities and term structure.
prices = option_price(
volatilities=volatilities,
strikes=strikes,
expiries=expiries,
forwards=forwards,
discount_factors=discount_factors,
is_call_options=is_call_options)
prices
# Initial positions for finding implied vol.
initial_volatilities = np.array([2.0, 0.5, 2.0, 0.5])
# Identifier whether the option is call (True) or put (False)
is_call_options = np.array([True, True, False, False])
# Find the implied vols beginning at initial_volatilities.
implied_vols = implied_vol(
prices=prices,
strikes=strikes,
expiries=expiries,
forwards=forwards,
discount_factors=discount_factors,
is_call_options=is_call_options,
initial_volatilities=initial_volatilities,
validate_args=True,
tolerance=1e-9,
max_iterations=200,
name=None,
dtype=None)
implied_vols
#@title Example data on a grid.
def grid_data(strike_vec, vol_vec, dtype=np.float64):
"""Construct dummy data with known ground truth.

For a grid of known strikes by volatilities, return the price.
Assumes the forward prices and expiries are fixed at unity.

Args:
strike_vec: a vector of strike prices from which to form the grid.
vol_vec: a vector of volatilities from which to form the grid.
dtype: a numpy datatype for the element values of returned arrays.

Returns:
(forwards, strikes, expiries, true_volatilities, initials, prices) all of
which are identically shaped numpy arrays.
"""
nstrikes = len(strike_vec)
nvolatilities = len(vol_vec)
vol_ones = np.matrix(np.ones((1, nvolatilities)))
strike_ones = np.matrix(np.ones((nstrikes, 1)))
strikes = np.array(np.matrix(strike_vec).T * vol_ones, dtype=dtype)
volatilities = np.array(strike_ones * np.matrix(vol_vec), dtype=dtype)
expiries = np.ones_like(strikes, dtype=dtype)
forwards = np.ones_like(strikes, dtype=dtype)
initials = np.ones_like(strikes, dtype=dtype)
prices = option_price(volatilities=volatilities,
strikes=strikes,
expiries=expiries,
forwards=forwards,
dtype=tf.float64)
return (forwards, strikes, expiries, volatilities, initials, prices)
# Build a 1000 x 1000 grid of options to find the implied volatilities of.
nstrikes = 1000
nvolatilities = 1000
strike_vec = np.linspace(0.0001, 5.0, nstrikes)
vol_vec = np.linspace(0.0001, 5.0, nvolatilities)
max_iterations = 50
grid = grid_data(strike_vec, vol_vec)
forwards0, strikes0, expiries0, volatilities0, initials0, prices0 = grid
initials0 = discounts0 = signs0 = np.ones_like(prices0)
# Implied volitilities, starting the root finder at 1.
implied_vols_fix = implied_vol(
prices=prices0,
strikes=strikes0,
expiries=expiries0,
forwards=forwards0,
initial_volatilities=initials0,
validate_args=False,
tolerance=1e-8,
max_iterations=max_iterations)
# Implied vols starting the root finder at the Radiocic-Polya approximation.
implied_vols_polya = implied_vol(
prices=prices0,
strikes=strikes0,
expiries=expiries0,
forwards=forwards0,
validate_args=False,
tolerance=1e-8,
max_iterations=max_iterations)
#@title Visualisation of accuracy
plt.clf()
thinner = 100
fig, _axs = plt.subplots(nrows=1, ncols=2)
fig.subplots_adjust(hspace=0.3)
axs = _axs.flatten()
implied_vols = [implied_vols_fix, implied_vols_polya]
titles = ["Fixed initialisation implied vol minus true vol", "Radiocic-Polya initialised implied vol minus true vol"]
vmin = np.min(list(map(np.min, implied_vols)))
vmax = np.max(list(map(np.max, implied_vols)))
images = []
for i in range(2):
_title = axs[i].set_title(titles[i])
_title.set_position([.5, 1.03])
im = axs[i].imshow(implied_vols[i] - volatilities0, origin="lower", interpolation="none", cmap="seismic", vmin=-1.0, vmax=1.0)
images.append(im)
axs[i].set_xticks(np.arange(0, len(vol_vec), thinner))
axs[i].set_yticks(np.arange(0, len(strike_vec), thinner))
axs[i].set_xticklabels(np.round(vol_vec[0:len(vol_vec):thinner], 3))
axs[i].set_yticklabels(np.round(strike_vec[0:len(strike_vec):thinner], 3))
plt.colorbar(im, ax=axs[i], fraction=0.046, pad=0.00)
axs[i].set_ylabel('Strike')
axs[i].set_xlabel('True vol')
plt.show()
pass
# Indices for selecting the middle of the grid.
vol_slice = np.arange(int(0.25*len(vol_vec)), int(0.75*len(vol_vec)))
strike_slice = np.arange(int(0.25*len(strike_vec)), int(0.75*len(strike_vec)))
error_fix = implied_vols_fix.numpy() - volatilities0
error_fix_sub = [error_fix[i, j] for i, j in zip(strike_slice, vol_slice)]
# Calculate the median absolute error in the central portion of the the grid
# for the fixed initialisation.
median_error_fix = np.median( np.abs(error_fix_sub) )
median_error_fix
error_polya = implied_vols_polya.numpy() - volatilities0
error_polya_sub = [error_polya[i, j] for i, j in zip(strike_slice, vol_slice)]
# Calculate the median absolute error in the central portion of the the grid
# for the Radiocic-Polya approximation.
median_error_polya = np.median( np.abs(error_polya_sub) )
median_error_polya
median_error_fix / median_error_polya
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Black Scholes
Step2: Black-Scholes pricing and implied volatility usage (the pricing formula assumed throughout is written out after this step list)
Step3: We now show how to invert the Black Scholes pricing model in order to recover the volatility which generated a given market price under a particular term structure. Again, the implied volatility interface operates on batches of options, with each index of the arrays corresponding to an independent problem to solve.
Step5: Which should show that implied_vols is very close to the volatilities used to generate the market prices. Here we provided initial starting positions; by default, however, tff will choose an adaptive initialisation position, as discussed below.
Step6: Where the grey values represent nans in the grid. Note that the bottom left corner of each image lies outside the bounds where inversion should be possible. The pattern of nan values for different values of a fixed initialisation strategy will be different (rerun the colab to see).
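For reference, the forward form of the Black-Scholes call price that the arguments above (forwards, discount_factors) suggest option_price computes; the formula itself is standard, but attributing this exact parametrization to tf-quant-finance is an inference from the API:

$$C = DF\left[F\,N(d_1) - K\,N(d_2)\right], \qquad d_{1,2} = \frac{\ln(F/K) \pm \tfrac{1}{2}\sigma^2 T}{\sigma\sqrt{T}}$$

with $F$ the forward, $K$ the strike, $DF$ the discount factor, and $N$ the standard normal CDF. Implied volatility inversion then means solving this relation numerically for $\sigma$ given an observed price $C$.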
|
6,512
|
<ASSISTANT_TASK:>
Python Code:
%%bash
sudo pip freeze | grep google-cloud-bigquery==1.6.1 || \
sudo pip install google-cloud-bigquery==1.6.1
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
%%bigquery
CREATE OR REPLACE MODEL
babyweight.model_1
OPTIONS (
MODEL_TYPE="LINEAR_REG",
INPUT_LABEL_COLS=["weight_pounds"],
L2_REG=0.1,
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
ML.FEATURE_CROSS(
STRUCT(
is_male,
plurality)
) AS gender_plurality_cross
FROM
babyweight.babyweight_data_train
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.model_1,
(
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
ML.FEATURE_CROSS(
STRUCT(
is_male,
plurality)
) AS gender_plurality_cross
FROM
babyweight.babyweight_data_eval
))
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL babyweight.model_1,
(
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
ML.FEATURE_CROSS(
STRUCT(
is_male,
plurality)
) AS gender_plurality_cross
FROM
babyweight.babyweight_data_eval
))
%%bigquery
CREATE OR REPLACE MODEL
babyweight.model_2
OPTIONS (
MODEL_TYPE="LINEAR_REG",
INPUT_LABEL_COLS=["weight_pounds"],
L2_REG=0.1,
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
ML.FEATURE_CROSS(
STRUCT(
is_male,
ML.BUCKETIZE(
mother_age,
GENERATE_ARRAY(15, 45, 1)
) AS bucketed_mothers_age,
plurality,
ML.BUCKETIZE(
gestation_weeks,
GENERATE_ARRAY(17, 47, 1)
) AS bucketed_gestation_weeks
)
) AS crossed
FROM
babyweight.babyweight_data_train
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_2)
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.model_2,
(
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
ML.FEATURE_CROSS(
STRUCT(
is_male,
ML.BUCKETIZE(
mother_age,
GENERATE_ARRAY(15, 45, 1)
) AS bucketed_mothers_age,
plurality,
ML.BUCKETIZE(
gestation_weeks,
GENERATE_ARRAY(17, 47, 1)
) AS bucketed_gestation_weeks
)
) AS crossed
FROM
babyweight.babyweight_data_eval))
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL babyweight.model_2,
(
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
ML.FEATURE_CROSS(
STRUCT(
is_male,
ML.BUCKETIZE(
mother_age,
GENERATE_ARRAY(15, 45, 1)
) AS bucketed_mothers_age,
plurality,
ML.BUCKETIZE(
gestation_weeks,
GENERATE_ARRAY(17, 47, 1)
) AS bucketed_gestation_weeks
)
) AS crossed
FROM
babyweight.babyweight_data_eval))
%%bigquery
CREATE OR REPLACE MODEL
babyweight.model_3
TRANSFORM(
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
ML.FEATURE_CROSS(
STRUCT(
is_male,
ML.BUCKETIZE(
mother_age,
GENERATE_ARRAY(15, 45, 1)
) AS bucketed_mothers_age,
plurality,
ML.BUCKETIZE(
gestation_weeks,
GENERATE_ARRAY(17, 47, 1)
) AS bucketed_gestation_weeks
)
) AS crossed
)
OPTIONS (
MODEL_TYPE="LINEAR_REG",
INPUT_LABEL_COLS=["weight_pounds"],
L2_REG=0.1,
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
*
FROM
babyweight.babyweight_data_train
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_3)
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.model_3,
(
SELECT
*
FROM
babyweight.babyweight_data_eval
))
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL babyweight.model_3,
(
SELECT
*
FROM
babyweight.babyweight_data_eval
))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Verify tables exist
Step2: Model 1
Step3: Create two SQL statements to evaluate the model.
Step4: Model 2
Step5: Let's now retrieve the training statistics and evaluate the model.
Step6: We now evaluate our model on our eval dataset
Step7: Let's select the mean_squared_error from the evaluation table we just computed and square it to obtain the rmse.
Step8: Model 3
Step9: Let's retrieve the training statistics
Step10: We now evaluate our model on our eval dataset
Step11: Let's select the mean_squared_error from the evaluation table we just computed and square it to obtain the rmse.
|
6,513
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib
%matplotlib notebook
import matplotlib.pyplot as plt
from matplotlib.patches import Circle
from matplotlib.colors import LogNorm
from copy import copy
import skbeam
from skbeam.core.image import find_ring_center_acorr_1D
from skbeam.core.utils import bin_1D, radial_grid, bin_edges_to_centers
from skbeam.core.calibration import estimate_d_blind, refine_center
import numpy as np
from pims.tiff_stack import TiffStack_tifffile as TiffStack
import os.path
pixel_size = (.2, .2)
def estimate_d(img, name, wavelength, pixel_size):
# Find the approximate ring center via the 1D autocorrelation helper.
res = find_ring_center_acorr_1D(img)
res2 = refine_center(img, res, pixel_size, 25, 5, thresh=0.1, window_size=5)
bins, sums, counts = img_to_relative_xyi(img, res2, radial_grid)
mask = counts > 10
bin_centers = bin_edges_to_centers(bins)[mask]
ring_averages = sums[mask] / counts[mask]
d_mean, d_std = estimate_d_blind(name,wavelength, bin_centers,
ring_averages, 5, 7, thresh=0.03)
return d_mean, d_std, res2
def show_rings_on_image(ax, image, ring_radius, center):
vmin, vmax = np.percentile(image, [80, 100])
my_cmap = copy(matplotlib.cm.get_cmap('gray'))
my_cmap.set_bad('k')
im = ax.imshow(image, cmap=my_cmap, interpolation='none', norm=LogNorm(), vmin=vmin, vmax=vmax)
for r in ring_radius:
c = Circle(center[::-1], r , facecolor='none', edgecolor='r', lw=2, linestyle='dashed')
ax.add_patch(c)
ax.axhline(center[0], color='r')
ax.axvline(center[1], color='r')
ax.set_ylim([center[0] - ring_radius[-1], center[0] + ring_radius[-1]])
ax.set_xlim([center[1] - ring_radius[-1], center[1] + ring_radius[-1]])
si_fname = 'Si_STD_d204-00002.tif'
si_name = 'Si'
si_wavelength = 0.1839
si_data = TiffStack(si_fname)
lab6_fname = 'LaB6_d500-0p72959-2Kx2K_pix200.tif'
lab6_name = 'LaB6'
lab6_wavelength = .72959
lab6_data = TiffStack(lab6_fname)
calib_si = estimate_d(si_data[0], si_name, si_wavelength, pixel_size)
print("D: {} ยฑ {}".format(calib_si[0], calib_si[1]))
print("center: {}".format(calib_si[2]))
cal_si = skbeam.core.calibration.calibration_standards['Si']
si_rings = calib_si[0] * np.tan(cal_si.convert_2theta(si_wavelength)) / .2
fig, ax = plt.subplots()
show_rings_on_image(ax, si_data[0], si_rings, calib_si[2])
plt.show()
calib_lab6 = estimate_d(lab6_data[0], lab6_name, lab6_wavelength, pixel_size)
print("D: {} ยฑ {}".format(calib_lab6[0], calib_lab6[1]))
print("center: {}".format(calib_lab6[2]))
cal_lab6 = skbeam.core.calibration.calibration_standards['LaB6']
lab6_rings = calib_lab6[0] * np.tan(cal_lab6.convert_2theta(lab6_wavelength)) / .2
fig, ax = plt.subplots()
show_rings_on_image(ax, lab6_data[0], lab6_rings, calib_lab6[2])
ax.set_xlim([0, lab6_data.frame_shape[0]])
ax.set_ylim([0, lab6_data.frame_shape[1]])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define some helpful functions
Step2: Setup data for Si standard sample
Step3: Setup data for LaB6 calibration standard
Step4: Calibrate Si data
Step5: Calibrate using Lab 6 data
|
6,514
|
<ASSISTANT_TASK:>
Python Code:
# First, the imports
import os
import datetime as dt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import display
np.random.seed(19760812)
%matplotlib inline
# read file 'mast.txt'
ipath = os.path.join('Datos', 'mast.txt')
def dateparse(date, time):
YY = 2000 + int(date[:2])
MM = int(date[2:4])
DD = int(date[4:])
hh = int(time[:2])
mm = int(time[2:])
return dt.datetime(YY, MM, DD, hh, mm, 0)
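# A minimal sanity check of the custom parser above (illustrative only):
print(dateparse("140101", "0210"))  # -> 2014-01-01 02:10:00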
cols = ['Date', 'time', 'wspd', 'wspd_max', 'wdir',
'x1', 'x2', 'x3', 'x4', 'x5',
'wspd_std']
wind = pd.read_csv(ipath, sep = "\s*", names = cols,
parse_dates = {'Timestamp': [0, 1]}, index_col = 0,
date_parser = dateparse)
# read file 'model.txt'
ipath = os.path.join('Datos', 'model.txt')
model = pd.read_csv(ipath, sep = "\s*", skiprows = 3,
parse_dates = {'Timestamp': [0, 1]}, index_col = 'Timestamp')
for c in ['x1','x2','x3','x4','x5']: # remove unnecesary columns
_ = wind.pop(c)
wind.head(3)
model.head(3)
wind['Timestamp'] = wind.index
print(wind['Timestamp'].diff().min())
del wind['Timestamp']
model['Timestamp'] = model.index
print(model['Timestamp'].diff().min())
del model['Timestamp']
df = pd.DataFrame(np.array([[1, 10], [2, 20], [3,30]]), columns = ['A', 'B'])
df
df.sum()
# The previous example would be similar to
df.sum(axis = 0)
df.sum(axis = 1)
df < 10
(df < 10).all()
(df < 10).all(axis = 'columns') # instead of axis = 1 we use axis = 'columns'
# Test operations of a DataFrame using axis = 0, 1, 'index', 'rows', 'columns'
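# As a quick illustration, 'index' and 'columns' are just aliases for 0 and 1:
df.sum(axis='index')    # same result as df.sum(axis=0)
df.sum(axis='columns')  # same result as df.sum(axis=1)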
new = pd.concat([wind, model], axis = 0, join = 'outer')
new.head(5)
new.tail(5)
new.loc['2014/01/01 00:00':'2014/01/01 02:00']
new = pd.concat([wind, model], axis = 1, join = 'inner')
new.head(5)
new.loc['2014/01/01 00:00':'2014/01/01 02:00']
df1 = pd.DataFrame(np.random.randn(10,2),
columns = ['A', 'B'],
index = np.arange(10))
df2 = pd.DataFrame(np.random.randn(4,3),
columns = ['A', 'B', 'C'],
index = np.arange(8, 12))
df1
df2
new = pd.concat([df1, df2], axis = 0, join = 'inner')
new
new = pd.concat([df1, df2], axis = 1, join = 'inner')
new
wind.append(model)
pd.merge(wind, model, left_index = True, right_index = True, how = 'inner').head(5)
(pd.merge(wind, model, left_index = True, right_index = True, how = 'inner') ==
pd.concat([wind, model], axis = 1, join = 'inner')).all().all()
df1 = pd.DataFrame(
np.array([
np.arange(1, 11),
np.random.choice([1,2,3], size = 10),
np.arange(1, 11) * 10
]).T,
columns = ['A', 'col', 'B']
)
df2 = pd.DataFrame(
np.array([
np.arange(11, 21),
np.random.choice([1,2,3], size = 10),
np.arange(1, 11) * 100
]).T,
columns = ['A', 'col', 'B']
)
display(df1)
display(df2)
pd.merge(df1, df2, on = ['col'])
# Play with it with pd.merge keywords to become more comfortable with it
wind.join(model).head(10)
model.join(wind).head(10)
joinA = wind.join(model, how = 'inner')
joinB = model.join(wind, how = 'inner').loc[:,joinA.columns]
(joinA == joinB).all().all()
wind['month'] = wind.index.month
wind.iloc[[0, 1000, 10000, 30000]]
wind.groupby(by = 'month').mean()
wind.groupby(by = [wind.index.year, 'month']).mean()
del wind['month']
# Play grouping
# (obtain daily mean wind speed,
# mean wind speed on Tuesdays when wind direction is between 300ยบ and 360ยบ,...)
grouped = wind.groupby(by=wind.index.month)
import inspect
info = inspect.getmembers(grouped, predicate=inspect.ismethod)
for stuff in info:
print(stuff[0])
grouped
grouped.ngroups
grouped.groups.keys()
grouped.get_group(2)
wind['year'] = wind.index.year
wind['month'] = wind.index.month
tmp = wind.groupby(by = ['year', 'month']).mean()
del wind['year']
del wind['month']
tmp
tmp['year'] = tmp.index.get_level_values(0)
tmp['month'] = tmp.index.get_level_values(1)
tmp
tmp.pivot(index = 'year', columns = 'month', values='wspd')
# Get the yearly mean wind speed
# starting from tmp.pivot(index = 'year', columns = 'month', values='wspd')
tmp = wind.groupby(by = [wind.index.year, wind.index.month])
tmp = tmp.agg({'wspd': np.mean, 'wspd_max': 'max'})
tmp.reset_index(inplace = True)
tmp
tmp.pivot(index = 'level_1', columns = 'level_0')
tmp.pivot(index = 'level_1', columns = 'level_0').columns
from collections import OrderedDict
table = OrderedDict((
("Item", ['Item0', 'Item0', 'Item0', 'Item1']),
('CType',['Gold', 'Bronze', 'Gold', 'Silver']),
('USD', ['1$', '2$', '3$', '4$']),
    ('EU', ['1€', '2€', '3€', '4€'])
))
df = pd.DataFrame(table)
df
pivoted = df.pivot(index='Item', columns='CType', values='USD')
table = OrderedDict((
("Item", ['Item0', 'Item0', 'Item0', 'Item1']),
('CType',['Gold', 'Bronze', 'Gold', 'Silver']),
('USD', [1, 2, 3, 4]),
('EU', [1.1, 2.2, 3.3, 4.4])
))
df = pd.DataFrame(table)
pivoted = df.pivot_table(index='Item', columns='CType', values='USD', aggfunc=np.min)
pivoted
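# The same call with a different aggregator (a sketch; the duplicated
# Item0/Gold entries are now summed instead of reduced with np.min):
df.pivot_table(index='Item', columns='CType', values='USD', aggfunc=np.sum)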
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We read a file of wind data
Step2: We read a second file of simulated environmental data
Step3: We have data with a time frequency of 10 minutes (wind) vs the second file's data (model) with a time frequency of 1 hour.
Step4: If we don't use the axis keyword explicitly, by default, operations are over rows (axis = 0), i.e., it uses all the elements of a column
Step5: If we want to obtain the result on each row, i.e., all the elements of all columns in a row, we should add axis = 1
Step6: Another example
Step7: I hope you have a clear idea now about how it works.
Step8:
Step9: concat allows us to 'merge' pandas data structures using rows and columns.
Step10: In general, I use this last option with different column names as it is what I want to do...
Step11: In general, this is not what I want to do. What I want to do is a merge with some logics and to do so we could use pd.merge...
Step12: Imagine we want to merge two DataFrames using columns
Step13: Combining using the join method
Step14: Grouping
Step15: Let's see what groupby returns
Step16: pandas.core.groupby.DataFrameGroupBy is like a dict with superpowers!!!
Step17: Pivoting using several columns
Step18: What happens if we combine repeated indexes? For instance
Step19: (source
Step20: pivot_table to the rescue to solve the previous error
|
6,515
|
<ASSISTANT_TASK:>
Python Code:
def black_box_function(x, y):
"""Function with unknown internals we wish to maximize.

This is just serving as an example; for all intents and
purposes, think of the internals of this function, i.e. the process
which generates its output values, as unknown.
"""
return -x ** 2 - (y - 1) ** 2 + 1
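# Quick check of the objective (illustrative): the analytic maximum sits at
# (x, y) = (0, 1) with value 1, so within the bounds used below the best
# reachable point is (2, 1).
print(black_box_function(2, 1))  # -3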
from bayes_opt import BayesianOptimization
# Bounded region of parameter space
pbounds = {'x': (2, 4), 'y': (-3, 3)}
optimizer = BayesianOptimization(
f=black_box_function,
pbounds=pbounds,
verbose=2, # verbose = 1 prints only when a maximum is observed, verbose = 0 is silent
random_state=1,
)
optimizer.maximize(
init_points=2,
n_iter=3,
)
print(optimizer.max)
for i, res in enumerate(optimizer.res):
print("Iteration {}: \n\t{}".format(i, res))
optimizer.set_bounds(new_bounds={"x": (-2, 3)})
optimizer.maximize(
init_points=0,
n_iter=5,
)
optimizer.probe(
params={"x": 0.5, "y": 0.7},
lazy=True,
)
print(optimizer.space.keys)
optimizer.probe(
params=[-0.3, 0.1],
lazy=True,
)
optimizer.maximize(init_points=0, n_iter=0)
from bayes_opt.observer import JSONLogger
from bayes_opt.event import Events
logger = JSONLogger(path="./logs.json")
optimizer.subscribe(Events.OPTMIZATION_STEP, logger)
optimizer.maximize(
init_points=2,
n_iter=3,
)
from bayes_opt.util import load_logs
new_optimizer = BayesianOptimization(
f=black_box_function,
pbounds={"x": (-2, 2), "y": (-2, 2)},
verbose=2,
random_state=7,
)
print(len(new_optimizer.space))
load_logs(new_optimizer, logs=["./logs.json"]);
print("New optimizer is now aware of {} points.".format(len(new_optimizer.space)))
new_optimizer.maximize(
init_points=0,
n_iter=10,
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic tour of the Bayesian Optimization package
Step2: 2. Getting Started
Step3: The BayesianOptimization object will work out of the box without much tuning needed. The main method you should be aware of is maximize, which does exactly what you think it does.
Step4: The best combination of parameters and target value found can be accessed via the property bo.max.
Step5: While the list of all parameters probed and their corresponding target values is available via the property bo.res.
Step6: 2.1 Changing bounds
Step7: 3. Guiding the optimization
Step8: Or as an iterable. Beware that the order has to be alphabetical. You can use optimizer.space.keys for guidance
Step9: 4. Saving, loading and restarting
Step10: The observer paradigm works by
Step11: 4.2 Loading progress
|
6,516
|
<ASSISTANT_TASK:>
Python Code:
import time
import numpy as np
import tensorflow as tf
import utils
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
## Your code here
# My solution did not use Counter and was thus extremely slow. Here it is:
# def freq(word,corpus):
# return corpus.count(word)/len(corpus)
#
# def prob(word,freqs,th):
# return 1-np.sqrt(th/freqs[word])
#
# freqs = {word:freq(word,int_words) for word in int_words}
# p_drop = {word: prob(word,freqs,th) for word in int_words}
# train_words = {w for w in int_words if p_drop[w]>np.random.rand()}
from collections import Counter
import random
th = 1e-5  # subsampling threshold (assumed value; the commented-out solution above relies on the same `th`)
word_counts = Counter(int_words)  # dict-like with word -> count
total_count = len(int_words)
freqs = {word: count / total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(th / freqs[word]) for word in word_counts}
train_words = [word for word in int_words if p_drop[word] < random.random()]
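# Sanity check (illustrative): subsampling should drop a large share of the
# most frequent words, so the training corpus shrinks noticeably.
print("before: {}, after: {}".format(len(int_words), len(train_words)))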
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
# Your code here
# get random number in the range (1,window_size) - this will be the number of words we'll take
R=random.randint(1,window_size)
# what about warping arond ? do we want to allow it ?
start = max(idx-R,0)
stop = min(idx+R+1,len(words))
return words[start:idx]+words[idx+1:stop]
# note that the reference solution used np.random.randint
# note that the reference solution returned list(set(words[start:idx]+words[idx+1:stop])). not clear why the set() is needed...
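# A tiny illustration (output varies because the window size is random):
# with window_size=2 this returns between 2 and 4 of the words around index 5.
print(get_target(list(range(10)), idx=5, window_size=2))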
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
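# Peek at a single batch to check that inputs and targets line up
# one-to-one (each input word is repeated once per target word):
bx, by = next(get_batches(train_words, 8, window_size=3))
print(len(bx) == len(by))  # True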
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(dtype=tf.int32,shape=[None], name='inputs')
labels = tf.placeholder(dtype=tf.int32,shape=[None,None],name='labels') # ??? To make things work later, you'll need to set the second dimension of labels to None or 1.
n_vocab = len(int_to_vocab)
n_embedding = 200
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab,n_embedding),-1,1))
embed = tf.nn.embedding_lookup(embedding,inputs)
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab,n_embedding),stddev=0.1))
softmax_b = tf.Variable(tf.zeros(n_vocab))
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w,softmax_b,labels,embed,n_sampled,n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
with train_graph.as_default(): # Question : why do we need this context ?
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Step3: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
Step4: Subsampling
Step5: Making batches
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
Step7: Building the graph
Step8: Embedding
Step9: Negative sampling
Step10: Validation
Step11: Training
Step12: Restore the trained network if you need to
Step13: Visualizing the word vectors
|
6,517
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
import pandas as pd
import numpy as np
CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth', 'Species']
SPECIES = ['Setosa', 'Versicolor', 'Virginica']
train_path = tf.keras.utils.get_file(
"iris_training.csv", "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv")
test_path = tf.keras.utils.get_file(
"iris_test.csv", "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv")
train = pd.read_csv(train_path, names=CSV_COLUMN_NAMES, header=0)
test = pd.read_csv(test_path, names=CSV_COLUMN_NAMES, header=0)
train.head()
train_y = train.pop('Species')
test_y = test.pop('Species')
# The label column has now been removed from the features.
train.head()
def input_evaluation_set():
features = {'SepalLength': np.array([6.4, 5.0]),
'SepalWidth': np.array([2.8, 2.3]),
'PetalLength': np.array([5.6, 3.3]),
'PetalWidth': np.array([2.2, 1.0])}
labels = np.array([2, 1])
return features, labels
def input_fn(features, labels, training=True, batch_size=256):
"""An input function for training or evaluating."""
# Convert the inputs to a Dataset.
dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))
# Shuffle and repeat if you are in training mode.
if training:
dataset = dataset.shuffle(1000).repeat()
return dataset.batch(batch_size)
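# A quick look at one batch the input function yields (illustrative):
for features_batch, labels_batch in input_fn(train, train_y).take(1):
    print(list(features_batch.keys()), labels_batch.shape)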
# Feature columns describe how to use the input.
my_feature_columns = []
for key in train.keys():
my_feature_columns.append(tf.feature_column.numeric_column(key=key))
# Build a DNN with 2 hidden layers with 30 and 10 hidden nodes each.
classifier = tf.estimator.DNNClassifier(
feature_columns=my_feature_columns,
# Two hidden layers of 30 and 10 nodes respectively.
hidden_units=[30, 10],
# The model must choose between 3 classes.
n_classes=3)
# Train the Model.
classifier.train(
input_fn=lambda: input_fn(train, train_y, training=True),
steps=5000)
eval_result = classifier.evaluate(
input_fn=lambda: input_fn(test, test_y, training=False))
print('\nTest set accuracy: {accuracy:0.3f}\n'.format(**eval_result))
# Generate predictions from the model
expected = ['Setosa', 'Versicolor', 'Virginica']
predict_x = {
'SepalLength': [5.1, 5.9, 6.9],
'SepalWidth': [3.3, 3.0, 3.1],
'PetalLength': [1.7, 4.2, 5.4],
'PetalWidth': [0.5, 1.5, 2.1],
}
def input_fn(features, batch_size=256):
"""An input function for prediction."""
# Convert the inputs to a Dataset without labels.
return tf.data.Dataset.from_tensor_slices(dict(features)).batch(batch_size)
predictions = classifier.predict(
input_fn=lambda: input_fn(predict_x))
for pred_dict, expec in zip(predictions, expected):
class_id = pred_dict['class_ids'][0]
probability = pred_dict['probabilities'][class_id]
print('Prediction is "{}" ({:.1f}%), expected "{}"'.format(
SPECIES[class_id], 100 * probability, expec))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Premade Estimators
Step2: The dataset
Step3: Next, download and parse the Iris dataset using Keras and Pandas. Keep separate datasets for training and testing.
Step4: Inspect the data to confirm that there are four float feature columns and one int32 label.
Step5: Split off the label that the model will be trained to predict for each dataset.
Step6: Overview of programming with Estimators
Step8: In the input function you can generate the features dict and label list any way you like, but it is better to use TensorFlow's Dataset API, which can parse all kinds of data.
Step9: Defining feature columns
Step10: Feature columns can be far more sophisticated than those shown here; you can read more about them in this guide.
Step11: Train, evaluate, and predict
Step12: Wrap the input_fn call in a lambda to capture the arguments while providing an input function that takes no arguments, as expected by the Estimator. The steps argument tells the method to stop training after a number of training steps.
Step14: Unlike the call to the train method, we did not pass a steps argument to evaluate. The input_fn for eval only yields a single epoch of data.
Step15: The predict method returns a Python iterable, yielding a dictionary of prediction results for each example. The following code prints a few predictions and their probabilities.
|
6,518
|
<ASSISTANT_TASK:>
Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""Split character data into training and validation sets, inputs and targets for each set.

Arguments
---------
chars: character array
batch_size: Size of examples in each batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set

Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
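# Quick shape check on the first training batch (given the split above,
# each yielded array is batch_size x num_steps, i.e. (10, 200)):
bx, by = next(get_batch([train_x, train_y], 200))
print(bx.shape, by.shape)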
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
with tf.name_scope('inputs'):
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
with tf.name_scope('targets'):
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
with tf.name_scope("RNN_layers"):
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size, state_is_tuple=False)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers, state_is_tuple=False)
with tf.name_scope("RNN_init_state"):
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
with tf.name_scope("RNN_forward"):
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
with tf.name_scope('sequence_reshape'):
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN outputs to a softmax layer and calculate the cost
with tf.name_scope('logits'):
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
with tf.name_scope('predictions'):
preds = tf.nn.softmax(logits, name='predictions')
with tf.name_scope('cost'):
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
# Optimizer for training, using gradient clipping to control exploding gradients
with tf.name_scope('train'):
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
file_writer = tf.summary.FileWriter('./logs/3', sess.graph)
!mkdir -p checkpoints/anna
epochs = 10
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
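# Illustration: with top_n=2 only the two most probable characters survive,
# so this always samples index 2 or 3.
print(pick_top_n(np.array([[0.1, 0.2, 0.3, 0.4]]), vocab_size=4, top_n=2))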
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
prime = "Far"
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we'll load the text file and convert it into integers for our network to use.
Step3: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Step4: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window to the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch. This is a generator function by the way, which helps save memory.
Step5: Hyperparameters
Step6: Write out the graph for TensorBoard
Step7: Training
Step8: Sampling
|
6,519
|
<ASSISTANT_TASK:>
Python Code:
from dolfin import *
from rbnics import *
class UnsteadyThermalBlock(ParabolicCoerciveProblem):
# Default initialization of members
def __init__(self, V, **kwargs):
# Call the standard initialization
ParabolicCoerciveProblem.__init__(self, V, **kwargs)
# ... and also store FEniCS data structures for assembly
assert "subdomains" in kwargs
assert "boundaries" in kwargs
self.subdomains, self.boundaries = kwargs["subdomains"], kwargs["boundaries"]
self.u = TrialFunction(V)
self.v = TestFunction(V)
self.dx = Measure("dx")(subdomain_data=self.subdomains)
self.ds = Measure("ds")(subdomain_data=self.boundaries)
# Return custom problem name
def name(self):
return "UnsteadyThermalBlock1RB"
# Return the alpha_lower bound.
def get_stability_factor_lower_bound(self):
return min(self.compute_theta("a"))
# Return theta multiplicative terms of the affine expansion of the problem.
def compute_theta(self, term):
mu = self.mu
if term == "m":
theta_m0 = 1.
return (theta_m0, )
elif term == "a":
theta_a0 = mu[0]
theta_a1 = 1.
return (theta_a0, theta_a1)
elif term == "f":
theta_f0 = mu[1]
return (theta_f0,)
else:
raise ValueError("Invalid term for compute_theta().")
# Return forms resulting from the discretization of the affine expansion of the problem operators.
def assemble_operator(self, term):
v = self.v
dx = self.dx
if term == "m":
u = self.u
m0 = u * v * dx
return (m0, )
elif term == "a":
u = self.u
a0 = inner(grad(u), grad(v)) * dx(1)
a1 = inner(grad(u), grad(v)) * dx(2)
return (a0, a1)
elif term == "f":
ds = self.ds
f0 = v * ds(1)
return (f0,)
elif term == "dirichlet_bc":
bc0 = [DirichletBC(self.V, Constant(0.0), self.boundaries, 3)]
return (bc0,)
elif term == "inner_product":
u = self.u
x0 = inner(grad(u), grad(v)) * dx
return (x0,)
elif term == "projection_inner_product":
u = self.u
x0 = u * v * dx
return (x0,)
else:
raise ValueError("Invalid term for assemble_operator().")
mesh = Mesh("data/thermal_block.xml")
subdomains = MeshFunction("size_t", mesh, "data/thermal_block_physical_region.xml")
boundaries = MeshFunction("size_t", mesh, "data/thermal_block_facet_region.xml")
V = FunctionSpace(mesh, "Lagrange", 1)
problem = UnsteadyThermalBlock(V, subdomains=subdomains, boundaries=boundaries)
mu_range = [(0.1, 10.0), (-1.0, 1.0)]
problem.set_mu_range(mu_range)
problem.set_time_step_size(0.05)
problem.set_final_time(3)
reduction_method = ReducedBasis(problem)
reduction_method.set_Nmax(20, POD_Greedy=4)
reduction_method.set_tolerance(1e-5, POD_Greedy=1e-2)
reduction_method.initialize_training_set(100)
reduced_problem = reduction_method.offline()
online_mu = (8.0, -1.0)
reduced_problem.set_mu(online_mu)
reduced_solution = reduced_problem.solve()
plot(reduced_solution, reduced_problem=reduced_problem, every=5, interval=500)
reduction_method.initialize_testing_set(10)
reduction_method.error_analysis()
reduction_method.initialize_testing_set(10)
reduction_method.speedup_analysis()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 3. Affine decomposition (a worked sketch of the expansion follows after this list)
Step2: 4. Main program
Step3: 4.2. Create Finite Element space (Lagrange P1, two components)
Step4: 4.3. Allocate an object of the UnsteadyThermalBlock class
Step5: 4.4. Prepare reduction with a reduced basis method
Step6: 4.5. Perform the offline phase
Step7: 4.6. Perform an online solve
Step8: 4.7. Perform an error analysis
Step9: 4.8. Perform a speedup analysis
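As a worked sketch of the affine decomposition referenced in Step1, read directly off compute_theta and assemble_operator above (subdomain and boundary labels follow the mesh functions loaded in the main program):

$$a(u, v; \mu) = \underbrace{\mu_0}_{\theta_a^0(\mu)} \int_{\Omega_1} \nabla u \cdot \nabla v \, dx + \underbrace{1}_{\theta_a^1(\mu)} \int_{\Omega_2} \nabla u \cdot \nabla v \, dx, \qquad f(v; \mu) = \underbrace{\mu_1}_{\theta_f^0(\mu)} \int_{\Gamma_1} v \, ds, \qquad m(u, v) = \int_{\Omega} u \, v \, dx.$$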
|
6,520
|
<ASSISTANT_TASK:>
Python Code:
from tensorflow.keras import layers
import tensorflow_addons as tfa
from tensorflow import keras
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import random
# Setting seeds for reproducibility.
SEED = 42
keras.utils.set_random_seed(SEED)
# DATA
BUFFER_SIZE = 1024
BATCH_SIZE = 256
AUTO = tf.data.AUTOTUNE
INPUT_SHAPE = (32, 32, 3)
NUM_CLASSES = 10
# OPTIMIZER
LEARNING_RATE = 5e-3
WEIGHT_DECAY = 1e-4
# PRETRAINING
EPOCHS = 100
# AUGMENTATION
IMAGE_SIZE = 48 # We will resize input images to this size.
PATCH_SIZE = 6 # Size of the patches to be extracted from the input images.
NUM_PATCHES = (IMAGE_SIZE // PATCH_SIZE) ** 2
MASK_PROPORTION = 0.75 # We have found 75% masking to give us the best results.
# ENCODER and DECODER
LAYER_NORM_EPS = 1e-6
ENC_PROJECTION_DIM = 128
DEC_PROJECTION_DIM = 64
ENC_NUM_HEADS = 4
ENC_LAYERS = 6
DEC_NUM_HEADS = 4
DEC_LAYERS = (
2 # The decoder is lightweight but should be reasonably deep for reconstruction.
)
ENC_TRANSFORMER_UNITS = [
ENC_PROJECTION_DIM * 2,
ENC_PROJECTION_DIM,
] # Size of the transformer layers.
DEC_TRANSFORMER_UNITS = [
DEC_PROJECTION_DIM * 2,
DEC_PROJECTION_DIM,
]
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
(x_train, y_train), (x_val, y_val) = (
(x_train[:40000], y_train[:40000]),
(x_train[40000:], y_train[40000:]),
)
print(f"Training samples: {len(x_train)}")
print(f"Validation samples: {len(x_val)}")
print(f"Testing samples: {len(x_test)}")
train_ds = tf.data.Dataset.from_tensor_slices(x_train)
train_ds = train_ds.shuffle(BUFFER_SIZE).batch(BATCH_SIZE).prefetch(AUTO)
val_ds = tf.data.Dataset.from_tensor_slices(x_val)
val_ds = val_ds.batch(BATCH_SIZE).prefetch(AUTO)
test_ds = tf.data.Dataset.from_tensor_slices(x_test)
test_ds = test_ds.batch(BATCH_SIZE).prefetch(AUTO)
def get_train_augmentation_model():
model = keras.Sequential(
[
layers.Rescaling(1 / 255.0),
layers.Resizing(INPUT_SHAPE[0] + 20, INPUT_SHAPE[0] + 20),
layers.RandomCrop(IMAGE_SIZE, IMAGE_SIZE),
layers.RandomFlip("horizontal"),
],
name="train_data_augmentation",
)
return model
def get_test_augmentation_model():
model = keras.Sequential(
[layers.Rescaling(1 / 255.0), layers.Resizing(IMAGE_SIZE, IMAGE_SIZE),],
name="test_data_augmentation",
)
return model
class Patches(layers.Layer):
def __init__(self, patch_size=PATCH_SIZE, **kwargs):
super().__init__(**kwargs)
self.patch_size = patch_size
# Assuming the image has three channels each patch would be
# of size (patch_size, patch_size, 3).
self.resize = layers.Reshape((-1, patch_size * patch_size * 3))
def call(self, images):
# Create patches from the input images
patches = tf.image.extract_patches(
images=images,
sizes=[1, self.patch_size, self.patch_size, 1],
strides=[1, self.patch_size, self.patch_size, 1],
rates=[1, 1, 1, 1],
padding="VALID",
)
# Reshape the patches to (batch, num_patches, patch_area) and return it.
patches = self.resize(patches)
return patches
def show_patched_image(self, images, patches):
# This is a utility function which accepts a batch of images and its
# corresponding patches and helps visualize one image and its patches
# side by side.
idx = np.random.choice(patches.shape[0])
print(f"Index selected: {idx}.")
plt.figure(figsize=(4, 4))
plt.imshow(keras.utils.array_to_img(images[idx]))
plt.axis("off")
plt.show()
n = int(np.sqrt(patches.shape[1]))
plt.figure(figsize=(4, 4))
for i, patch in enumerate(patches[idx]):
ax = plt.subplot(n, n, i + 1)
patch_img = tf.reshape(patch, (self.patch_size, self.patch_size, 3))
plt.imshow(keras.utils.img_to_array(patch_img))
plt.axis("off")
plt.show()
# Return the index chosen to validate it outside the method.
return idx
# taken from https://stackoverflow.com/a/58082878/10319735
def reconstruct_from_patch(self, patch):
# This utility function takes patches from a *single* image and
# reconstructs it back into the image. This is useful for the train
# monitor callback.
num_patches = patch.shape[0]
n = int(np.sqrt(num_patches))
patch = tf.reshape(patch, (num_patches, self.patch_size, self.patch_size, 3))
rows = tf.split(patch, n, axis=0)
rows = [tf.concat(tf.unstack(x), axis=1) for x in rows]
reconstructed = tf.concat(rows, axis=0)
return reconstructed
# Get a batch of images.
image_batch = next(iter(train_ds))
# Augment the images.
augmentation_model = get_train_augmentation_model()
augmented_images = augmentation_model(image_batch)
# Define the patch layer.
patch_layer = Patches()
# Get the patches from the batched images.
patches = patch_layer(images=augmented_images)
# Now pass the images and the corresponding patches
# to the `show_patched_image` method.
random_index = patch_layer.show_patched_image(images=augmented_images, patches=patches)
# Choose the same image and try reconstructing the patches
# back into the original image.
image = patch_layer.reconstruct_from_patch(patches[random_index])
plt.imshow(image)
plt.axis("off")
plt.show()
class PatchEncoder(layers.Layer):
def __init__(
self,
patch_size=PATCH_SIZE,
projection_dim=ENC_PROJECTION_DIM,
mask_proportion=MASK_PROPORTION,
downstream=False,
**kwargs,
):
super().__init__(**kwargs)
self.patch_size = patch_size
self.projection_dim = projection_dim
self.mask_proportion = mask_proportion
self.downstream = downstream
# This is a trainable mask token initialized randomly from a normal
# distribution.
self.mask_token = tf.Variable(
tf.random.normal([1, patch_size * patch_size * 3]), trainable=True
)
def build(self, input_shape):
(_, self.num_patches, self.patch_area) = input_shape
# Create the projection layer for the patches.
self.projection = layers.Dense(units=self.projection_dim)
# Create the positional embedding layer.
self.position_embedding = layers.Embedding(
input_dim=self.num_patches, output_dim=self.projection_dim
)
# Number of patches that will be masked.
self.num_mask = int(self.mask_proportion * self.num_patches)
def call(self, patches):
# Get the positional embeddings.
batch_size = tf.shape(patches)[0]
positions = tf.range(start=0, limit=self.num_patches, delta=1)
pos_embeddings = self.position_embedding(positions[tf.newaxis, ...])
pos_embeddings = tf.tile(
pos_embeddings, [batch_size, 1, 1]
) # (B, num_patches, projection_dim)
# Embed the patches.
patch_embeddings = (
self.projection(patches) + pos_embeddings
) # (B, num_patches, projection_dim)
if self.downstream:
return patch_embeddings
else:
mask_indices, unmask_indices = self.get_random_indices(batch_size)
# The encoder input is the unmasked patch embeddings. Here we gather
# all the patches that should be unmasked.
unmasked_embeddings = tf.gather(
patch_embeddings, unmask_indices, axis=1, batch_dims=1
) # (B, unmask_numbers, projection_dim)
# Get the unmasked and masked position embeddings. We will need them
# for the decoder.
unmasked_positions = tf.gather(
pos_embeddings, unmask_indices, axis=1, batch_dims=1
) # (B, unmask_numbers, projection_dim)
masked_positions = tf.gather(
pos_embeddings, mask_indices, axis=1, batch_dims=1
) # (B, mask_numbers, projection_dim)
# Repeat the mask token number of mask times.
# Mask tokens replace the masks of the image.
mask_tokens = tf.repeat(self.mask_token, repeats=self.num_mask, axis=0)
mask_tokens = tf.repeat(
mask_tokens[tf.newaxis, ...], repeats=batch_size, axis=0
)
# Get the masked embeddings for the tokens.
masked_embeddings = self.projection(mask_tokens) + masked_positions
return (
unmasked_embeddings, # Input to the encoder.
masked_embeddings, # First part of input to the decoder.
unmasked_positions, # Added to the encoder outputs.
mask_indices, # The indices that were masked.
unmask_indices,  # The indices that were unmasked.
)
def get_random_indices(self, batch_size):
# Create random indices from a uniform distribution and then split
# it into mask and unmask indices.
rand_indices = tf.argsort(
tf.random.uniform(shape=(batch_size, self.num_patches)), axis=-1
)
mask_indices = rand_indices[:, : self.num_mask]
unmask_indices = rand_indices[:, self.num_mask :]
return mask_indices, unmask_indices
def generate_masked_image(self, patches, unmask_indices):
# Choose a random patch and its corresponding unmask index.
idx = np.random.choice(patches.shape[0])
patch = patches[idx]
unmask_index = unmask_indices[idx]
# Build a numpy array of same shape as patch.
new_patch = np.zeros_like(patch)
# Iterate over new_patch and plug in the unmasked patches.
for i in range(unmask_index.shape[0]):
new_patch[unmask_index[i]] = patch[unmask_index[i]]
return new_patch, idx
# Create the patch encoder layer.
patch_encoder = PatchEncoder()
# Get the embeddings and positions.
(
unmasked_embeddings,
masked_embeddings,
unmasked_positions,
mask_indices,
unmask_indices,
) = patch_encoder(patches=patches)
# Show a masked patch image.
new_patch, random_index = patch_encoder.generate_masked_image(patches, unmask_indices)
plt.figure(figsize=(10, 10))
plt.subplot(1, 2, 1)
img = patch_layer.reconstruct_from_patch(new_patch)
plt.imshow(keras.utils.array_to_img(img))
plt.axis("off")
plt.title("Masked")
plt.subplot(1, 2, 2)
img = augmented_images[random_index]
plt.imshow(keras.utils.array_to_img(img))
plt.axis("off")
plt.title("Original")
plt.show()
def mlp(x, dropout_rate, hidden_units):
for units in hidden_units:
x = layers.Dense(units, activation=tf.nn.gelu)(x)
x = layers.Dropout(dropout_rate)(x)
return x
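# Shape sanity check for the MLP block (a sketch): the batch dimension is
# preserved and the output width equals the last entry of hidden_units.
print(mlp(tf.ones((2, 16)), dropout_rate=0.1, hidden_units=[32, 8]).shape)  # (2, 8)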
def create_encoder(num_heads=ENC_NUM_HEADS, num_layers=ENC_LAYERS):
inputs = layers.Input((None, ENC_PROJECTION_DIM))
x = inputs
for _ in range(num_layers):
# Layer normalization 1.
x1 = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(x)
# Create a multi-head attention layer.
attention_output = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=ENC_PROJECTION_DIM, dropout=0.1
)(x1, x1)
# Skip connection 1.
x2 = layers.Add()([attention_output, x])
# Layer normalization 2.
x3 = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(x2)
# MLP.
x3 = mlp(x3, hidden_units=ENC_TRANSFORMER_UNITS, dropout_rate=0.1)
# Skip connection 2.
x = layers.Add()([x3, x2])
outputs = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(x)
return keras.Model(inputs, outputs, name="mae_encoder")
def create_decoder(
num_layers=DEC_LAYERS, num_heads=DEC_NUM_HEADS, image_size=IMAGE_SIZE
):
inputs = layers.Input((NUM_PATCHES, ENC_PROJECTION_DIM))
x = layers.Dense(DEC_PROJECTION_DIM)(inputs)
for _ in range(num_layers):
# Layer normalization 1.
x1 = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(x)
# Create a multi-head attention layer.
attention_output = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=DEC_PROJECTION_DIM, dropout=0.1
)(x1, x1)
# Skip connection 1.
x2 = layers.Add()([attention_output, x])
# Layer normalization 2.
x3 = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(x2)
# MLP.
x3 = mlp(x3, hidden_units=DEC_TRANSFORMER_UNITS, dropout_rate=0.1)
# Skip connection 2.
x = layers.Add()([x3, x2])
x = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(x)
x = layers.Flatten()(x)
pre_final = layers.Dense(units=image_size * image_size * 3, activation="sigmoid")(x)
outputs = layers.Reshape((image_size, image_size, 3))(pre_final)
return keras.Model(inputs, outputs, name="mae_decoder")
class MaskedAutoencoder(keras.Model):
def __init__(
self,
train_augmentation_model,
test_augmentation_model,
patch_layer,
patch_encoder,
encoder,
decoder,
**kwargs,
):
super().__init__(**kwargs)
self.train_augmentation_model = train_augmentation_model
self.test_augmentation_model = test_augmentation_model
self.patch_layer = patch_layer
self.patch_encoder = patch_encoder
self.encoder = encoder
self.decoder = decoder
def calculate_loss(self, images, test=False):
# Augment the input images.
if test:
augmented_images = self.test_augmentation_model(images)
else:
augmented_images = self.train_augmentation_model(images)
# Patch the augmented images.
patches = self.patch_layer(augmented_images)
# Encode the patches.
(
unmasked_embeddings,
masked_embeddings,
unmasked_positions,
mask_indices,
unmask_indices,
) = self.patch_encoder(patches)
# Pass the unmasked patches to the encoder.
encoder_outputs = self.encoder(unmasked_embeddings)
# Create the decoder inputs.
encoder_outputs = encoder_outputs + unmasked_positions
decoder_inputs = tf.concat([encoder_outputs, masked_embeddings], axis=1)
# Decode the inputs.
decoder_outputs = self.decoder(decoder_inputs)
decoder_patches = self.patch_layer(decoder_outputs)
loss_patch = tf.gather(patches, mask_indices, axis=1, batch_dims=1)
loss_output = tf.gather(decoder_patches, mask_indices, axis=1, batch_dims=1)
# Compute the total loss.
total_loss = self.compiled_loss(loss_patch, loss_output)
return total_loss, loss_patch, loss_output
def train_step(self, images):
with tf.GradientTape() as tape:
total_loss, loss_patch, loss_output = self.calculate_loss(images)
# Apply gradients.
train_vars = [
self.train_augmentation_model.trainable_variables,
self.patch_layer.trainable_variables,
self.patch_encoder.trainable_variables,
self.encoder.trainable_variables,
self.decoder.trainable_variables,
]
grads = tape.gradient(total_loss, train_vars)
tv_list = []
for (grad, var) in zip(grads, train_vars):
for g, v in zip(grad, var):
tv_list.append((g, v))
self.optimizer.apply_gradients(tv_list)
# Report progress.
self.compiled_metrics.update_state(loss_patch, loss_output)
return {m.name: m.result() for m in self.metrics}
def test_step(self, images):
total_loss, loss_patch, loss_output = self.calculate_loss(images, test=True)
# Update the trackers.
self.compiled_metrics.update_state(loss_patch, loss_output)
return {m.name: m.result() for m in self.metrics}
train_augmentation_model = get_train_augmentation_model()
test_augmentation_model = get_test_augmentation_model()
patch_layer = Patches()
patch_encoder = PatchEncoder()
encoder = create_encoder()
decoder = create_decoder()
mae_model = MaskedAutoencoder(
train_augmentation_model=train_augmentation_model,
test_augmentation_model=test_augmentation_model,
patch_layer=patch_layer,
patch_encoder=patch_encoder,
encoder=encoder,
decoder=decoder,
)
# Taking a batch of test inputs to measure model's progress.
test_images = next(iter(test_ds))
class TrainMonitor(keras.callbacks.Callback):
def __init__(self, epoch_interval=None):
self.epoch_interval = epoch_interval
def on_epoch_end(self, epoch, logs=None):
if self.epoch_interval and epoch % self.epoch_interval == 0:
test_augmented_images = self.model.test_augmentation_model(test_images)
test_patches = self.model.patch_layer(test_augmented_images)
(
test_unmasked_embeddings,
test_masked_embeddings,
test_unmasked_positions,
test_mask_indices,
test_unmask_indices,
) = self.model.patch_encoder(test_patches)
test_encoder_outputs = self.model.encoder(test_unmasked_embeddings)
test_encoder_outputs = test_encoder_outputs + test_unmasked_positions
test_decoder_inputs = tf.concat(
[test_encoder_outputs, test_masked_embeddings], axis=1
)
test_decoder_outputs = self.model.decoder(test_decoder_inputs)
# Show a masked patch image.
test_masked_patch, idx = self.model.patch_encoder.generate_masked_image(
test_patches, test_unmask_indices
)
print(f"\nIdx chosen: {idx}")
original_image = test_augmented_images[idx]
masked_image = self.model.patch_layer.reconstruct_from_patch(
test_masked_patch
)
reconstructed_image = test_decoder_outputs[idx]
fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(15, 5))
ax[0].imshow(original_image)
ax[0].set_title(f"Original: {epoch:03d}")
ax[1].imshow(masked_image)
ax[1].set_title(f"Masked: {epoch:03d}")
ax[2].imshow(reconstructed_image)
ax[2].set_title(f"Reconstructed: {epoch:03d}")
plt.show()
plt.close()
# Some code is taken from:
# https://www.kaggle.com/ashusma/training-rfcx-tensorflow-tpu-effnet-b2.
class WarmUpCosine(keras.optimizers.schedules.LearningRateSchedule):
def __init__(
self, learning_rate_base, total_steps, warmup_learning_rate, warmup_steps
):
super(WarmUpCosine, self).__init__()
self.learning_rate_base = learning_rate_base
self.total_steps = total_steps
self.warmup_learning_rate = warmup_learning_rate
self.warmup_steps = warmup_steps
self.pi = tf.constant(np.pi)
def __call__(self, step):
if self.total_steps < self.warmup_steps:
raise ValueError("Total_steps must be larger or equal to warmup_steps.")
cos_annealed_lr = tf.cos(
self.pi
* (tf.cast(step, tf.float32) - self.warmup_steps)
/ float(self.total_steps - self.warmup_steps)
)
learning_rate = 0.5 * self.learning_rate_base * (1 + cos_annealed_lr)
if self.warmup_steps > 0:
if self.learning_rate_base < self.warmup_learning_rate:
raise ValueError(
"Learning_rate_base must be larger or equal to "
"warmup_learning_rate."
)
slope = (
self.learning_rate_base - self.warmup_learning_rate
) / self.warmup_steps
warmup_rate = slope * tf.cast(step, tf.float32) + self.warmup_learning_rate
learning_rate = tf.where(
step < self.warmup_steps, warmup_rate, learning_rate
)
return tf.where(
step > self.total_steps, 0.0, learning_rate, name="learning_rate"
)
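# In closed form, past warmup the schedule above evaluates to
#   lr(s) = 0.5 * learning_rate_base * (1 + cos(pi * (s - warmup_steps) / (total_steps - warmup_steps)))
# with a linear ramp from warmup_learning_rate up to learning_rate_base over the
# first warmup_steps steps, and 0 after total_steps.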
total_steps = int((len(x_train) / BATCH_SIZE) * EPOCHS)
warmup_epoch_percentage = 0.15
warmup_steps = int(total_steps * warmup_epoch_percentage)
scheduled_lrs = WarmUpCosine(
learning_rate_base=LEARNING_RATE,
total_steps=total_steps,
warmup_learning_rate=0.0,
warmup_steps=warmup_steps,
)
lrs = [scheduled_lrs(step) for step in range(total_steps)]
plt.plot(lrs)
plt.xlabel("Step", fontsize=14)
plt.ylabel("LR", fontsize=14)
plt.show()
# Assemble the callbacks.
train_callbacks = [TrainMonitor(epoch_interval=5)]
optimizer = tfa.optimizers.AdamW(learning_rate=scheduled_lrs, weight_decay=WEIGHT_DECAY)
# Compile and pretrain the model.
mae_model.compile(
optimizer=optimizer, loss=keras.losses.MeanSquaredError(), metrics=["mae"]
)
history = mae_model.fit(
train_ds, epochs=EPOCHS, validation_data=val_ds, callbacks=train_callbacks,
)
# Measure its performance.
loss, mae = mae_model.evaluate(test_ds)
print(f"Loss: {loss:.2f}")
print(f"MAE: {mae:.2f}")
# Extract the augmentation layers.
train_augmentation_model = mae_model.train_augmentation_model
test_augmentation_model = mae_model.test_augmentation_model
# Extract the patchers.
patch_layer = mae_model.patch_layer
patch_encoder = mae_model.patch_encoder
patch_encoder.downstream = True  # Switch the downstream flag to True.
# Extract the encoder.
encoder = mae_model.encoder
# Pack as a model.
downstream_model = keras.Sequential(
[
layers.Input((IMAGE_SIZE, IMAGE_SIZE, 3)),
patch_layer,
patch_encoder,
encoder,
layers.BatchNormalization(), # Refer to A.1 (Linear probing).
layers.GlobalAveragePooling1D(),
layers.Dense(NUM_CLASSES, activation="softmax"),
],
name="linear_probe_model",
)
# Only the final classification layer of the `downstream_model` should be trainable.
for layer in downstream_model.layers[:-1]:
layer.trainable = False
downstream_model.summary()
def prepare_data(images, labels, is_train=True):
if is_train:
augmentation_model = train_augmentation_model
else:
augmentation_model = test_augmentation_model
dataset = tf.data.Dataset.from_tensor_slices((images, labels))
if is_train:
dataset = dataset.shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE).map(
lambda x, y: (augmentation_model(x), y), num_parallel_calls=AUTO
)
return dataset.prefetch(AUTO)
train_ds = prepare_data(x_train, y_train)
val_ds = prepare_data(x_train, y_train, is_train=False)
test_ds = prepare_data(x_test, y_test, is_train=False)
linear_probe_epochs = 50
linear_prob_lr = 0.1
warm_epoch_percentage = 0.1
steps = int((len(x_train) // BATCH_SIZE) * linear_probe_epochs)
warmup_steps = int(steps * warm_epoch_percentage)
scheduled_lrs = WarmUpCosine(
learning_rate_base=linear_prob_lr,
total_steps=steps,
warmup_learning_rate=0.0,
warmup_steps=warmup_steps,
)
optimizer = keras.optimizers.SGD(learning_rate=scheduled_lrs, momentum=0.9)
downstream_model.compile(
optimizer=optimizer, loss="sparse_categorical_crossentropy", metrics=["accuracy"]
)
downstream_model.fit(train_ds, validation_data=val_ds, epochs=linear_probe_epochs)
loss, accuracy = downstream_model.evaluate(test_ds)
accuracy = round(accuracy * 100, 2)
print(f"Accuracy on the test set: {accuracy}%.")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hyperparameters for pretraining
Step2: Load and prepare the CIFAR-10 dataset
Step3: Data augmentation
Step4: A layer for extracting patches from images
Step5: Let's visualize the image patches.
Step6: Patch encoding with masking
Step7: Let's see the masking process in action on a sample image.
Step8: MLP
Step9: MAE encoder
Step10: MAE decoder
Step11: MAE trainer
Step12: Model initialization
Step13: Training callbacks
Step14: Learning rate scheduler
Step15: Model compilation and training
Step16: Evaluation with linear probing
Step17: We are using average pooling to extract learned representations from the MAE encoder.
Step18: Perform linear probing
|
6,521
|
<ASSISTANT_TASK:>
Python Code:
import cashflows as cf
0.2 / (1 - 0.2)
0.36 / (1 + 0.36)
cf.iconv(nrate = 6.72, pyr = 2) ## Bank 1
cf.iconv(nrate = 6.70, pyr = 4) ## Bank 2 -- best option
cf.iconv(nrate = 6.65, pyr = 12) ## Bank 3
## Another way
cf.iconv(nrate = [6.72, 6.79, 6.65], pyr = [2, 4, 12])
erate, _ = cf.iconv(nrate = 12.0, pyr = 2) ## effective rate per year
erate
nrate, _ = cf.iconv(erate = erate, pyr = 12) ## nominal compounded monthly
nrate
## effective monthly rate
0.12 / 12
## effective quarterly rate
erate, _ = cf.iconv(nrate = 3 * 0.12 / 12, pyr = 3)
erate
## effective annual rate
erate, _ = cf.iconv(nrate = 12.0, pyr = 12)
erate
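# Cross-check without the library (standard compound-interest identity, shown
# here as an illustrative sanity check): effective = (1 + nominal/m)**m - 1
# for a nominal rate compounded m times per year.
(1 + 0.12 / 12) ** 12 - 1 ## ~0.126825, consistent with the ~12.68% effective rate above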
cf.nominal_rate(const_value=10, start=(2000, 0), nper=8, pyr=4)
cf.nominal_rate(const_value=10, start=(2000, 0), nper=8, pyr=6)
spec = ((2000, 3), 10)
cf.nominal_rate(const_value=1, start=(2000, 0), nper=8, pyr=4, spec=spec)
spec = [(3, 10), (6, 20)]
cf.nominal_rate(const_value=1, start=(2000, 0), nper=8, pyr=4, spec=spec)
cf.nominal_rate(const_value=[10, 20]*10, pyr=4)
cf.nominal_rate(const_value = 3,
start = (2000, 0),
nper = 48,
pyr = 12,
spec= [(12, 4), # rate for year 2
(24, 5), # rate for year 3
(36, 6)]) # rate for year 4
x = cf.nominal_rate(const_value = 3,
start = (2000, 0),
nper = 48,
pyr = 12,
spec= [(12, 4), # rate for year 2
(24, 5), # rate for year 3
(36, 6)]) # rate for year 4
x[5] = 100
x
cf.cashflow(const_value=1, # constant value
start=(2000, 0), # (major period, minor period)
nper=8, # total number of periods
pyr=4) # number of periods per year
## a point value can be entered as a tuple
spec = ((2000, 3), 10) # ((major, minor), value)
cf.cashflow(const_value=1, start=(2000, 0), nper=8, pyr=4, spec=spec)
cf.cashflow(const_value=1, start=(2000, 0), nper=8, pyr=4, spec=((2000, 3), 10))
spec = [((2000, 3), 10), ((2001, 3), 10)]
cf.cashflow(const_value=1, start=(2000, 0), nper=8, pyr=4, spec=spec)
spec = (3, 10)
cf.cashflow(const_value=1, start=(2000, 0), nper=8, pyr=4, spec=spec)
cf.cashflow(const_value=1, start=(2000, 0), nper=8, pyr=4, spec=(3, 10))
spec = [(3, 10), (7, 10)]
cf.cashflow(const_value=1, start=(2000, 0), nper=8, pyr=4, spec=spec)
cf.cashflow(const_value=[10]*10, pyr=4)
cf.cashflow(const_value=[-10]*4)
## a cash flow is an object that can be stored
## in a variable for later use
x = cf.cashflow(const_value=[0, 1, 2, 3], pyr=4)
x[3] = 10
x
## individual values for each time period can be
## accessed and modified using []
x[3]
x[(0, 3)] = 0
x
x[(0,2)]
abs(cf.cashflow(const_value=[-10]*4, pyr=4))
cf.cashflow(const_value=[1]*4, pyr=4) + cf.cashflow(const_value=[2]*4, pyr=4)
cf.cashflow(const_value=[6]*4, pyr=4) // cf.cashflow(const_value=[4]*4, pyr=4)
x = cf.cashflow( const_value=[2]*4, pyr=4)
x += cf.cashflow( const_value=[3]*4, pyr=4)
x
x = cf.cashflow( const_value=[6]*4, pyr=4)
x //= cf.cashflow( const_value=[4]*4, pyr=4)
x
x = cf.cashflow( const_value=[2]*4, pyr=4)
x *= cf.cashflow( const_value=[3]*4, pyr=4)
x
x = cf.cashflow( const_value=[6]*4, pyr=4)
x -= cf.cashflow( const_value=[4]*4, pyr=4)
x
cf.cashflow( const_value=[2]*4, pyr=4) * cf.cashflow( const_value=[3]*4, pyr=4)
cf.cashflow( const_value=[6]*4, pyr=4) - cf.cashflow( const_value=[4]*4, pyr=4)
cf.cashflow( const_value=[6]*4, pyr=4).tolist()
cflo = cf.cashflow(const_value=[-10, 5, 0, 20] * 3, pyr=4)
cf.cfloplot(cflo)
cf.cashflow(const_value=[0, 1, 2, 2, 4, 5, 6, 7, 8])
## for 5 <= t < 10 the value is $100, and 0 otherwise
cf.cashflow(const_value=0, nper=15, pyr=1, spec=[(t,100) for t in range(5,10)])
## a stepped cash flow
a = [(t, 100) for t in range( 1, 5)]
b = [(t, 150) for t in range( 6, 10)]
c = [(t, 200) for t in range(11, 13)]
cf.cashflow(const_value=0, nper=20, pyr=1, spec=a + b + c)
## flow with a geometric gradient (5% increase per period)
cf.cashflow(const_value=0, nper=20, pyr=1, spec=[(t, 100 * 1.05 ** (t-5)) for t in range(5,10)])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Interest rate conversion
Step2: Example.-- If you want an effective annual rate of 36%, how much must be charged annually in advance to obtain it?
Step3: Nominal and effective interest
Step4: Example.-- Convert a 12% annual rate compounded semiannually to an annual rate compounded monthly.
Step5: Example.-- Given a nominal interest rate of 12% compounded monthly, compute
Step6: Nomenclature
Step7: Example.-- A 48-month loan is taken out. The initial rate is 3% and it increases by one point each year. Represent the interest rate.
Step8: Representing generic cash flows
Step9: In some cases it is necessary to introduce more complex flow patterns.
|
6,522
|
<ASSISTANT_TASK:>
Python Code:
# Load our 3 images using rasterio
import rasterio
img1 = rasterio.open('data/20170721_175836_103c/20170721_175836_103c_3B_Visual.tif')
img2 = rasterio.open('data/20170721_175837_103c/20170721_175837_103c_3B_Visual.tif')
img3 = rasterio.open('data/20170721_175838_103c/20170721_175838_103c_3B_Visual.tif')
print(img1.meta['dtype'], img1.meta['crs'], img1.meta['count'])
print(img2.meta['dtype'], img2.meta['crs'], img2.meta['count'])
print(img3.meta['dtype'], img3.meta['crs'], img3.meta['count'])
# Read in color interpretations of each band in img1 - here we'll assume img2 and img3 have the same values
colors = [img1.colorinterp[band] for band in range(img1.count)]
# take a look at img1's band types:
for color in colors:
print(color.name)
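# Illustrative check of the 4th band (assumption: for these "Visual" assets it
# is a binary alpha mask, so its values should only be 0 (nodata) or 255 (valid)).
alpha = img1.read(4)
print(alpha.min(), alpha.max())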
from rasterio.merge import merge
# merge returns the mosaic & coordinate transformation information
(mosaic, transform) = merge([img1, img2, img3])
from rasterio.plot import show
show(mosaic)
# Grab a copy of our source metadata, using img1
meta = img1.meta
# Update the original metadata to reflect the specifics of our new mosaic
meta.update({"transform": transform,
"height":mosaic.shape[1],
"width":mosaic.shape[2]})
with rasterio.open('data/mosaic.tif', 'w', **meta) as dst:
dst.write(mosaic)
# use rasterio's sister-library for working with vector data
import fiona
# use fiona to open our original AOI GeoJSON
with fiona.open('data/mt-dana-small.geojson') as mt:
aoi = [feature["geometry"] for feature in mt]
# transform AOI to match mosaic CRS
from fiona.transform import transform_geom
transformed_coords = transform_geom('EPSG:4326', 'EPSG:32611', aoi[0])
aoi = [transformed_coords]
# import rasterio's mask tool
from rasterio.mask import mask
# apply mask with crop=True to cut to boundary
with rasterio.open('data/mosaic.tif') as mosaic:
clipped, transform = mask(mosaic, aoi, crop=True)
# See the results!
show(clipped)
# save the output to a final GeoTIFF
# use the metadata from our original mosaic
meta = mosaic.meta.copy()
# update metadata with new, clipped mosaic's boundaries
meta.update({"transform": transform,
"height":clipped.shape[1],
"width":clipped.shape[2]})
# write the output to a GeoTIFF
with rasterio.open('data/clipped_mosaic.tif', 'w', **meta) as dst:
dst.write(clipped)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step 1. Acquiring Imagery
Step 2. Inspecting Imagery
Step1: At this point we can use rasterio to inspect the metadata of these three images. Specifically, in order to create a composite from these images, we want to verify that all three images have the same data type, the same coordinate reference systems and the same band count
Step2: Success - they do! But wait, I thought we were using a "Visual" image, and expecting only 3 bands of information (RGB)?
Step3: The fourth channel is actually a binary alpha mask
Step4: Once that process is complete, take a moment to congratulate yourself. At this stage you've successfully acquired adjacent imagery, inspected metadata, and performed a compositing process in order to generate a new mosaic. Well done!
Step5: At this point we're ready to write our mosaic out to a new GeoTIFF file. To do this, we'll want to grab the geospatial metadata from one of our original images (again, here we'll use img1 to represent the metadata of all 3 input images).
Step6: Step 4. Clip the Mosaic to AOI Boundaries
Step7: At this stage you have read in the AOI geometry and transformed its coordinates to match the mosaic. We're now ready to use rasterio.mask.mask to create a mask over our mosaic, using the AOI geometry as the mask line.
Step8: Congratulations! You've created a clipped mosaic, showing only the imagery that falls within our area of interest.
|
6,523
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import expon
from numpy import arange
import scipy.stats as stats
#Abrir o arquivo
df = pd.read_csv('earthquake.csv')
#listar colunas
print(list(df))
df.head()
df.loc[(df.Latitude >=0), "Hemisfério"] = "Norte"
df.loc[(df.Latitude <0), "Hemisfério"] = "Sul"
df.head()
df.Magnitude.describe()
f = plt.figure(figsize=(11,5))
faixas = arange(5,9,0.65)
plot = df.Magnitude.plot.hist(bins=faixas, title="Magnitude Histogram", normed=1, alpha=0.9, color="g")
plt.xlabel("Magnitude")
plt.ylabel("Density")
plt.show()
mu = df.Magnitude.mean()
dp = df.Magnitude.std()
fig = plt.figure(figsize=(11, 5))
plot = df.Magnitude.plot.hist(bins=faixas, title='Magnitude Histogram', normed=1, alpha=0.9, color='r')
a = sorted(df.Magnitude)
plt.plot(a, stats.norm.pdf(a, loc = mu, scale = dp))
plt.title('Histogram vs PDF')
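# Step 4 of the exercise asks for an exponential fit; a minimal sketch with
# scipy's `expon` (imported above). For the exponential, `loc` shifts the
# distribution's origin and `scale` is its mean (1/lambda).
loc, scale = expon.fit(df.Magnitude)
print(loc, scale)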
ct = pd.crosstab(df.Hemisfério, df.Type, margins=True, normalize=True)
ct
probNorte = ct.Earthquake.Norte/ct.Earthquake.All
print(probNorte)
probNuclear = ct["Nuclear Explosion"]["Norte"]/ct.All.Norte
print(probNuclear)
plt.scatter(x = df['Magnitude Error'],
y = df['Depth'])
plt.show()
df["Depth"].corr(df["Magnitude Error"])
Lat = df["Latitude"].describe()
Long = df["Longitude"].describe()
print(Lat,Long)
df.boxplot(column = ["Latitude","Longitude"])
plt.show()
df.loc[(df.Type=="Nuclear Explosion")&(df["Magnitude Type"]=="MB")&(df["Hemisfรฉrio"]=="Sul"),"Hemis"]="Sul"
df.loc[(df.Type=="Nuclear Explosion")&(df["Magnitude Type"]=="MB")&(df["Hemisfรฉrio"]=="Norte"),"Hemis"]="Norte"
sul=df["Hemis"].value_counts("Sul")
sul
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: List the first rows of the DataFrame
Step2: Q1 - Manipulating the DataFrame
Step3: Q2 - Fit and Histogram
Step4: Fit an exponential distribution to the Magnitude data, finding the values of loc and scale. Interpret loc and scale in the exponential case.
Step5: Q3 - Cross tabulation
Step6: Q3.1 - What is the probability of an earthquake occurring in the northern hemisphere?
Step7: Explain your reasoning
Step8: If you can obtain the answer without computing it, enter the answer below
Step9: Compute the correlation between the Magnitude Error and Depth variables
Step10: Explain what the correlation value computed above means
Step11: Q6 - Drawing conclusions based on the data
|
6,524
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo(id="BYOK12I9vgI", width="100%")
from nams import load_data as cf
G = cf.load_crime_network()
for n, d in G.nodes(data=True):
G.nodes[n]["degree"] = G.degree(n)
import nxviz as nv
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(7, 7))
nv.circos(G, sort_by="degree", group_by="bipartite", node_color_by="bipartite", node_enc_kwargs={"size_scale": 3})
import networkx as nx
def extract_partition_nodes(G: nx.Graph, partition: str):
nodeset = [_ for _, _ in _______ if ____________]
if _____________:
raise Exception(f"No nodes exist in the partition {partition}!")
return nodeset
from nams.solutions.bipartite import extract_partition_nodes
# Uncomment the next line to see the answer.
# extract_partition_nodes??
from nams.solutions.bipartite import draw_bipartite_graph_example, bipartite_example_graph
from nxviz import annotate
import matplotlib.pyplot as plt
bG = bipartite_example_graph()
pG = nx.bipartite.projection.projected_graph(bG, "abcd")
ax = draw_bipartite_graph_example()
plt.sca(ax[0])
annotate.parallel_labels(bG, group_by="bipartite")
plt.sca(ax[1])
annotate.arc_labels(pG)
from networkx.algorithms import bipartite
bipartite.is_bipartite(G)
person_nodes = extract_partition_nodes(G, "person")
crime_nodes = extract_partition_nodes(G, "crime")
person_graph = bipartite.projected_graph(G, person_nodes)
crime_graph = bipartite.projected_graph(G, crime_nodes)
list(person_graph.edges(data=True))[0:5]
list(crime_graph.edges(data=True))[0:5]
import pandas as pd
def find_most_similar_crimes(cG: nx.Graph):
"""Find the crimes that are most similar to other crimes."""
dcs = ______________
return ___________________
from nams.solutions.bipartite import find_most_similar_crimes
find_most_similar_crimes(crime_graph)
def find_most_similar_people(pG: nx.Graph):
"""Find the persons that are most similar to other persons."""
dcs = ______________
return ___________________
from nams.solutions.bipartite import find_most_similar_people
find_most_similar_people(person_graph)
weighted_person_graph = bipartite.weighted_projected_graph(G, person_nodes)
list(weighted_person_graph.edges(data=True))[0:5]
list(G.neighbors('p1'))
def find_connected_persons(G, person, crime):
# Step 0: Check that the given "person" and "crime" are connected.
if _____________________________:
raise ValueError(f"Graph does not have a connection between {person} and {crime}!")
# Step 1: calculate weighted projection for person nodes.
person_nodes = ____________________________________
person_graph = bipartite.________________________(_, ____________)
# Step 2: Find neighbors of the given `person` node in projected graph.
candidate_neighbors = ___________________________________
# Step 3: Remove candidate neighbors from the set if they are implicated in the given crime.
for p in G.neighbors(crime):
if ________________________:
_____________________________
# Step 4: Rank-order the candidate neighbors by number of shared connections.
_________ = []
## You might need a for-loop here
return pd.DataFrame(__________).sort_values("________", ascending=False)
from nams.solutions.bipartite import find_connected_persons
find_connected_persons(G, 'p2', 'c10')
from nams.solutions.bipartite import bipartite_degree_centrality_denominator
from nams.functions import render_html
render_html(bipartite_degree_centrality_denominator())
def find_most_crime_person(G, person_nodes):
dcs = __________________________
return ___________________________
from nams.solutions.bipartite import find_most_crime_person
find_most_crime_person(G, person_nodes)
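# Cross-check (illustrative, beyond the original exercise): networkx ships a
# bipartite-aware degree centrality whose denominator is the size of the
# opposite node set, as discussed above.
bipartite_dc = bipartite.degree_centrality(G, person_nodes)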
from nams.solutions import bipartite
import inspect
print(inspect.getsource(bipartite))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this chapter, we will look at bipartite graphs and their applications.
Step2: If you inspect the nodes,
Step3: Exercise
Step4: Bipartite Graph Projections
Step5: As shown in the figure above, we start first with a bipartite graph with two node sets,
Step6: Now that we've confirmed that the graph is indeed bipartite,
Step7: Next, we can compute the projection
Step8: And with that, we have our projected graphs!
Step10: Now, what is the interpretation of these projected graphs?
Step12: Exercise
Step13: Weighted Projection
Step14: Exercise
Step15: Degree Centrality
Step16: Exercise
Step17: Solutions
|
6,525
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import display
from IPython.display import (
display_pretty, display_html, display_jpeg,
display_png, display_json, display_latex, display_svg
)
from IPython.display import Image
i = Image(filename='../images/ipython_logo.png')
i
display(i)
Image(url='http://python.org/images/python-logo.gif')
from IPython.display import SVG
SVG(filename='../images/python_logo.svg')
from IPython.display import Image
img_url = 'http://www.lawrencehallofscience.org/static/scienceview/scienceview.berkeley.edu/html/view/view_assets/images/newview.jpg'
# by default Image data are embedded
Embed = Image(img_url)
# if kwarg `url` is given, the embedding is assumed to be false
SoftLinked = Image(url=img_url)
# In each case, embed can be specified explicitly with the `embed` kwarg
# ForceEmbed = Image(url=img_url, embed=True)
Embed
SoftLinked
from IPython.display import HTML
s = """<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>"""
h = HTML(s)
display(h)
%%html
<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>
!pip install Tabipy
from tabipy import Table, TableHeaderRow, TableCell
t = Table(TableHeaderRow('divisions', 'result'))
num = 55
for x in range(7):
if num < 1:
resultcell = TableCell(num, bg_colour='DarkBlue', text_colour='white')
else:
resultcell = TableCell(num)
t.append_row((x, resultcell))
num /= 3
t
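# The same display machinery also accepts raw payloads: publish a MIME bundle
# directly with display(..., raw=True). (Minimal illustrative sketch.)
display({'text/html': '<b>bold via a raw MIME bundle</b>'}, raw=True)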
from IPython.display import Javascript
js = Javascript('alert("hi")');
display(js)
%%javascript
alert("hi");
Javascript("$.getScript('//cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')")
%%html
<style type="text/css">
circle {
fill: rgb(31, 119, 180);
fill-opacity: .25;
stroke: rgb(31, 119, 180);
stroke-width: 1px;
}
.leaf circle {
fill: #ff7f0e;
fill-opacity: 1;
}
text {
font: 10px sans-serif;
}
</style>
%%javascript
// element is the jQuery element we will append to
var e = element.get(0);
var diameter = 600,
format = d3.format(",d");
var pack = d3.layout.pack()
.size([diameter - 4, diameter - 4])
.value(function(d) { return d.size; });
var svg = d3.select(e).append("svg")
.attr("width", diameter)
.attr("height", diameter)
.append("g")
.attr("transform", "translate(2,2)");
d3.json("data/flare.json", function(error, root) {
var node = svg.datum(root).selectAll(".node")
.data(pack.nodes)
.enter().append("g")
.attr("class", function(d) { return d.children ? "node" : "leaf node"; })
.attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; });
node.append("title")
.text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); });
node.append("circle")
.attr("r", function(d) { return d.r; });
node.filter(function(d) { return !d.children; }).append("text")
.attr("dy", ".3em")
.style("text-anchor", "middle")
.text(function(d) { return d.name.substring(0, d.r / 3); });
});
d3.select(self.frameElement).style("height", diameter + "px");
from IPython.display import Math
Math(r'F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k} dx')
from IPython.display import Latex
Latex(r"""\begin{eqnarray}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{eqnarray}""")
%%latex
\begin{align}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{align}
from IPython.display import Audio
Audio(url="http://www.nch.com.au/acm/8k16bitpcm.wav")
import numpy as np
max_time = 3
f1 = 220.0
f2 = 224.0
rate = 8000.0
L = 3
times = np.linspace(0,L,rate*L)
signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times)
Audio(data=signal, rate=rate)
from IPython.display import YouTubeVideo
YouTubeVideo('sjfsUzECqK0')
from IPython.display import HTML
from base64 import b64encode
video = open("../images/animation.m4v", "rb").read()
video_encoded = b64encode(video).decode('ascii')
video_tag = '<video controls alt="test" src="data:video/x-m4v;base64,{0}">'.format(video_encoded)
HTML(data=video_tag)
from IPython.display import IFrame
IFrame('http://ipython.org', width='100%', height=350)
from IPython.display import FileLink, FileLinks
FileLink('Cell Magics.ipynb')
FileLinks('.', recursive=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A few points
Step2: Images
Step3: Returning an Image object from an expression will automatically display it
Step4: Or you can pass an object with a rich representation to display
Step5: An image can also be displayed from raw data or a URL.
Step6: SVG images are also supported out of the box.
Step7: Embedded vs non-embedded Images
Step8: Here is the embedded version. Note that this image was pulled from the webcam when this code cell was originally run and stored in the Notebook. Unless we rerun this cell, this is not todays image.
Step9: Here is today's image from same webcam at Berkeley, (refreshed every minutes, if you reload the notebook), visible only with an active internet connection, that should be different from the previous one. Notebooks saved with this kind of image will be smaller and always reflect the current version of the source, but the image won't display offline.
Step11: Of course, if you re-run this Notebook, the two images will be the same again.
Step12: You can also use the %%html cell magic to accomplish the same thing.
Step13: The display machinery can also be used by libraries. Tabipy is a library for constructing tables
Step14: JavaScript
Step15: Pass a string of JavaScript source code to the JavaScript object and then display it.
Step16: The same thing can be accomplished using the %%javascript cell magic
Step18: Here is a more complicated example that loads d3.js from a CDN, uses the %%html magic to load CSS styles onto the page and then runs ones of the d3.js examples.
Step19: LaTeX
Step21: With the Latex class, you have to include the delimiters yourself. This allows you to use other LaTeX modes such as eqnarray
Step22: Or you can enter LaTeX directly with the %%latex cell magic
Step23: Audio
Step24: A NumPy array can be auralized automatically. The Audio class normalizes and encodes the data and embeds the resulting audio in the Notebook.
Step25: Video
Step26: Using the nascent video capabilities of modern browsers, you may also be able to display local
Step27: External sites
Step28: Links to local files
Step29: Alternatively, to generate links to all of the files in a directory, use the FileLinks object, passing '.' to indicate that we want links generated for the current working directory. It will optionally recurse into subdirectories as well.
|
6,526
|
<ASSISTANT_TASK:>
Python Code:
# Imports
import math
import seaborn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import cm
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import normalize
# Loading xyz map
correct_map = pd.read_csv('correct_map.xyz', sep=' ', dtype='d', header=None, names=['x', 'y', 'z'])
scattered_data_10000 = pd.read_csv('scattered_data_10000.xyz', sep=' ', dtype='d', header=None, names=['x', 'y', 'z'])
NI = 100
NJ = 100
number_neighbors = 5
# Creating grid points
x_grid = np.linspace(1, 10, NI)
y_grid = np.linspace(1, 10, NJ)
grid_points = pd.DataFrame()
grid_points['x'] = np.tile(x_grid, NJ)
grid_points['y'] = y_grid.repeat(NI)
grid_points['z'] = np.zeros(NI*NJ)
import matplotlib as mpl
%matplotlib inline
mpl.rcParams['savefig.dpi'] = 250
mpl.rcParams['figure.dpi'] = 250
grid_points.plot(kind='scatter', x='x', y='y', marker='.', s=5)
plt.show()
scattered_data_10000.plot(kind='scatter', x='x', y='y', marker='.', s=5)
plt.show()
# Applying KNN
neighbors = NearestNeighbors(n_neighbors=number_neighbors, algorithm='ball_tree').fit(scattered_data_10000.loc[:, ['x', 'y']])
# Distance and index of each point from each vertice of the grid
distances, indexes = neighbors.kneighbors(grid_points.loc[:, ['x', 'y']])
# Maximum and minimum values in X axis
max_x = grid_points.loc[:, 'x'].max()
min_x = grid_points.loc[:, 'x'].min()
# Maximum and minimum values in Y axis
max_y = grid_points.loc[:, 'y'].max()
min_y = grid_points.loc[:, 'y'].min()
# Step X and Step Y
step_x = (max_x - min_x) / NI
step_y = (max_y - min_y) / NJ
# Radius
radius = 2 * math.sqrt((step_x ** 2) + (step_y ** 2))
less_radius = distances[:, 0] <= radius
distances = distances[less_radius, :]
indexes = indexes[less_radius, :]
# Using the scikit-learn library
weight_norm = 1 - normalize(distances, axis=1)
prod = weight_norm * scattered_data_10000.values[indexes, 2]
scalars = np.full(NI * NJ, 0.0)
grid_points.loc[less_radius, 'z'] = prod.sum(axis=1) / (weight_norm.sum(axis=1))
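# The assignment above is inverse-distance-style weighting:
#   z_j = sum_i(w_ij * z_i) / sum_i(w_ij), with w_ij = 1 - d_ij / ||d_j||_2
# (the weights are 1 minus the L2-normalized neighbor distances).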
plt.pcolor(correct_map.values[:, 0].reshape(NI, NJ), correct_map.values[:, 1].reshape(NI, NJ), correct_map.values[:, 2].reshape(NI, NJ), cmap=cm.jet)
plt.pcolor(grid_points.values[:, 0].reshape(NI, NJ), grid_points.values[:, 1].reshape(NI, NJ), grid_points.values[:, 2].reshape(NI, NJ), cmap=cm.jet)
dif_map = correct_map.z - grid_points.z
dif_map.describe()
error = (grid_points.z / correct_map.z) - 1
plt.hist(error)
error[error < 0] *= -1
error.describe()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading the points and creating the structured grid
Step2: Structured Grid
Step3: Cloud of Points
Step4: Applying KNN
Step5: Calculating the radius within which the nearest point has to be located
Step6: Selecting the points whose distance is less than or equal to the radius
Step7: It is interesting to normalize the distances and subtract each value from 1. That will be the weight of each distance.
Step8: Formula to calculate the value for each vertex of the structured grid
Step9: Example - Map desired
Step10: Map reconstructed using the algorithm described
Step11: Error
|
6,527
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
import phoebe
from phoebe import u # units
import numpy as np
logger = phoebe.logger()
print(phoebe.list_available_solvers())
b = phoebe.default_binary()
b.add_dataset('lc', compute_phases=phoebe.linspace(0,1,101))
b.run_compute(irrad_method='none')
times = b.get_value('times', context='model')
fluxes = b.get_value('fluxes', context='model') + np.random.normal(size=times.shape) * 0.01
sigmas = np.ones_like(times) * 0.02
b = phoebe.default_binary()
b.add_dataset('lc', times=times, fluxes=fluxes, sigmas=np.full_like(fluxes, fill_value=0.1))
b.add_solver('estimator.lc_geometry', solver='my_lcgeom_solver')
print(b.get_solver(solver='my_lcgeom_solver'))
print(b.get_parameter('expose_model').description)
print(b.get_parameter('lc_datasets').description)
print(b.get_parameter('lc_datasets').choices)
b.run_solver(solver='my_lcgeom_solver', solution='my_lcgeom_solution')
_ = b.plot(solution='my_lcgeom_solution', show=True)
print(b.adopt_solution(trial_run=True))
print(b.adopt_solution())
b.add_distribution('teff@primary', phoebe.gaussian(6000,100), distribution='mydist01')
b.add_distribution('teff@secondary', phoebe.gaussian(5500,600), distribution='mydist01')
b.add_distribution('teff@primary', phoebe.uniform(5800,6200), distribution='mydist02')
b.add_solver('sampler.emcee', priors=['mydist01', 'mydist02'], solver='myemceesolver')
print(b.filter(qualifier='prior*'))
print(b.get_parameter('priors_combine').description)
_ = b.plot_distribution_collection('priors@myemceesolver', show=True)
b.calculate_lnp('priors@myemceesolver')
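# For context: calculate_lnp sums the log-probability of the current parameter
# face-values under the requested distribution collection, so its value changes
# when priors_combine changes the effective distributions below.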
b.set_value('priors_combine', 'first')
_ = b.plot_distribution_collection('priors@myemceesolver', show=True)
b.calculate_lnp('priors@myemceesolver')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: General "Fitting" Workflow
Step2: Solving an eclipsing binary is a very time-intensive task (both for you and for your computer). There is no one-size-fits-all recipe to follow, but in general you might find the following workflow useful
Step3: Adding Solver Options
Step4: In addition to the solver API docs, remember that each parameter has a description and possibly a set of available choices (if its a ChoiceParameter or SelectParameter).
Step5: run_solver
Step6: In many cases, the solution itself is plottable - showing some sort of diagnostic figures. In some cases, such as sampler.emcee or sampler.dynesty, there are several different diagnostic figures available which can be chosen by passing the available options to style.
Step7: The proposed values can be viewed via b.adopt_solution.
Step8: Otherwise, the changes will be made and all changed parameters (including those changed via constraints) will be returned.
Step9: The Merit Function
Step10: Now we'll look at the effect of priors_combine on the resulting priors distributions that would be sent to the merit function.
|
6,528
|
<ASSISTANT_TASK:>
Python Code:
import cobra
from cobra.io import load_model
from cobra.flux_analysis import gapfill
model = load_model("iYS1720")
universal = cobra.Model("universal_reactions")
for i in [i.id for i in model.metabolites.f6p_c.reactions]:
reaction = model.reactions.get_by_id(i)
universal.add_reaction(reaction.copy())
model.remove_reactions([reaction])
model.optimize().objective_value
solution = gapfill(model, universal, demand_reactions=False)
for reaction in solution[0]:
print(reaction.id)
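# Hypothetical sanity check (not in the original notebook): copy the model,
# add the proposed reactions back, and confirm it can grow again.
model_check = model.copy()
model_check.add_reactions([r.copy() for r in solution[0]])
print(model_check.slim_optimize())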
result = gapfill(model, universal, demand_reactions=False, iterations=4)
for i, entries in enumerate(result):
print("---- Run %d ----" % (i + 1))
for e in entries:
print(e.id)
with model:
model.objective = model.add_boundary(model.metabolites.f6p_c, type='demand')
solution = gapfill(model, universal)
for reaction in solution[0]:
print(reaction.id)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this model D-Fructose-6-phosphate is an essential metabolite. We will remove all the reactions using it, and add them to a separate model.
Step2: Now, because of these gaps, the model won't grow.
Step3: We can use the model's original objective, growth, to figure out which of the removed reactions are required for the model to be feasible again. This is very similar to making the 'no-growth but growth (NGG)' predictions of Kumar et al. 2009.
Step4: We can obtain multiple possible reaction sets by having the algorithm go through multiple iterations.
Step5: We can also instead of using the original objective, specify a given metabolite that we want the model to be able to produce.
|
6,529
|
<ASSISTANT_TASK:>
Python Code:
import urllib
import zipfile
import StringIO
import string
import pandas
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import HTML
import json
county_name = ""
start_date = "20140101"
end_date = "20150101"
diag = False
## Retrieve the bounding box of the specified county - if no county is specified, the bounding boxes for all NM counties will be requested
countyBBOXlink = "http://gstore.unm.edu/apps/epscor/search/nm_counties.json?limit=100&query=" + county_name ## define the request URL
print countyBBOXlink ## print the request URL for verification
print
bboxFile = urllib.urlopen(countyBBOXlink) ## request the bounding box information from the server
bboxData = json.load(bboxFile)
# print bboxData
# Get data for BBOX defined by specified county(ies)
myCounties = []
for countyBBOX in bboxData["results"]:
minx,miny,maxx,maxy = countyBBOX[u'box']
myDownloadLink = "http://waterservices.usgs.gov/nwis/iv/?bBox=%f,%f,%f,%f&format=json&period=P7D¶meterCd=00060" % (minx,miny,maxx,maxy) # retrieve data for the specified BBOX for the last 7 days as JSON
print myDownloadLink
myCounty = {u'name':countyBBOX[u'text'],u'minx':minx,u'miny':miny,u'maxx':maxx,u'maxy':maxy,u'downloadLink':myDownloadLink}
myCounties.append(myCounty)
#countySubset = [myCounties[0]]
#print countySubset
valueList = []
for county in myCounties:
print "processing: %s" % county["downloadLink"]
try:
datafile = urllib.urlopen(county["downloadLink"])
data = json.load(datafile)
values = data["value"]["timeSeries"][0]["values"]
for item in values:
for valueItem in item["value"]:
#print json.dumps(item["value"], sort_keys=True, indent=4)
myValue = {"dateTime":valueItem["dateTime"].replace("T"," ").replace(".000-06:00",""),"value":valueItem["value"], "county":county["name"]}
#print myValue
valueList.append(myValue)
#print valueList
except:
print "\tfailed for this one ..."
#print json.dumps(values, sort_keys=True, indent=4)
df = pandas.DataFrame(valueList)
df['dateTime'] = pandas.to_datetime(df["dateTime"])
df['value'] = df['value'].astype(float).fillna(-1)
print df.shape
print df.dtypes
print "column names"
print "------------"
for colName in df.columns:
print colName
print
print df.head()
%matplotlib inline
fig,ax = plt.subplots(figsize=(10,8))
ax.width = 1
ax.height = .5
plt.xkcd()
#plt.ylim(-25,30)
ax.plot_date(df['dateTime'], df['value'], '.', label="Discharge (cf/sec)", color="0.2")
fig.autofmt_xdate()
plt.legend(loc=2, bbox_to_anchor=(1.0,1))
plt.title("15-minute Discharge - cubic feet per second")
plt.ylabel("Discharge")
plt.xlabel("Date")
plt.show()
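# Optional follow-up (illustrative sketch): aggregate the 15-minute values to
# daily means per county.
daily = df.set_index('dateTime').groupby('county')['value'].resample('D').mean()
print daily.head()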
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set some initial variables
Step2: Options
|
6,530
|
<ASSISTANT_TASK:>
Python Code:
def anti_shuffle(s):
return ' '.join(''.join(sorted(word)) for word in s.split(' '))
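# Example: anti_shuffle('Hello World!!!') -> 'Hello !!!Wdlor'
# (characters within each word are sorted by ASCII value; word order and
# spacing are preserved).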
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
6,531
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'niwa', 'sandbox-2', 'toplevel')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Flux Correction
Step7: 3. Key Properties --> Genealogy
Step8: 3.2. CMIP3 Parent
Step9: 3.3. CMIP5 Parent
Step10: 3.4. Previous Name
Step11: 4. Key Properties --> Software Properties
Step12: 4.2. Code Version
Step13: 4.3. Code Languages
Step14: 4.4. Components Structure
Step15: 4.5. Coupler
Step16: 5. Key Properties --> Coupling
Step17: 5.2. Atmosphere Double Flux
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Step19: 5.4. Atmosphere Relative Winds
Step20: 6. Key Properties --> Tuning Applied
Step21: 6.2. Global Mean Metrics Used
Step22: 6.3. Regional Metrics Used
Step23: 6.4. Trend Metrics Used
Step24: 6.5. Energy Balance
Step25: 6.6. Fresh Water Balance
Step26: 7. Key Properties --> Conservation --> Heat
Step27: 7.2. Atmos Ocean Interface
Step28: 7.3. Atmos Land Interface
Step29: 7.4. Atmos Sea-ice Interface
Step30: 7.5. Ocean Seaice Interface
Step31: 7.6. Land Ocean Interface
Step32: 8. Key Properties --> Conservation --> Fresh Water
Step33: 8.2. Atmos Ocean Interface
Step34: 8.3. Atmos Land Interface
Step35: 8.4. Atmos Sea-ice Interface
Step36: 8.5. Ocean Seaice Interface
Step37: 8.6. Runoff
Step38: 8.7. Iceberg Calving
Step39: 8.8. Endoreic Basins
Step40: 8.9. Snow Accumulation
Step41: 9. Key Properties --> Conservation --> Salt
Step42: 10. Key Properties --> Conservation --> Momentum
Step43: 11. Radiative Forcings
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Step45: 12.2. Additional Information
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Step47: 13.2. Additional Information
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Step49: 14.2. Additional Information
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Step51: 15.2. Additional Information
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Step53: 16.2. Additional Information
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Step55: 17.2. Equivalence Concentration
Step56: 17.3. Additional Information
Step57: 18. Radiative Forcings --> Aerosols --> SO4
Step58: 18.2. Additional Information
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Step60: 19.2. Additional Information
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Step62: 20.2. Additional Information
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Step64: 21.2. Additional Information
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Step66: 22.2. Aerosol Effect On Ice Clouds
Step67: 22.3. Additional Information
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Step69: 23.2. Aerosol Effect On Ice Clouds
Step70: 23.3. RFaci From Sulfate Only
Step71: 23.4. Additional Information
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Step73: 24.2. Additional Information
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Step77: 25.4. Additional Information
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Step81: 26.4. Additional Information
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Step83: 27.2. Additional Information
Step84: 28. Radiative Forcings --> Other --> Land Use
Step85: 28.2. Crop Change Only
Step86: 28.3. Additional Information
Step87: 29. Radiative Forcings --> Other --> Solar
Step88: 29.2. Additional Information
|
6,532
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
from IPython.display import HTML
HTML('../style/code_toggle.html')
import ephem
import matplotlib
%pylab inline
pylab.rcParams['figure.figsize'] = (15, 10)
#Creating the observer: KAT-7
KAT7 = ephem.Observer()
KAT7.lat = '-30:43:17'
KAT7.lon = '21:25:40.08'
KAT7.elevation = 0.0
KAT7.date = '2016/5/30 00:00:00' #UTC
#Creating the celestial bodies
star_names = np.array(["Rigel","Thuban","Mimosa","Procyon","Sirius","Achernar","Menkar","Zaurak","Aldebaran","Betelgeuse"])
star_objects = np.empty((len(star_names),),dtype=object)
for k in xrange(len(star_names)):
star_objects[k] = ephem.star(star_names[k],KAT7)
#Creating the time-strings at which we observe
hours = np.empty((96,),dtype=object)
minutes = np.empty((96,),dtype=object)
alt_az_mat = np.zeros((len(star_names),len(hours)+1,2),dtype=float) #(sources,hours,horz_coord)
hours_c = 0
for k in xrange(len(hours)):
if k % 4 == 0:
if hours_c < 10:
hours[k] = '0'+str(hours_c)
else:
hours[k] = str(hours_c)
minutes[k] = "00"
elif k % 4 == 1:
if hours_c < 10:
hours[k] = '0'+str(hours_c)
else:
hours[k] = str(hours_c)
minutes[k] = "15"
elif k % 4 == 2:
if hours_c < 10:
hours[k] = '0'+str(hours_c)
else:
hours[k] = str(hours_c)
minutes[k] = "30"
elif k % 4 == 3:
if hours_c < 10:
hours[k] = '0'+str(hours_c)
else:
hours[k] = str(hours_c)
hours_c = hours_c + 1
minutes[k] = "45"
#Compute the alt/az for different stars observed by KAT-7 at different times on 2016/5/30
for k in xrange(len(hours)):
#Set new time
n_date = '2016/5/30 ' + hours[k] + ':' + minutes[k] + ':00'
KAT7.date = n_date
#Calculate new alt/az
for j in xrange(len(star_names)):
star_objects[j].compute(KAT7)
alt_az_mat[j,k,0] = float(star_objects[j].alt)
alt_az_mat[j,k,1] = float(star_objects[j].az)
#Copy first value to last value
alt_az_mat[:,-1,:] = alt_az_mat[:,0,:]
time_v = np.linspace(0,24,len(hours)+1,endpoint=True)
#Plot alt
matplotlib.rcParams.update({'font.size': 13.75})
fig, ax = plt.subplots()
c = ["r","b","g","y","m","c","k"]
l = ["-","--"]
l_ind = 0
c_ind = 0
for k in xrange(len(star_names)):
if c_ind == 7:
c_ind = 0
l_ind = 1
mask = np.logical_not(np.logical_and(alt_az_mat[k,:,0]*(180/np.pi)>-5,alt_az_mat[k,:,0]*(180/np.pi)<5))
new_curve_y = alt_az_mat[k,mask,0]*(180/np.pi)
new_curve_x = time_v[mask]
ax.plot(new_curve_x,new_curve_y,c[c_ind]+l[l_ind],label=star_names[k],lw=2,zorder=k)
c_ind = c_ind +1
ax.fill_between(time_v, -5, 5, facecolor='k',alpha=1,zorder=k+1)
ax.annotate("HORIZON", xy = (11.5,5), xytext=(11.5, 15),arrowprops=dict(facecolor="b", shrink=1))
ax.legend()
ax.set_xlim([0,24])
ax.set_ylim([-90,90])
ticks = np.array([-90,-80,-70,-60,-50,-40,-30,-20,-10,0,10,20,30,40,50,60,70,80,90])
plt.yticks(ticks)
ticks = np.array([0,2,4,6,8,10,12,14,16,18,20,22,24])
plt.xticks(ticks)
plt.xlabel("UTC [$h$]")
plt.ylabel("Altitude [$^{\circ}$]")
plt.title("KAT-7: 2016/5/30")
labels = [item.get_text() for item in ax.get_yticklabels()]
labels = np.array(["-90$^{\circ}$","-80$^{\circ}$","-70$^{\circ}$","-60$^{\circ}$","-50$^{\circ}$","-40$^{\circ}$","-30$^{\circ}$","-20$^{\circ}$","-10$^{\circ}$","0$^{\circ}$","10$^{\circ}$","20$^{\circ}$","30$^{\circ}$","40$^{\circ}$","50$^{\circ}$","60$^{\circ}$","70$^{\circ}$","80$^{\circ}$","90$^{\circ}$"])
ax.set_yticklabels(labels)
ax.grid('on')
#Plot az
matplotlib.rcParams.update({'font.size': 13.75})
fig, ax = plt.subplots()
c = ["r","b","g","y","m","c","k"]
l = ["-","--"]
l_ind = 0
c_ind = 0
for i in xrange(10):
if c_ind == 7:
c_ind = 0
l_ind = 1
plt.plot(time_v,alt_az_mat[i,:,1]*(180/np.pi),c[c_ind]+l[l_ind],lw=2,label=star_names[i])
c_ind = c_ind +1
ax.legend()
ax.set_xlim([0,24])
ax.set_ylim([0,360])
ticks = np.array([0,60,120,180,240,300,360])
plt.yticks(ticks)
ticks = np.array([0,2,4,6,8,10,12,14,16,18,20,22,24])
plt.xticks(ticks)
plt.xlabel("UTC [$h$]")
plt.ylabel("Azimuth [$^{\circ}$]")
plt.title("KAT-7: 2016/5/30")
labels = [item.get_text() for item in ax.get_yticklabels()]
labels = np.array(["0$^{\circ}$","60$^{\circ}$","120$^{\circ}$","180$^{\circ}$","240$^{\circ}$","300$^{\circ}$","360$^{\circ}$"])
ax.set_yticklabels(labels)
ax.grid('on')
#Preliminaries
matplotlib.rcParams.update({'font.size': 13.75})
observatories = ["LOFAR","KAT7","MWA","VLA","ALMA","GMRT"]
lat_v = ["52:54:32","-30:43:17","-26:42:12","34:04:43","-23:01:09","19:05:47"]
lon_v = ["06:52:08","21:25:40.08","116:40:16","-107:37:05","-67:45:12","74:02:59"]
alt_az = np.zeros((len(observatories),2),dtype=float)
#Loading different observatories and calculating alt/az of Betelgeuse for each of them
for k in xrange(len(observatories)):
obs = ephem.Observer()
obs.lat = lat_v[k]
obs.lon = lon_v[k]
obs.elevation = 0.0
obs.date = '2016/5/30 00:00:00' #UTC
betelgeuse = ephem.star("Betelgeuse",obs)
alt_az[k,0] = float(betelgeuse.alt)
alt_az[k,1] = float(betelgeuse.az)
#Plotting
cluster = ['o','^','>','s','*','v']
col = ['b','r','g','k','c','m']
fig, ax = plt.subplots()
for xp, yp, m, n, col_v in zip(alt_az[:,0]*(180/np.pi), alt_az[:,1]*(180/np.pi), cluster, observatories,col):
ax.plot([xp],[yp], marker=m, c = col_v, label = n, markersize = 20, linestyle='None')
ax.legend(numpoints=1)
ax.set_xlim([-90,90])
ax.set_ylim([0,360])
ticks = np.array([0,60,120,180,240,300,360])
plt.yticks(ticks)
ticks = np.array([-90,-80,-70,-60,-50,-40,-30,-20,-10,0,10,20,30,40,50,60,70,80,90])
plt.xticks(ticks)
labels = [item.get_text() for item in ax.get_yticklabels()]
labels = np.array(["0$^{\circ}$","60$^{\circ}$","120$^{\circ}$","180$^{\circ}$","240$^{\circ}$","300$^{\circ}$","360$^{\circ}$"])
ax.set_yticklabels(labels)
labels = [item.get_text() for item in ax.get_xticklabels()]
labels = np.array(["-90$^{\circ}$","-80$^{\circ}$","-70$^{\circ}$","-60$^{\circ}$","-50$^{\circ}$","-40$^{\circ}$","-30$^{\circ}$","-20$^{\circ}$","-10$^{\circ}$","0$^{\circ}$","10$^{\circ}$","20$^{\circ}$","30$^{\circ}$","40$^{\circ}$","50$^{\circ}$","60$^{\circ}$","70$^{\circ}$","80$^{\circ}$","90$^{\circ}$"])
ax.set_xticklabels(labels)
plt.xlabel("Altitude [$^{\circ}$]")
plt.ylabel("Azimuth [$^{\circ}$]")
plt.title("Betelgeuse: 2016/5/30 - 00:00:00 UTC")
ax.grid('on')
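# --- Optional sketch (not in the original notebook): pyephem can also
# report rise/set times directly, complementing the altitude curves above.
# Assumes KAT7 and ephem are defined as earlier in this cell sequence.
KAT7.date = '2016/5/30 00:00:00'
betelgeuse_kat7 = ephem.star("Betelgeuse", KAT7)
print(KAT7.next_rising(betelgeuse_kat7))
print(KAT7.next_setting(betelgeuse_kat7))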
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import section specific modules
Step2: Horizontal Coordinates (ALT/AZ)
Step3: Figure 3.3.3
Step4: Figure 3.3.4
|
6,533
|
<ASSISTANT_TASK:>
Python Code:
import open_cp.sources.chicago as chicago
import geopandas as gpd
import sys, os, csv, lzma
filename = os.path.join("..", "..", "open_cp", "sources", "chicago.csv")
filename_all = os.path.join("..", "..", "open_cp", "sources", "chicago_all.csv.xz")
filename_all1 = os.path.join("..", "..", "open_cp", "sources", "chicago_all1.csv.xz")
with open(filename, "rt") as file:
reader = csv.reader(file)
print(next(reader))
print(next(reader))
with lzma.open(filename_all, "rt") as file:
reader = csv.reader(file)
print(next(reader))
print(next(reader))
geo_data = chicago.load_to_GeoJSON()
geo_data[0]
frame = chicago.load_to_geoDataFrame()
frame.head()
# On my Windows install, if I don't do this, I get a GDAL error in
# the Jupyter console, and the resulting ".prj" file is empty.
# This isn't critical, but it confuses QGIS, and you end up having to
# choose a projection when loading the shape-file.
import os
os.environ["GDAL_DATA"] = "C:\\Users\\Matthew\\Anaconda3\\Library\\share\\gdal\\"
frame.to_file("chicago")
with lzma.open(filename_all, "rt") as file:
features = [ event for event in chicago.generate_GeoJSON_Features(file, type="all")
if event["properties"]["crime"] == "THEFT" ]
frame = gpd.GeoDataFrame.from_features(features)
frame.crs = {"init":"EPSG:4326"} # Lon/Lat native coords
frame.head()
frame.to_file("chicago_all_theft")
with lzma.open(filename_all, "rt") as file:
features = [ event for event in chicago.generate_GeoJSON_Features(file, type="all")
if event["properties"]["crime"] == "BURGLARY" ]
frame = gpd.GeoDataFrame.from_features(features)
frame.crs = {"init":"EPSG:4326"} # Lon/Lat native coords
frame.head()
frame.to_file("chicago_all_burglary")
frame["type"].unique()
frame["location"].unique()
with lzma.open(filename_all, "rt") as file:
features = [ event for event in chicago.generate_GeoJSON_Features(file, type="all")
if event["properties"]["timestamp"].startswith("2001") ]
frame = gpd.GeoDataFrame.from_features(features)
frame.crs = {"init":"EPSG:4326"} # Lon/Lat native coords
frame.head()
frame.to_file("chicago_2001")
longs, lats = [], []
xcs, ycs = [], []
with open(filename, "rt") as file:
reader = csv.reader(file)
header = next(reader)
print(header)
for row in reader:
if len(row[14]) > 0:
longs.append(row[14])
lats.append(row[15])
xcs.append(row[12])
ycs.append(row[13])
set(len(x) for x in longs), set(len(x) for x in lats)
any(x.find('.') >= 0 for x in xcs), any(y.find('.') >= 0 for y in ycs)
import collections
with lzma.open(filename_all, "rt") as file:
c = collections.Counter( event["properties"]["case"] for event in
chicago.generate_GeoJSON_Features(file, type="all") )
multiples = set( key for key in c if c[key] > 1 )
len(multiples)
with lzma.open(filename_all, "rt") as file:
data = gpd.GeoDataFrame.from_features(
event for event in chicago.generate_GeoJSON_Features(file, type="all")
if event["properties"]["case"] in multiples
)
len(data), len(data.case.unique())
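# --- Optional sketch: the portal's X/Y Coordinate columns appear to be in
# the Illinois East state-plane system (EPSG:3435 -- an assumption, not stated
# in the original); geopandas can reproject the lon/lat frame to the same CRS
# for a direct comparison.
projected = frame.to_crs(epsg=3435)
print(projected.geometry.head())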
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let us look at the snapshot of the last year, vs the total dataset. The data appears to be the same, though the exact format changes.
Step2: As well as loading data directly into a TimedPoints class, we can process a sub-set of the data to GeoJSON, or straight to a geopandas dataframe (if geopandas is installed).
Step3: Explore with QGIS
Step4: A geoPandas example
Step5: Upon loading into QGIS to visualise, we find that the 2001 data seems to be geocoded in a different way... The events are not on the road, and the distribution looks less artificial. Let's extract the 2001 burglary data, and then the all the 2001 data, and save.
Step6: Explore rounding errors
Step7: Repeated data
|
6,534
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot
# We have this here to trigger matplotlib's font cache stuff.
# This cell is hidden from the output
import pandas as pd
import numpy as np
np.random.seed(24)
df = pd.DataFrame({'A': np.linspace(1, 10, 10)})
df = pd.concat([df, pd.DataFrame(np.random.randn(10, 4), columns=list('BCDE'))],
axis=1)
df.iloc[0, 2] = np.nan
df.style
df.style.highlight_null().render().split('\n')[:10]
def color_negative_red(val):
Takes a scalar and returns a string with
the css property `'color: red'` for negative
strings, black otherwise.
color = 'red' if val < 0 else 'black'
return 'color: %s' % color
s = df.style.applymap(color_negative_red)
s
def highlight_max(s):
'''
highlight the maximum in a Series yellow.
'''
is_max = s == s.max()
return ['background-color: yellow' if v else '' for v in is_max]
df.style.apply(highlight_max)
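# Sketch: the same helper works row-wise too -- passing axis=1 makes
# Styler.apply hand each *row* to highlight_max instead of each column.
df.style.apply(highlight_max, axis=1)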
df.style.\
applymap(color_negative_red).\
apply(highlight_max)
def highlight_max(data, color='yellow'):
'''
highlight the maximum in a Series or DataFrame
'''
attr = 'background-color: {}'.format(color)
if data.ndim == 1: # Series from .apply(axis=0) or axis=1
is_max = data == data.max()
return [attr if v else '' for v in is_max]
else: # from .apply(axis=None)
is_max = data == data.max().max()
return pd.DataFrame(np.where(is_max, attr, ''),
index=data.index, columns=data.columns)
df.style.apply(highlight_max, color='darkorange', axis=None)
df.style.apply(highlight_max, subset=['B', 'C', 'D'])
df.style.applymap(color_negative_red,
subset=pd.IndexSlice[2:5, ['B', 'D']])
df.style.format("{:.2%}")
df.style.format({'B': "{:0<4.0f}", 'D': '{:+.2f}'})
df.style.format({"B": lambda x: "ยฑ{:.2f}".format(abs(x))})
df.style.highlight_null(null_color='red')
import seaborn as sns
cm = sns.light_palette("green", as_cmap=True)
s = df.style.background_gradient(cmap=cm)
s
# Uses the full color range
df.loc[:4].style.background_gradient(cmap='viridis')
# Compress the color range
(df.loc[:4]
.style
.background_gradient(cmap='viridis', low=.5, high=0)
.highlight_null('red'))
df.style.highlight_max(axis=0)
df.style.set_properties(**{'background-color': 'black',
'color': 'lawngreen',
'border-color': 'white'})
df.style.bar(subset=['A', 'B'], color='#d65f5f')
df.style.bar(subset=['A', 'B'], align='mid', color=['#d65f5f', '#5fba7d'])
import pandas as pd
from IPython.display import HTML
# Test series
test1 = pd.Series([-100,-60,-30,-20], name='All Negative')
test2 = pd.Series([10,20,50,100], name='All Positive')
test3 = pd.Series([-10,-5,0,90], name='Both Pos and Neg')
head = """
<table>
    <thead>
        <th>Align</th>
        <th>All Negative</th>
        <th>All Positive</th>
        <th>Both Neg and Pos</th>
    </thead>
    <tbody>
"""
aligns = ['left','zero','mid']
for align in aligns:
row = "<tr><th>{}</th>".format(align)
for serie in [test1,test2,test3]:
s = serie.copy()
s.name=''
row += "<td>{}</td>".format(s.to_frame().style.bar(align=align,
color=['#d65f5f', '#5fba7d'],
width=100).render()) #testn['width']
row += '</tr>'
head += row
head += """
    </tbody>
</table>
"""
HTML(head)
df2 = -df
style1 = df.style.applymap(color_negative_red)
style1
style2 = df2.style
style2.use(style1.export())
style2
with pd.option_context('display.precision', 2):
html = (df.style
.applymap(color_negative_red)
.apply(highlight_max))
html
df.style\
.applymap(color_negative_red)\
.apply(highlight_max)\
.set_precision(2)
df.style.set_caption('Colormaps, with a caption.')\
.background_gradient(cmap=cm)
from IPython.display import HTML
def hover(hover_color="#ffff99"):
return dict(selector="tr:hover",
props=[("background-color", "%s" % hover_color)])
styles = [
hover(),
dict(selector="th", props=[("font-size", "150%"),
("text-align", "center")]),
dict(selector="caption", props=[("caption-side", "bottom")])
]
html = (df.style.set_table_styles(styles)
.set_caption("Hover to highlight."))
html
import ipywidgets as widgets  # the old IPython.html.widgets location is deprecated
@widgets.interact
def f(h_neg=(0, 359, 1), h_pos=(0, 359), s=(0., 99.9), l=(0., 99.9)):
return df.style.background_gradient(
cmap=sns.palettes.diverging_palette(h_neg=h_neg, h_pos=h_pos, s=s, l=l,
as_cmap=True)
)
def magnify():
return [dict(selector="th",
props=[("font-size", "4pt")]),
dict(selector="td",
props=[('padding', "0em 0em")]),
dict(selector="th:hover",
props=[("font-size", "12pt")]),
dict(selector="tr:hover td:hover",
props=[('max-width', '200px'),
('font-size', '12pt')])
]
np.random.seed(25)
cmap = sns.diverging_palette(5, 250, as_cmap=True)
bigdf = pd.DataFrame(np.random.randn(20, 25)).cumsum()
bigdf.style.background_gradient(cmap, axis=1)\
.set_properties(**{'max-width': '80px', 'font-size': '1pt'})\
.set_caption("Hover to magnify")\
.set_precision(2)\
.set_table_styles(magnify())
df.style.\
applymap(color_negative_red).\
apply(highlight_max).\
to_excel('styled.xlsx', engine='openpyxl')
from jinja2 import Environment, ChoiceLoader, FileSystemLoader
from IPython.display import HTML
from pandas.io.formats.style import Styler
%mkdir templates
%%file templates/myhtml.tpl
{% extends "html.tpl" %}
{% block table %}
<h1>{{ table_title|default("My Table") }}</h1>
{{ super() }}
{% endblock table %}
class MyStyler(Styler):
env = Environment(
loader=ChoiceLoader([
FileSystemLoader("templates"), # contains ours
Styler.loader, # the default
])
)
template = env.get_template("myhtml.tpl")
MyStyler(df)
HTML(MyStyler(df).render(table_title="Extending Example"))
EasyStyler = Styler.from_custom_template("templates", "myhtml.tpl")
EasyStyler(df)
with open("template_structure.html") as f:
structure = f.read()
HTML(structure)
# Hack to get the same style in the notebook as the
# main site. This is hidden in the docs.
from IPython.display import HTML
with open("themes/nature_with_gtoc/static/nature.css_t") as f:
css = f.read()
HTML('<style>{}</style>'.format(css))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here's a boring example of rendering a DataFrame, without any (visible) styles
Step2: Note
Step4: The row0_col2 is the identifier for that particular cell. We've also prepended each row/column identifier with a UUID unique to each DataFrame so that the style from one doesn't collide with the styling from another within the same notebook or page (you can set the uuid if you'd like to tie together the styling of two DataFrames).
Step5: In this case, the cell's style depends only on its own value.
Step6: Notice the similarity with the standard df.applymap, which operates on DataFrames elementwise. We want you to be able to reuse your existing knowledge of how to interact with DataFrames.
Step7: In this case the input is a Series, one column at a time.
Step8: Above we used Styler.apply to pass in each column one at a time.
Step9: When using Styler.apply(func, axis=None), the function must return a DataFrame with the same index and column labels.
Step10: Building Styles Summary
Step11: For row and column slicing, any valid indexer to .loc will work.
Step12: Only label-based slicing is supported right now, not positional.
Step13: Use a dictionary to format specific columns.
Step14: Or pass in a callable (or dictionary of callables) for more flexible handling.
Step15: Builtin Styles
Step16: You can create "heatmaps" with the background_gradient method. These require matplotlib, and we'll use Seaborn to get a nice colormap.
Step17: Styler.background_gradient takes the keyword arguments low and high. Roughly speaking these extend the range of your data by low and high percent so that when we convert the colors, the colormap's entire range isn't used. This is useful so that you can actually read the text still.
Step18: There's also .highlight_min and .highlight_max.
Step19: Use Styler.set_properties when the style doesn't actually depend on the values.
Step20: Bar charts
Step21: New in version 0.20.0 is the ability to customize the bar chart further
Step24: The following example aims to highlight the behavior of the new align options
Step25: Sharing Styles
Step26: Notice that you're able share the styles even though they're data aware. The styles are re-evaluated on the new DataFrame they've been used upon.
Step27: Or through a set_precision method.
Step28: Setting the precision only affects the printed number; the full-precision values are always passed to your style functions. You can always use df.round(2).style if you'd prefer to round from the start.
Step29: Table Styles
Step30: table_styles should be a list of dictionaries.
Step31: Export to Excel
Step32: A screenshot of the output
Step33: This next cell writes the custom template.
Step34: Now that we've created a template, we need to set up a subclass of Styler that
Step35: Notice that we include the original loader in our environment's loader.
Step36: Our custom template accepts a table_title keyword. We can provide the value in the .render method.
Step37: For convenience, we provide the Styler.from_custom_template method that does the same as the custom subclass.
Step38: Here's the template structure
Step39: See the template in the GitHub repo for more details.
|
6,535
|
<ASSISTANT_TASK:>
Python Code:
from pypot.creatures import PoppyErgo
ergo = PoppyErgo()
from pypot.creatures import PoppyErgoJr
ergo = PoppyErgoJr()
ergo
ergo.m2
ergo.m2.present_position
ergo.m2.present_temperature
for m in ergo.motors:
print 'Motor "{}" current position = {}'.format(m.name, m.present_position)
ergo.m3.compliant
ergo.m6.compliant = False
ergo.m6.goal_position = 0.
for m in ergo.motors:
m.compliant = False
# Goes to the position 0 in 2s
m.goto_position(0, 2)
# You can also change the maximum speed of the motors
# Warning! goto_position also changes the maximum speed.
for m in ergo.motors:
m.moving_speed = 50
import time
ergo.m4.goal_position = 30
time.sleep(1.)
ergo.m4.goal_position = -30
ergo.m4.goal_position = 30
ergo.m5.goal_position = 20
ergo.m6.goal_position = -20
time.sleep(1.)
ergo.m4.goal_position = -30
ergo.m5.goal_position = -20
ergo.m6.goal_position = 20
def dance():
ergo.m4.goal_position = 30
ergo.m5.goal_position = 20
ergo.m6.goal_position = -20
time.sleep(1.)
ergo.m4.goal_position = -30
ergo.m5.goal_position = -20
ergo.m6.goal_position = 20
time.sleep(1.)
dance()
for _ in range(4):
dance()
def dance2():
ergo.goto_position({'m4': 30, 'm5': 20, 'm6': -20}, 1., wait=True)
ergo.goto_position({'m4': -30, 'm5': -20, 'm6': 20}, 1., wait=True)
for _ in range(4):
dance2()
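# --- Optional sketch (an assumption: pypot's move primitives are available
# via pypot.primitive.move): record a hand-guided demonstration and replay
# it. The 50 Hz sampling rate is an illustrative choice.
from pypot.primitive.move import MoveRecorder, MovePlayer
recorder = MoveRecorder(ergo, 50, ergo.motors)
recorder.start()
time.sleep(5.)
recorder.stop()
MovePlayer(ergo, recorder.move).start()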
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For a PoppyErgoJr
Step2: Get robot current status
Step3: Turn on/off the compliancy of a motor
Step4: Go to the zero position
Step5: Make a simple dance movement
Step6: On multiple motors
Step7: Wrap it inside a function for convenience
Step8: Using goto position instead
|
6,536
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('seaborn-whitegrid')
# %load _solutions/case2_observations_analysis1.py
# %load _solutions/case2_observations_analysis2.py
# %load _solutions/case2_observations_analysis3.py
# %load _solutions/case2_observations_analysis4.py
# %load _solutions/case2_observations_analysis5.py
# %load _solutions/case2_observations_analysis6.py
# %load _solutions/case2_observations_analysis7.py
# %load _solutions/case2_observations_analysis8.py
# %load _solutions/case2_observations_analysis9.py
# %load _solutions/case2_observations_analysis10.py
not_identified.head()
survey_data = survey_data_unique.dropna(subset=['species']).copy()
survey_data['name'] = survey_data['genus'] + ' ' + survey_data['species']
survey_data['taxa'].value_counts()
#survey_data.groupby('taxa').size()
# %load _solutions/case2_observations_analysis11.py
len(non_rodent_species)
# %load _solutions/case2_observations_analysis12.py
len(r_species)
r_species["name"].value_counts()
# %load _solutions/case2_observations_analysis13.py
len(non_bird_species)
# %load _solutions/case2_observations_analysis14.py
# %load _solutions/case2_observations_analysis15.py
# %load _solutions/case2_observations_analysis16.py
# %load _solutions/case2_observations_analysis17.py
# %load _solutions/case2_observations_analysis18.py
# %load _solutions/case2_observations_analysis19.py
# %load _solutions/case2_observations_analysis20.py
# %load _solutions/case2_observations_analysis21.py
# %load _solutions/case2_observations_analysis22.py
# %load _solutions/case2_observations_analysis23.py
pivoted.head()
pivoted.plot(kind='bar', figsize=(12, 6), rot=0)
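# Sketch: the same pivot table rendered as a heatmap (assumes `pivoted` is
# the count pivot shown above), which can make dominant categories easier
# to spot than grouped bars.
fig, ax = plt.subplots(figsize=(10, 6))
sns.heatmap(pivoted, cmap="Reds", ax=ax)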
# %load _solutions/case2_observations_analysis24.py
# %load _solutions/case2_observations_analysis25.py
# %load _solutions/case2_observations_analysis26.py
# %load _solutions/case2_observations_analysis27.py
# %load _solutions/case2_observations_analysis28.py
# %load _solutions/case2_observations_analysis29.py
# %load _solutions/case2_observations_analysis30.py
# %load _solutions/case2_observations_analysis31.py
# %load _solutions/case2_observations_analysis32.py
# %load _solutions/case2_observations_analysis33.py
# %load _solutions/case2_observations_analysis34.py
# %load _solutions/case2_observations_analysis35.py
# %load _solutions/case2_observations_analysis36.py
# %load _solutions/case2_observations_analysis37.py
# %load _solutions/case2_observations_analysis38.py
# %load _solutions/case2_observations_analysis39.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Reading in the enriched observations data
Step2: 2. Tackle missing values (NaN) and duplicate values
Step3: <div class="alert alert-success">
Step4: <div class="alert alert-success">
Step5: <div class="alert alert-success">
Step6: <div class="alert alert-success">
Step7: <div class="alert alert-success">
Step8: NOTE!
Step9: <div class="alert alert-info">
Step10: <div class="alert alert-success">
Step11: <div class="alert alert-success">
Step12: <div class="alert alert-success">
Step13: <div class="alert alert-success">
Step14: Alternative solution
Step15: <div class="alert alert-success">
Step16: 4. Species abundance
Step17: <div class="alert alert-success">
Step18: <div class="alert alert-success">
Step19: <div class="alert alert-success">
Step20: To check, we can use the variable pivoted to plot the result
Step21: <div class="alert alert-success">
Step22: <div class="alert alert-success">
Step23: <div class="alert alert-success">
Step24: 5. Observations over time
Step25: To evaluate the intensity or number of occurrences during different time spans, a heatmap is an interesting representation.
Step26: Remark that we started from a tidy data format (also called long format) and converted to a wide format, with the years in the row index, the months in the columns, and the counts for each year/month combination as values.
Step27: <div class="alert alert-success">
Step28: <div class="alert alert-success">
Step29: Plotting with seaborn
Step30: <div class="alert alert-success">
Step31: <div class="alert alert-success">
|
6,537
|
<ASSISTANT_TASK:>
Python Code:
import logging
import random
import time
import matplotlib.pyplot as plt
import mxnet as mx
from mxnet import gluon, nd, autograd
import numpy as np
batch_size = 128
epochs = 5
ctx = mx.gpu() if mx.context.num_gpus() > 0 else mx.cpu()
lr = 0.01
train_dataset = gluon.data.vision.MNIST(train=True)
test_dataset = gluon.data.vision.MNIST(train=False)
def transform(x,y):
x = x.transpose((2,0,1)).astype('float32')/255.
y1 = y
y2 = y % 2 #odd or even
return x, np.float32(y1), np.float32(y2)
train_dataset_t = train_dataset.transform(transform)
test_dataset_t = test_dataset.transform(transform)
train_data = gluon.data.DataLoader(train_dataset_t, shuffle=True, last_batch='rollover', batch_size=batch_size, num_workers=5)
test_data = gluon.data.DataLoader(test_dataset_t, shuffle=False, last_batch='rollover', batch_size=batch_size, num_workers=5)
print("Input shape: {}, Target Labels: {}".format(train_dataset[0][0].shape, train_dataset_t[0][1:]))
class MultiTaskNetwork(gluon.HybridBlock):
def __init__(self):
super(MultiTaskNetwork, self).__init__()
self.shared = gluon.nn.HybridSequential()
with self.shared.name_scope():
self.shared.add(
gluon.nn.Dense(128, activation='relu'),
gluon.nn.Dense(64, activation='relu'),
gluon.nn.Dense(10, activation='relu')
)
self.output1 = gluon.nn.Dense(10) # Digist recognition
self.output2 = gluon.nn.Dense(1) # odd or even
def hybrid_forward(self, F, x):
y = self.shared(x)
output1 = self.output1(y)
output2 = self.output2(y)
return output1, output2
loss_digits = gluon.loss.SoftmaxCELoss()
loss_odd_even = gluon.loss.SigmoidBCELoss()
mx.random.seed(42)
random.seed(42)
net = MultiTaskNetwork()
net.initialize(mx.init.Xavier(), ctx=ctx)
net.hybridize() # hybridize for speed
trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate':lr})
def evaluate_accuracy(net, data_iterator):
acc_digits = mx.gluon.metric.Accuracy(name='digits')
acc_odd_even = mx.gluon.metric.Accuracy(name='odd_even')
for i, (data, label_digit, label_odd_even) in enumerate(data_iterator):
data = data.as_in_context(ctx)
label_digit = label_digit.as_in_context(ctx)
label_odd_even = label_odd_even.as_in_context(ctx).reshape(-1,1)
output_digit, output_odd_even = net(data)
acc_digits.update(label_digit, output_digit.softmax())
acc_odd_even.update(label_odd_even, output_odd_even.sigmoid() > 0.5)
return acc_digits.get(), acc_odd_even.get()
alpha = 0.5 # Combine losses factor
for e in range(epochs):
# Accuracies for each task
acc_digits = mx.gluon.metric.Accuracy(name='digits')
acc_odd_even = mx.gluon.metric.Accuracy(name='odd_even')
# Accumulative losses
l_digits_ = 0.
l_odd_even_ = 0.
for i, (data, label_digit, label_odd_even) in enumerate(train_data):
data = data.as_in_context(ctx)
label_digit = label_digit.as_in_context(ctx)
label_odd_even = label_odd_even.as_in_context(ctx).reshape(-1,1)
with autograd.record():
output_digit, output_odd_even = net(data)
l_digits = loss_digits(output_digit, label_digit)
l_odd_even = loss_odd_even(output_odd_even, label_odd_even)
# Combine the loss of each task
l_combined = (1-alpha)*l_digits + alpha*l_odd_even
l_combined.backward()
trainer.step(data.shape[0])
l_digits_ += l_digits.mean()
l_odd_even_ += l_odd_even.mean()
acc_digits.update(label_digit, output_digit.softmax())
acc_odd_even.update(label_odd_even, output_odd_even.sigmoid() > 0.5)
print("Epoch [{}], Acc Digits {:.4f} Loss Digits {:.4f}".format(
e, acc_digits.get()[1], l_digits_.asscalar()/(i+1)))
print("Epoch [{}], Acc Odd/Even {:.4f} Loss Odd/Even {:.4f}".format(
e, acc_odd_even.get()[1], l_odd_even_.asscalar()/(i+1)))
print("Epoch [{}], Testing Accuracies {}".format(e, evaluate_accuracy(net, test_data)))
def get_random_data():
idx = random.randint(0, len(test_dataset))
img = test_dataset[idx][0]
data, _, _ = test_dataset_t[idx]
data = data.as_in_context(ctx).expand_dims(axis=0)
plt.imshow(img.squeeze().asnumpy(), cmap='gray')
return data
data = get_random_data()
digit, odd_even = net(data)
digit = digit.argmax(axis=1)[0].asnumpy()
odd_even = (odd_even.sigmoid()[0] > 0.5).asnumpy()
print("Predicted digit: {}, odd: {}".format(digit, odd_even))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Parameters
Step2: Data
Step3: We assign the transform to the original dataset
Step4: We load the datasets DataLoaders
Step5: Multi-task Network
Step6: We can use two different losses, one for each output
Step7: We create and initialize the network
Step8: Evaluate Accuracy
Step9: Training Loop
Step10: Testing
|
6,538
|
<ASSISTANT_TASK:>
Python Code:
print("Happy Birthday to you!")
print("Happy Birthday to you!")
print("Happy Birthday, dear Emily.")
print("Happy Birthday to you!")
def happy_birthday_to_emily(): # Function definition
    """Print a birthday song to Emily."""
print("Happy Birthday to you!")
print("Happy Birthday to you!")
print("Happy Birthday, dear Emily.")
print("Happy Birthday to you!")
# function definition:
def happy_birthday_to_emily(): # Function definition
    """Print a birthday song to Emily."""
print("Happy Birthday to you!")
print("Happy Birthday to you!")
print("Happy Birthday, dear Emily.")
print("Happy Birthday to you!")
# function call:
print('Function call 1')
happy_birthday_to_emily()
print()
# We can call the function as many times as we want (but we define it only once)
print('Function call 2')
happy_birthday_to_emily()
print()
print('Function call 3')
happy_birthday_to_emily()
print()
# This will not call the function
print('This is not a function call')
happy_birthday_to_emily
def new_line():
    """Print a new line."""
print()
def two_new_lines():
    """Print two new lines."""
new_line()
new_line()
print("Printing a single line...")
new_line()
print("Printing two lines...")
two_new_lines()
print("Printed two lines")
help(happy_birthday_to_emily)
type(happy_birthday_to_emily)
# function definition with using the parameter `name'
def happy_birthday(name):
Print a birthday song with the "name" of the person inserted.
print("Happy Birthday to you!")
print("Happy Birthday to you!")
print(f"Happy Birthday, dear {name}.")
print("Happy Birthday to you!")
# function call using specifying the value of the argument
happy_birthday("James")
my_name="James"
happy_birthday(my_name)
happy_birthday()
def multiply(x, y):
    """Multiply two numeric values."""
result = x * y
print(result)
multiply(2020,5278238)
multiply(2,3)
def multiply(x, y, third_number=1): # x and y are positional parameters, third_number is a keyword parameter
    """Multiply two or three numbers and print the result."""
result=x*y*third_number
print(result)
multiply(2,3) # We only specify values for the positional parameters
multiply(2,3,third_number=4) # We specify values for both the positional parameters, and the keyword parameter
multiply(3)
def multiply(x, y):
    """Multiply two numbers and return the result."""
multiplied = x * y
return multiplied
#here we assign the returned value to variable z
result = multiply(2, 5)
print(result)
print(multiply(30,20))
def multiply_no_return(x, y):
    """Multiply two numbers without returning the result."""
result = x * y
is_this_a_result = multiply_no_return(2,3)
print(is_this_a_result)
def calculate(x,y):
    """Calculate product and sum of two numbers."""
product = x * y
summed = x + y
#we return a tuple of values
return product, summed
# the function returned a tuple and we unpack it to var1 and var2
var1, var2 = calculate(10,5)
print("product:",var1,"sum:",var2)
#this will assign `var` to a tuple:
var = calculate(10,5)
print(var)
#this will generate an error
var1, var2, var3 = calculate(10,5)
def sum_and_diff_len_strings(string1, string2):
    """Return the sum of and difference between the lengths of two strings."""
sum_strings = len(string1) + len(string2)
diff_strings = len(string1) - len(string2)
return sum_strings, diff_strings
sum_strings, diff_strings = sum_and_diff_len_strings("horse", "dog")
print("Sum:", sum_strings)
print("Difference:", diff_strings)
def my_function(param1, param2):
    """This is a reST style.

    :param param1: this is a first param
    :param param2: this is a second param
    :returns: this is a description of what is returned
    """
return
def is_even(p):
    """Check whether a number is even."""
if p % 2 == 1:
return False
else:
return True
input_value = 2
expected_output = True
actual_output = is_even(input_value)
assert actual_output == expected_output, f'expected {expected_output}, got {actual_output}'
def is_even(p):
    """Check whether a number is even."""
if p % 2 == 1:
return False
else:
return False
input_value = 2
expected_output = True
actual_output = is_even(input_value)
assert actual_output == expected_output, f'expected {expected_output}, got {actual_output}'
from utils_chapter11 import happy_birthday
happy_birthday('George')
from utils_chapter11 import multiply
multiply(1,2)
from utils_chapter11 import is_even
is_it_even = is_even(5)
print(is_it_even)
def setx():
    """Set the value of a variable to 1."""
x = 1
setx()
print(x)
def setx():
    """Set the value of a variable to 1."""
x = 1
return x
setx()
print(x)
x = 0
def setx():
    """Set the value of a variable to 1."""
x = 1
setx()
print(x)
x = 1
def getx():
    """Print the value of a variable x."""
print(x)
getx()
a=3
b=2
def setb():
    """Set the value of a variable b to 11."""
b=11
c=20
print("Is 'a' defined locally in the function:", 'a' in locals())
print("Is 'b' defined locally in the function:", 'b' in locals())
print("Is 'b' defined globally:", 'b' in globals())
setb()
print("Is 'a' defined globally:", 'a' in globals())
print("Is 'b' defined globally:", 'b' in globals())
print("Is 'c' defined globally:", 'c' in globals())
def setb_again():
    """Set the value of a variable to 3."""
b=3
print("in 'setb_again' b =", b)
def setb():
    """Set the value of a variable b to 2."""
b=2
setb_again()
print("in 'setb' b =", b)
b=1
setb()
print("global b =", b)
# your code here
# function to modify:
def multiply(x, y, third_number=1):
    """Multiply two or three numbers and print the result."""
result=x*y*third_number
print(result)
def new_line():
    """Print a new line."""
print()
# your code here
def happy_birthday_to_you():
# your code here
# original function - replace the print statements by the happy_birthday_to_you() function:
def happy_birthday(name):
Print a birthday song with the "name" of the person inserted.
print("Happy Birthday to you!")
print("Happy Birthday to you!")
print("Happy Birthday, dear " + name + ".")
print("Happy Birthday to you!")
def multiply(x, y, third_number=1):
    """Multiply two or three numbers and print the result."""
result=x*y*third_number
return result
print(multiply(1+1,6-2))
print(multiply(multiply(4,2),multiply(2,5)))
print(len(str(multiply(10,100))))
def switch_two_values(x,y):
# your code here
a='orange'
b='apple'
a,b = switch_two_values(a,b) # `a` should contain "apple" after this call, and `b` should contain "orange"
print(a,b)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: This could be the purpose of a function
Step4: If we execute the code above, we don't get any output. That's because we only told Python
Step7: 1.3.2 Calling a function from within another function
Step8: You can do the same tricks that we learnt to apply on the built-in functions, like asking for help or for a function type
Step10: The help we get on a function will become more interesting once we learn about function inputs and outputs ;-)
Step11: We can also store the name in a variable
Step12: If we forgot to specify the name, we get an error
Step14: Functions can have multiple parameters. We can for example multiply two numbers in a function (using the two parameters x and y) and then call the function by giving it two arguments
Step16: 1.4.2 Positional vs keyword parameters and arguments
Step17: If we do not specify a value for a positional parameter, the function call will fail (with a very helpful error message)
Step19: 1.5 Output
Step20: We can also print the result directly (without assigning it to a variable), which gives us the same effect as using the print statements we used before
Step22: If we assign the result to a variable, but do not use the return statement, the function cannot return it. Instead, it returns None (as you can try out below).
Step24: Returning multiple values
Step25: Make sure you actually save your 2 values into 2 variables, or else you end up with errors or unexpected behavior
Step27: Saving the resulting values in different variables can be useful when you want to use them in different places in your code
Step29: 1.6 Documenting your functions with docstrings
Step32: You can see that this docstring describes the function goal, its parameters, its outputs, and the errors it raises.
Step33: If the function output is what you expect, Python will show nothing.
Step35: However, when the actual output is different from what we expected, we got an error. Let's say we made a mistake in writing the function.
Step36: 1.8 Storing a function in a Python module
Step38: 2. Variable scope
Step40: Even when we return x, it does not exist outside of the function
Step42: Also consider this
Step44: In fact, this code has produced two completely unrelated x's!
Step46: You can use two built-in functions in Python when you are unsure whether a variable is local or global. The function locals() returns a list of all local variables, and the function globals() - a list of all global variables. Note that there are many non-interesting system variables that these functions return, so in practice it is best to check for membership with the in operator. For example
Step49: Finally, note that the local context stays local to the function, and is not shared even with other functions called within a function, for example
Step50: We call the function setb() from the global context, and we call the function setb_again() from the context of the function setb(). The variable b in the function setb_again() is set to 3, but this does not affect the value of this variable in the function setb() which is still 2. And as we saw before, the changes in setb() do not influence the value of the global variable (b=1).
Step52: Exercise 2
Step54: Exercise 3
Step56: Exercise 4
Step58: Exercise 5
Step59: Exercise 6
|
6,539
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import os
import shutil
import tensorflow as tf
from google.cloud import aiplatform
from google.cloud import bigquery
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value
from matplotlib import pyplot as plt
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.models import Sequential
print(tf.__version__)
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# For Bash Code
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
%%bash
gcloud config set project $PROJECT
bq = bigquery.Client()
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
bq.create_dataset(dataset) # will fail if dataset already exists
print("Dataset created.")
except:
print("Dataset already exists.")
dataset = bigquery.Dataset(bq.dataset("taxifare"))
table_ref = dataset.table("traffic_realtime")
SCHEMA = [
bigquery.SchemaField("trips_last_5min", "INTEGER", mode="REQUIRED"),
bigquery.SchemaField("time", "TIMESTAMP", mode="REQUIRED"),
]
table = bigquery.Table(table_ref, schema=SCHEMA)
try:
bq.create_table(table)
print("Table created.")
except:
print("Table already exists.")
%%bigquery
SELECT
*
FROM
`taxifare.traffic_realtime`
ORDER BY
time DESC
LIMIT 10
# TODO 2a. Write a function to take most recent entry in `traffic_realtime` table and add it to instance.
def add_traffic_last_5min(instance):
bq = bigquery.Client()
    query_string = """
    TODO: Your code goes here
    """
trips = bq.query(query_string).to_dataframe()['trips_last_5min'][0]
instance['traffic_last_5min'] = # TODO: Your code goes here.
return instance
add_traffic_last_5min(instance={'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07})
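# Illustrative only -- the TODOs above are the exercise. One plausible shape
# for query_string is a latest-row lookup, e.g.:
#   SELECT trips_last_5min
#   FROM `taxifare.traffic_realtime`
#   ORDER BY time DESC
#   LIMIT 1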
# TODO 2b. Write code to call prediction on instance using realtime traffic info.
#Hint: Look at this sample https://github.com/googleapis/python-aiplatform/blob/master/samples/snippets/predict_custom_trained_model_sample.py
ENDPOINT_ID = # TODO: Copy the `ENDPOINT_ID` from the deployment in the previous lab.
api_endpoint = f'{REGION}-aiplatform.googleapis.com'
# The AI Platform services require regional API endpoints.
client_options = {"api_endpoint": api_endpoint}
# Initialize client that will be used to create and send requests.
# This client only needs to be created once, and can be reused for multiple requests.
client = aiplatform.gapic.PredictionServiceClient(client_options=client_options)
instance = {'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07}
# The format of each instance should conform to the deployed model's prediction input schema.
instance_dict = # TODO: Your code goes here.
instance = json_format.ParseDict(instance_dict, Value())
instances = [instance]
endpoint = client.endpoint_path(
project=PROJECT, location=REGION, endpoint=ENDPOINT_ID
)
response = # TODO: Your code goes here.
# The predictions are a google.protobuf.Value representation of the model's predictions.
print(" prediction:", # TODO: Your code goes here.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Re-train our model with trips_last_5min feature
Step2: Next, we create a table called traffic_realtime and set up the schema.
Step3: Launch Streaming Dataflow Pipeline
Step5: Make predictions from the new data
Step6: The traffic_realtime table is updated in realtime using Cloud Pub/Sub and Dataflow so, if you run the cell below periodically, you should see the traffic_last_5min feature added to the instance and change over time.
Step7: Finally, we'll use the python api to call predictions on an instance, using the realtime traffic information in our prediction. Just as above, you should notice that our resulting predicitons change with time as our realtime traffic information changes as well.
|
6,540
|
<ASSISTANT_TASK:>
Python Code:
import rebound
sim = rebound.Simulation()
sim.add(m=1.) # free floating particle
sim.integrator = "leapfrog"
sim.dt = 0.01
import reboundx
rebx = reboundx.Extras(sim)
sto = rebx.load_force("stochastic_forces")
rebx.add_force(sto)
sim.particles[0].params["kappa_x"] = 5.0
sim.particles[0].params["kappa_y"] = 1.0
sim.particles[0].params["tau_kappa_x"] = 10.0
sim.particles[0].params["tau_kappa_y"] = 1.0
import numpy as np
times = np.linspace(0.,100, 1000)
xys = np.zeros((len(times),2))
vxvys = np.zeros((len(times),2))
forces = np.zeros((len(times),2))
for i, t in enumerate(times):
sim.integrate(t)
xys[i] = [sim.particles[0].x, sim.particles[0].y]
vxvys[i] = [sim.particles[0].vx, sim.particles[0].vy]
try:
forces[i] = [sim.particles[0].params["stochastic_force_x"], sim.particles[0].params["stochastic_force_y"]]
except:
pass # not set before first timestep
import matplotlib.pyplot as plt
fig, ax = plt.subplots(2,1, sharex=True)
#ax.set_aspect("equal")
ax[0].set_ylabel("position")
ax[0].plot(times, xys[:,0],label="x")
ax[0].plot(times, xys[:,1],label="y")
ax[1].set_xlabel("time")
ax[1].set_ylabel("velocity")
ax[1].plot(times, vxvys[:,0])
ax[1].plot(times, vxvys[:,1])
ax[0].legend();
fig, ax = plt.subplots(1,1)
ax.set_xlabel("time")
ax.set_ylabel("force")
p=ax.plot(times, forces[:,0], label="$F_x$")
ax.hlines(np.std(forces[:,0]), xmin=times[0], xmax=times[-1],zorder=4, color=p[0].get_color(), ls="--", label="$\sigma_{F,x}$")
p=ax.plot(times, forces[:,1], label="$F_y$")
ax.hlines(np.std(forces[:,1]), xmin=times[0], xmax=times[-1],zorder=4, color=p[0].get_color(), ls="--", label="$\sigma_{F,y}$")
ax.legend();
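# Sketch: a quick consistency check that F_x decorrelates on roughly the
# requested tau_kappa_x ~ 10 time units (normalized autocorrelation of the
# sampled forces; a rough diagnostic, not part of the original example).
fx = forces[:, 0] - forces[:, 0].mean()
acf = np.correlate(fx, fx, mode="full")[len(fx) - 1:]
acf = acf / acf[0]
spacing = times[1] - times[0]
print("ACF first drops below 1/e at lag ~", spacing * np.argmax(acf < np.exp(-1)))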
sim.rand_seed = 123
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will be using the Leap-Frog integrator with a fixed timestep. It's important to point out that the default IAS15 integrator is not well suited for stochastic forces because it automatically reduces the timestep if it doesn't achieve an accuracy near machine precision. Because the stochastic forces are noisy by design, it might never converge or reduce the timestep to a very small value.
Step2: Next, we add the stochastic_forces module in REBOUNDx
Step3: We can now turn on stochastic forces in the $x$ and $y$ direction for a particle. We do this by setting the particle's $\kappa_x$ (kappa_x) and $\kappa_y$ (kappa_y) fields to a finite value. These parameter determine the standard deviation of the stochastic forces in the given direction. The strengths can be different in the different directions, as shown in this example
Step4: We also need to specify the auto-correlation time of the stochastic forces. Note that the timestep needs to be smaller than the correlation time.
Step5: We can now run the simulation for some 100 time units and record the particle's position, velocity, and stochastic forces.
Step6: Let's plot the positions and velocities as a function of time.
Step7: We can see that the velocities undergo an random walk, whereas the positions undergo an integrated random walk. With the parameters that we have chosen, the diffusion coefficient (see e.g. Rein and Papaloizou 2009) in the $x$ direction is larger than in the $y$ direction, resulting in a faster random walk in the $x$ direction.
Step8: The stochastic_forces module uses the internal pseudo random number generator of REBOUND. This means that by default, each simulation starts with a random seed. If you want to generate reproducible results, you can manually set the seed to the same value every time you run the simulation.
|
6,541
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from sklearn.decomposition import PCA
import pandas as pd
from pylab import *
data = pd.read_hdf("swr_modth.h5")
figure()
plot(data)
xlabel("Time lag (ms)")
ylabel("Modulation (z-scored)")
show()
print(data.shape)
n = 6
pca = PCA(n_components = n)
new_data = pca.fit_transform(data.values.T) # data needs to be inverted here depending of how you do the PCA
X = pca.components_.transpose()
times = data.index.values
figure()
plot(times, X)
xlabel("Time lag (ms)")
ylabel("pc")
show()
def derivative(x, f):
'''
Compute the derivative of a time serie
Used for jPCA
'''
from scipy.stats import linregress
fish = np.zeros(len(f))
slopes_ = []
tmpf = np.hstack((f[0],f,f[-1])) # not circular
binsize = x[1]-x[0]
tmpx = np.hstack((np.array([x[0]-binsize]),x,np.array([x[-1]+binsize])))
# plot(tmpx, tmpf, 'o')
# plot(x, f, '+')
for i in range(len(f)):
slope, intercept, r_value, p_value, std_err = linregress(tmpx[i:i+3], tmpf[i:i+3])
slopes_.append(slope)
# plot(tmpx[i:i+3], tmpx[i:i+3]*slope+intercept, '-')
return np.array(slopes_)/binsize
dX = np.zeros_like(X)
for i in range(n):
dX[:,i] = derivative(times, X[:,i])
def buildHMap(n, ):
'''
build the H mapping for a given n
used for the jPCA
'''
from scipy.sparse import lil_matrix
M = np.zeros((n,n), dtype = np.int)
M[np.triu_indices(n,1)] = np.arange(1,int(n*(n-1)/2)+1)
M = M - M.transpose()
m = np.vstack(M.reshape(n*n))
k = np.vstack(M[np.triu_indices(n,1)]).astype('int')
H = lil_matrix( (len(m), len(k)), dtype = np.float16)
H = np.zeros( (len(m), len(k) ))
# first column
for i in k.flatten():
# positive
H[np.where(m == i)[0][0],i-1] = 1.0
# negative
H[np.where(m == -i)[0][0],i-1] = -1.0
return H
H = buildHMap(n)
Xtilde = np.zeros( (X.shape[0]*X.shape[1], X.shape[1]*X.shape[1]) )
for i, j in zip( (np.arange(0,n**2,n) ), np.arange(0, n*X.shape[0], X.shape[0]) ):
Xtilde[j:j+X.shape[0],i:i+X.shape[1]] = X
dXv = np.vstack(dX.transpose().reshape(X.shape[0]*X.shape[1]))
XtH = np.dot(Xtilde, H)
k, residuals, rank, s = np.linalg.lstsq(XtH, dXv, rcond = None)
m = np.dot(H, k)
Mskew = m.reshape(n,n).transpose()
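# Sanity check (sketch): Mskew should be antisymmetric by construction,
# i.e. Mskew == -Mskew.T, so its eigenvalues come in purely imaginary
# conjugate pairs.
print(np.allclose(Mskew, -Mskew.T))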
evalues, evectors = np.linalg.eig(Mskew)
index = np.argsort(np.array([np.linalg.norm(i) for i in evalues]).reshape(int(n/2),2)[:,0])
evectors = evectors.transpose().reshape(int(n/2),2,n)
u = np.vstack([np.real(evectors[index[-1]][0] + evectors[index[-1]][1]),
np.imag(evectors[index[-1]][0] - evectors[index[-1]][1])]).transpose()
rX = np.dot(X, u)
figure(figsize=(15, 5))
subplot(121)
plot(times, rX)
xlabel("Time lag (ms)")
subplot(122)
plot(rX[:,0], rX[:,1])
show()
score = np.dot(data.values.T, rX)
phi = np.mod(np.arctan2(score[:,1], score[:,0]), 2*np.pi)
figure(figsize = (10,10))
scatter(score[:,0], score[:,1], c = phi)
scatter(np.cos(phi)*np.max(score), np.sin(phi)*np.max(score), c = phi)
show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data can be loaded with pandas
Step2: These are the responses of theta-modulated thalamic neurons to hippocampal sharp-wave ripples
Step3: There are 767 neurons here with 201 time bins. The order of the data matrix is
Step4: First step is the classical PCA to reduce the dimensionality of the dataset.
Step5: We can thus work on the 6 first components of the PCA
Step6: We can plot the 6 components
Step7: Now we can compute $\dot{X}$ using the function written below
Step8: The function derivative is called for each component
Step9: Next step is to build the H mapping using this function
Step10: $\tilde{X}$ is the block diagonal matrix
Step11: We can put $\dot{X}$ in columns
Step12: Multiply $\tilde{X}$ by $H$
Step13: and solve $(\tilde{X}.H).k = \dot{X}$
Step14: Do $m = H.k$ to get $M_{skew}$
Step15: Construct the two vectors for projection with $M_{skew}$
Step16: and get the jpc vectors as $X_r = X.u$
Step17: We can now look at the two jpc components
Step18: We can now project the data on rX to find the swr angle
Step19: We can now represent the sharp-wave phase for all neurons as
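To summarize Steps 13–15 in one place (a sketch of the math only, in the notation above): the least-squares problem is $\hat{k} = \arg\min_k \| (\tilde{X} H)\, k - \dot{X} \|_2^2$, the skew-symmetric matrix is recovered as $m = H\hat{k}$ and $M_{skew} = \mathrm{reshape}(m)^{\top}$, with $M_{skew}^{\top} = -M_{skew}$ guaranteed by the construction of $H$; this is exactly what the `np.linalg.lstsq(XtH, dXv)` call in the code computes.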
|
6,542
|
<ASSISTANT_TASK:>
Python Code:
# Imports
import numpy as np
import gurobipy as gbp
import datetime as dt
# Constants
Aij = np.random.randint(5, 50, 400)
Aij = Aij.reshape(20,20)
AijSum = np.sum(Aij)
Cj = np.random.randint(10, 20, 20)
CjSum = np.sum(Cj)
Bi = np.random.randint(10, 20, 20)
BiSum = np.sum(Bi)
# Matrix Shape
rows = range(len(Aij))
cols = range(len(Aij[0]))
def GbpPrimCan():
# Instantiate Model
mPrimal_Canonical_GUROBI = gbp.Model(' -- Canonical Primal Linear Programming Problem -- ')
# Set Focus to Optimality
gbp.setParam('MIPFocus', 2)
# Decision Variables
desc_var = []
for dest in cols:
desc_var.append([])
desc_var[dest].append(mPrimal_Canonical_GUROBI.addVar(vtype=gbp.GRB.CONTINUOUS,
name='y'+str(dest+1)))
# Update Model
mPrimal_Canonical_GUROBI.update()
#Objective Function
mPrimal_Canonical_GUROBI.setObjective(gbp.quicksum(Cj[dest]*desc_var[dest][0]
for dest in cols),
gbp.GRB.MINIMIZE)
# Constraints
for orig in rows:
mPrimal_Canonical_GUROBI.addConstr(gbp.quicksum(Aij[orig][dest]*desc_var[dest][0]
for dest in cols) - Bi[orig] >= 0)
# Optimize
try:
mPrimal_Canonical_GUROBI.optimize()
except Exception as e:
        print(' ################################################################')
        print(' < ISSUE : ', e, ' >')
        print(' ################################################################')
# Write LP file
mPrimal_Canonical_GUROBI.write('LP.lp')
    print('\n*************************************************************************')
    print(' | Decision Variables')
    for v in mPrimal_Canonical_GUROBI.getVars():
        print(' | ', v.VarName, '=', v.x)
    print('*************************************************************************')
    val = mPrimal_Canonical_GUROBI.objVal
    print(' | Objective Value ------------------ ', val)
    print(' | Aij Sum -------------------------- ', AijSum)
    print(' | Cj Sum --------------------------- ', CjSum)
    print(' | Bi Sum --------------------------- ', BiSum)
    print(' | Matrix Dimensions ---------------- ', Aij.shape)
    print(' | Date/Time ------------------------ ', dt.datetime.now())
    print('*************************************************************************')
print('-- Gurobi Canonical Primal Linear Programming Problem --')
try:
    GbpPrimCan()
    print('\nJames Gaboardi, 2015')
except Exception as e:
    print(' ################################################################')
    print(' < ISSUE : ', e, ' >')
    print(' ################################################################')
def GbpDualCan():
# Instantiate Model
mDual_Canonical_GUROBI = gbp.Model(' -- Canonical Dual Linear Programming Problem -- ')
# Set Focus to Optimality
gbp.setParam('MIPFocus', 2)
# Decision Variables
desc_var = []
    for orig in rows:
        desc_var.append([])
        desc_var[orig].append(mDual_Canonical_GUROBI.addVar(vtype=gbp.GRB.CONTINUOUS,
                              name='u'+str(orig+1)))
# Update Model
mDual_Canonical_GUROBI.update()
#Objective Function
mDual_Canonical_GUROBI.setObjective(gbp.quicksum(Bi[orig]*desc_var[orig][0]
for orig in rows),
gbp.GRB.MAXIMIZE)
# Constraints
for dest in cols:
        mDual_Canonical_GUROBI.addConstr(gbp.quicksum(Aij[orig][dest]*desc_var[orig][0]
for orig in rows) - Cj[dest] <= 0)
# Optimize
try:
mDual_Canonical_GUROBI.optimize()
except Exception as e:
        print(' ################################################################')
        print(' < ISSUE : ', e, ' >')
        print(' ################################################################')
# Write LP file
mDual_Canonical_GUROBI.write('LP.lp')
    print('\n*************************************************************************')
    print(' | Decision Variables')
    for v in mDual_Canonical_GUROBI.getVars():
        print(' | ', v.VarName, '=', v.x)
    print('*************************************************************************')
    val = mDual_Canonical_GUROBI.objVal
    print(' | Objective Value ------------------ ', val)
    print(' | Aij Sum -------------------------- ', AijSum)
    print(' | Cj Sum --------------------------- ', CjSum)
    print(' | Bi Sum --------------------------- ', BiSum)
    print(' | Matrix Dimensions ---------------- ', Aij.shape)
    print(' | Date/Time ------------------------ ', dt.datetime.now())
    print('*************************************************************************')
print('-- Gurobi Canonical Dual Linear Programming Problem --')
try:
    GbpDualCan()
    print('\nJames Gaboardi, 2015')
except Exception as e:
    print(' ################################################################')
    print(' < ISSUE : ', e, ' >')
    print(' ################################################################')
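# Sanity check (a sketch, assuming both problems are feasible): by LP strong
# duality the primal minimum and the dual maximum should report the same
# 'Objective Value' within numerical tolerance. Compare the two printouts from
# the same run, since Aij, Bi and Cj are regenerated randomly on each execution.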
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <font size='7' face='Times New Roman'><b>1. <u>Primal</u></b></font>
Step2: <font size='7' face='Times New Roman'><b>2. <u>Dual</u></b></font>
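For reference, the canonical pair implemented above (a standard formulation, sketched here with $A$, $b$, $c$ standing for `Aij`, `Bi`, `Cj` in the code) is: Primal $\min_{y \ge 0} c^{\top} y$ s.t. $A y \ge b$; Dual $\max_{u \ge 0} b^{\top} u$ s.t. $A^{\top} u \le c$. By strong duality the two optimal objective values coincide when both problems are feasible.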
|
6,543
|
<ASSISTANT_TASK:>
Python Code:
# determine the specific enthalpies at the principal states of the cycle.
import seuif97 as if97
# State 1 is superheated vapor at 8MPa, 480C.
p1=8
t1=480
h1 = if97.pt2h(p1,t1)
s1 =if97.pt2s(p1,t1)
print(h1,s1)
# State 2 is fixed by p2 =2.0MPa and the specific entropy s2, which is the same as that of state 1
p2=2.0
s2=s1
h2 = if97.ps2h(p2,s2)
t2=if97.ps2t(p2,s2)
print(h2,t2)
# State 3 is fixed by p2 =0.7MPa and the specific entropy s2, which is the same as that of state 1
p3=0.7
s3=s1
h3 = if97.ps2h(p3,s3)
t3=if97.ps2t(p3,s3)
print(h3,t3)
# State 4 is superheated vapor at 0.7 MPa, 440C.
p4=0.7
t4=440
h4 = if97.pt2h(p4,t4) # in kJ/kg
s4 =if97.pt2s(p4,t4)
print(h4,s4)
# State 5 : p5 =0.3MPa and s5 = s4
p5=0.3
s5=s4
h5 =if97.ps2h(p5,s5)
t5=if97.ps2t(p5,s5)
print(h5,t5)
# State 6: p6=0.008MPA, s6= s4
p6=0.008
s6=s4
h6 =if97.ps2h(p6,s6)
t6 =if97.ps2t(p6,s6)
print(h6,t6)
# State 7 : p7=0.008MP Saturated water at the condenser exit
p7=0.008
t7=if97.px2t(p7,0)
h7=if97.px2h(p7,0)
s7=if97.px2s(p7,0)
v7=if97.px2v(p7,0)
print(t7,h7,s7,v7)
# State 8 : p8=0.3MP at the exit of the first pump
p8=0.3
s8=s7
h8=if97.ps2h(p8,s8)
t8=if97.ps2t(p8,s8)
print(h8,t8)
# state 9 : The liquid leaving the open feedwater heater at is saturated liquid at 0.3 MPa
p9=0.3
t9=if97.px2t(p9,0)
h9=if97.px2h(p9,0)
s9=if97.px2s(p9,0)
print(t9,h9,s9)
# State 10 p=8.0Mpa, at the exit of the second pump,
p10=8.0
s10=s9
h10 =if97.ps2h(p10,s10)
t10 =if97.ps2t(p10,s10)
print(h10,t10)
# State 11: the feedwater exiting the closed heater
p11=8.0
t11=205
h11 = if97.pt2h(p11,t11) # in kJ/kg
s11 =if97.pt2s(p11,t11)
print(h11,s11)
# State 12: the condensate leaving the closed heater is saturated at 2 MPa.
p12=2.0
t12=if97.px2t(p12,0)
h12=if97.px2h(p12,0)
s12=if97.px2s(p12,0)
print(t12,h12,s12)
# State 13: the fluid passing through the trap undergoes a throttling process
p13=0.3
h13=h12
s13=if97.ph2s(p13,h13)
t13=if97.ph2t(p13,h13)
print(s13,t13)
ydash = (h11-h10)/(h2-h12) # the fraction of the total flow diverted to the closed heater
ydashdash = ((1-ydash)*h8+ydash*h13-h9)/(h8-h5) # the fraction of the total flow diverted to the open heater
print(ydash,ydashdash)
# Part(a)
wt1dot = (h1-h2) + (1-ydash)*(h2-h3) # The work developed by the first turbine per unit of mass entering, in kJ/kg
wt2dot = (1-ydash)*(h4-h5) + (1-ydash-ydashdash)*(h5-h6) # The work developed by the second turbine per unit of mass, in kJ/kg
wp1dot = (1-ydash-ydashdash)*(h8-h7) # The work for the first pump per unit of mass, in kJ/kg
wp2dot = h10-h9 # The work for the second pump per unit of mass, in kJ/kg
qindot = (h1-h11) + (1-ydash)*(h4-h3) # The total heat added expressed on the basis of a unit of mass entering the first turbine
eta = (wt1dot+wt2dot-wp1dot-wp2dot)/qindot # thermal efficiency
# Result
print(' The thermal efficiency is {:>.2f}%'.format(100.0*eta))
# Part(b)
Wcycledot = 100.0 # the net power output of the cycle in MW
m1dot = (Wcycledot*3600*10**3)/(wt1dot+wt2dot-wp1dot-wp2dot)
# Result
print(' The mass flow rate of the steam entering the first turbine is {:>.2f} kg/h'.format(m1dot))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The schematic diagram of the cycle is labeled with the fractions of the total flow into the turbine that remain
Step2: SOLUTION
Step3: (b) The mass flow rate entering the first turbine can be determined using the given value of the net power output. Thus
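In symbols (a sketch matching the variable names in the code): $\eta = \dfrac{\dot{W}_{t1} + \dot{W}_{t2} - \dot{W}_{p1} - \dot{W}_{p2}}{\dot{q}_{in}}$ and $\dot{m}_1 = \dfrac{\dot{W}_{cycle} \times 3600 \times 10^{3}}{w_{t1} + w_{t2} - w_{p1} - w_{p2}}\ \mathrm{kg/h}$, with the work and heat terms in kJ/kg and $\dot{W}_{cycle}$ in MW.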
|
6,544
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'nicam16-9s', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.external_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Aod Plus Ccn
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 13.3. External Mixture
Step59: 14. Optical Radiative Properties --> Radiative Scheme
Step60: 14.2. Shortwave Bands
Step61: 14.3. Longwave Bands
Step62: 15. Optical Radiative Properties --> Cloud Interactions
Step63: 15.2. Twomey
Step64: 15.3. Twomey Minimum Ccn
Step65: 15.4. Drizzle
Step66: 15.5. Cloud Lifetime
Step67: 15.6. Longwave Bands
Step68: 16. Model
Step69: 16.2. Processes
Step70: 16.3. Coupling
Step71: 16.4. Gas Phase Precursors
Step72: 16.5. Scheme Type
Step73: 16.6. Bulk Scheme Species
|
6,545
|
<ASSISTANT_TASK:>
Python Code:
!pip install --user --upgrade pip
!pip install kfp --upgrade --user --quiet
# confirm the kfp sdk
! pip show kfp
import kfp
import kfp.components as comp
import kfp.dsl as dsl
from kfp.components import InputPath, OutputPath
from typing import NamedTuple
# download data step
def download_data(download_link: str, data_path: OutputPath(str)):
import zipfile
import sys, subprocess;
subprocess.run(["python", "-m", "pip", "install", "--upgrade", "pip"])
subprocess.run([sys.executable, "-m", "pip", "install", "wget"])
import wget
import os
if not os.path.exists(data_path):
os.makedirs(data_path)
# download files
wget.download(download_link.format(file='train'), f'{data_path}/train_csv.zip')
wget.download(download_link.format(file='test'), f'{data_path}/test_csv.zip')
with zipfile.ZipFile(f"{data_path}/train_csv.zip","r") as zip_ref:
zip_ref.extractall(data_path)
with zipfile.ZipFile(f"{data_path}/test_csv.zip","r") as zip_ref:
zip_ref.extractall(data_path)
return(print('Done!'))
# load data
def load_data(data_path: InputPath(str),
load_data_path: OutputPath(str)):
# import Library
import sys, subprocess;
subprocess.run(["python", "-m", "pip", "install", "--upgrade", "pip"])
subprocess.run([sys.executable, '-m', 'pip', 'install','pandas'])
# import Library
import os, pickle;
import pandas as pd
import numpy as np
#importing the data
# Data Path
train_data_path = data_path + '/train.csv'
test_data_path = data_path + '/test.csv'
# Loading dataset into pandas
train_df = pd.read_csv(train_data_path)
test_df = pd.read_csv(test_data_path)
# join train and test together
ntrain = train_df.shape[0]
ntest = test_df.shape[0]
all_data = pd.concat((train_df, test_df)).reset_index(drop=True)
print("all_data size is : {}".format(all_data.shape))
#creating the preprocess directory
os.makedirs(load_data_path, exist_ok = True)
#Save the combined_data as a pickle file to be used by the preprocess component.
with open(f'{load_data_path}/all_data', 'wb') as f:
pickle.dump((ntrain, all_data), f)
return(print('Done!'))
# preprocess data
def preprocess_data(load_data_path: InputPath(str),
preprocess_data_path: OutputPath(str)):
# import Library
import sys, subprocess;
subprocess.run(["python", "-m", "pip", "install", "--upgrade", "pip"])
subprocess.run([sys.executable, '-m', 'pip', 'install','pandas'])
subprocess.run([sys.executable, '-m', 'pip', 'install','scikit-learn'])
import os, pickle;
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
#loading the train data
with open(f'{load_data_path}/all_data', 'rb') as f:
ntrain, all_data = pickle.load(f)
# split features and label
all_data_X = all_data.drop('label', axis=1)
all_data_y = all_data.label
# Reshape image in 3 dimensions (height = 28px, width = 28px , channel = 1)
all_data_X = all_data_X.values.reshape(-1,28,28,1)
# Normalize the data
all_data_X = all_data_X / 255.0
#Get the new dataset
X = all_data_X[:ntrain].copy()
y = all_data_y[:ntrain].copy()
# split into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)
#creating the preprocess directory
os.makedirs(preprocess_data_path, exist_ok = True)
#Save the train_data as a pickle file to be used by the modelling component.
with open(f'{preprocess_data_path}/train', 'wb') as f:
pickle.dump((X_train, y_train), f)
#Save the test_data as a pickle file to be used by the predict component.
with open(f'{preprocess_data_path}/test', 'wb') as f:
pickle.dump((X_test, y_test), f)
return(print('Done!'))
def modeling(preprocess_data_path: InputPath(str),
model_path: OutputPath(str)):
# import Library
import sys, subprocess;
subprocess.run(["python", "-m", "pip", "install", "--upgrade", "pip"])
subprocess.run([sys.executable, '-m', 'pip', 'install','pandas'])
subprocess.run([sys.executable, '-m', 'pip', 'install','tensorflow'])
import os, pickle;
import numpy as np
import tensorflow as tf
from tensorflow import keras, optimizers
from tensorflow.keras.metrics import SparseCategoricalAccuracy
from tensorflow.keras.losses import SparseCategoricalCrossentropy
from tensorflow.keras import layers
#loading the train data
with open(f'{preprocess_data_path}/train', 'rb') as f:
train_data = pickle.load(f)
# Separate the X_train from y_train.
X_train, y_train = train_data
#initializing the classifier model with its input, hidden and output layers
hidden_dim1=56
hidden_dim2=100
DROPOUT=0.5
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(filters = hidden_dim1, kernel_size = (5,5),padding = 'Same',
activation ='relu'),
tf.keras.layers.Dropout(DROPOUT),
tf.keras.layers.Conv2D(filters = hidden_dim2, kernel_size = (3,3),padding = 'Same',
activation ='relu'),
tf.keras.layers.Dropout(DROPOUT),
tf.keras.layers.Conv2D(filters = hidden_dim2, kernel_size = (3,3),padding = 'Same',
activation ='relu'),
tf.keras.layers.Dropout(DROPOUT),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation = "softmax")
])
model.build(input_shape=(None,28,28,1))
#Compiling the classifier model with Adam optimizer
model.compile(optimizers.Adam(learning_rate=0.001),
loss=SparseCategoricalCrossentropy(),
metrics=SparseCategoricalAccuracy(name='accuracy'))
# model fitting
history = model.fit(np.array(X_train), np.array(y_train),
validation_split=.1, epochs=1, batch_size=64)
#loading the X_test and y_test
with open(f'{preprocess_data_path}/test', 'rb') as f:
test_data = pickle.load(f)
# Separate the X_test from y_test.
X_test, y_test = test_data
# Evaluate the model and print the results
test_loss, test_acc = model.evaluate(np.array(X_test), np.array(y_test), verbose=0)
print("Test_loss: {}, Test_accuracy: {} ".format(test_loss,test_acc))
#creating the preprocess directory
os.makedirs(model_path, exist_ok = True)
#saving the model
model.save(f'{model_path}/model.h5')
def prediction(model_path: InputPath(str),
preprocess_data_path: InputPath(str),
mlpipeline_ui_metadata_path: OutputPath(str)) -> NamedTuple('conf_m_result', [('mlpipeline_ui_metadata', 'UI_metadata')]):
# import Library
import sys, subprocess;
subprocess.run(["python", "-m", "pip", "install", "--upgrade", "pip"])
subprocess.run([sys.executable, '-m', 'pip', 'install','scikit-learn'])
subprocess.run([sys.executable, '-m', 'pip', 'install','pandas'])
subprocess.run([sys.executable, '-m', 'pip', 'install','tensorflow'])
import pickle, json;
import pandas as pd
import numpy as np
from collections import namedtuple
from sklearn.metrics import confusion_matrix
from tensorflow.keras.models import load_model
#loading the X_test and y_test
with open(f'{preprocess_data_path}/test', 'rb') as f:
test_data = pickle.load(f)
# Separate the X_test from y_test.
X_test, y_test = test_data
#loading the model
model = load_model(f'{model_path}/model.h5')
# prediction
y_pred = np.argmax(model.predict(X_test), axis=-1)
# confusion matrix
cm = confusion_matrix(y_test, y_pred)
vocab = list(np.unique(y_test))
# confusion_matrix pair dataset
data = []
for target_index, target_row in enumerate(cm):
for predicted_index, count in enumerate(target_row):
data.append((vocab[target_index], vocab[predicted_index], count))
# convert confusion_matrix pair dataset to dataframe
df = pd.DataFrame(data,columns=['target','predicted','count'])
# change 'target', 'predicted' to integer strings
df[['target', 'predicted']] = (df[['target', 'predicted']].astype(int)).astype(str)
# create kubeflow metric metadata for UI
metadata = {
"outputs": [
{
"type": "confusion_matrix",
"format": "csv",
"schema": [
{
"name": "target",
"type": "CATEGORY"
},
{
"name": "predicted",
"type": "CATEGORY"
},
{
"name": "count",
"type": "NUMBER"
}
],
"source": df.to_csv(header=False, index=False),
"storage": "inline",
"labels": [
"0",
"1",
"2",
"3",
"4",
"5",
"6",
"7",
"8",
"9",
]
}
]
}
with open(mlpipeline_ui_metadata_path, 'w') as metadata_file:
json.dump(metadata, metadata_file)
conf_m_result = namedtuple('conf_m_result', ['mlpipeline_ui_metadata'])
return conf_m_result(json.dumps(metadata))
# create light weight components
download_op = comp.create_component_from_func(download_data,base_image="python:3.7.1")
load_op = comp.create_component_from_func(load_data,base_image="python:3.7.1")
preprocess_op = comp.create_component_from_func(preprocess_data,base_image="python:3.7.1")
modeling_op = comp.create_component_from_func(modeling, base_image="tensorflow/tensorflow:latest")
predict_op = comp.create_component_from_func(prediction, base_image="tensorflow/tensorflow:latest")
# create client that would enable communication with the Pipelines API server
client = kfp.Client()
# define pipeline
@dsl.pipeline(name="digit-recognizer-pipeline",
description="Performs Preprocessing, training and prediction of digits")
# Define parameters to be fed into pipeline
def digit_recognize_pipeline(download_link: str,
data_path: str,
load_data_path: str,
preprocess_data_path: str,
model_path:str
):
# Create download container.
download_container = download_op(download_link)
# Create load container.
load_container = load_op(download_container.output)
# Create preprocess container.
preprocess_container = preprocess_op(load_container.output)
# Create modeling container.
modeling_container = modeling_op(preprocess_container.output)
# Create prediction container.
predict_container = predict_op(modeling_container.output, preprocess_container.output)
# replace download_link with the repo link where the data is stored, e.g. https://github-repo/data-dir/{file}.csv.zip?raw=true
download_link = 'https://github.com/josepholaide/KfaaS/blob/main/kale/data/{file}.csv.zip?raw=true'
data_path = "/mnt"
load_data_path = "load"
preprocess_data_path = "preprocess"
model_path = "model"
pipeline_func = digit_recognize_pipeline
experiment_name = 'digit_recognizer_lightweight'
run_name = pipeline_func.__name__ + ' run'
arguments = {"download_link": download_link,
"data_path": data_path,
"load_data_path": load_data_path,
"preprocess_data_path": preprocess_data_path,
"model_path":model_path}
# Compile pipeline to generate compressed YAML definition of the pipeline.
kfp.compiler.Compiler().compile(pipeline_func,
'{}.zip'.format(experiment_name))
# Submit pipeline directly from pipeline function
run_result = client.create_run_from_pipeline_func(pipeline_func,
experiment_name=experiment_name,
run_name=run_name,
arguments=arguments
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import kubeflow pipeline libraries
Step2: Kubeflow pipeline component creation
Step3: Component 2
Step4: Component 3
Step5: Component 4
Step6: Component 5
Step7: Create Kubeflow pipeline components from the Python functions above (specifying a base image for each)
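After submission, the run can also be awaited programmatically (a sketch assuming the kfp v1 SDK, where `Client.wait_for_run_completion` and `run_result.run_id` are available):
run = client.wait_for_run_completion(run_result.run_id, timeout=3600)
print(run.run.status)  # e.g. 'Succeeded'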
|
6,546
|
<ASSISTANT_TASK:>
Python Code:
# Basic library imports
from __future__ import print_function
from __future__ import division
import warnings
warnings.filterwarnings('ignore')
warnings.simplefilter('ignore')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import os
import sys
# Use insert(0, ...) so that only the GitHub checkout of abupy is used, avoiding version mismatches with a pip-installed abupy
sys.path.insert(0, os.path.abspath('../'))
import abupy
# Use the sandbox data so the data environment matches the book
abupy.env.enable_example_env_ipython()
from abupy import AbuDoubleMaBuy, AbuDoubleMaSell, AbuSDBreak, AbuFactorBuyBreak
from abupy import AbuFactorCloseAtrNStop, AbuFactorAtrNStop, AbuFactorPreAtrNStop
from abupy import abu, ABuProgress, AbuMetricsBase, EMarketTargetType
from abupy import AbuFactorSellNDay, AbuFactorBuyWD, AbuFactorSellBreak
buy_factors = [{'class': AbuDoubleMaBuy},
{'xd': 42, 'class': AbuFactorBuyBreak},
{'xd': 21, 'class': AbuSDBreak}]
sell_factors = [
{'class': AbuDoubleMaSell},
{'xd': 21, 'class': AbuFactorSellBreak},
{'stop_loss_n': 1.0, 'stop_win_n': 3.0, 'class': AbuFactorAtrNStop},
{'class': AbuFactorPreAtrNStop, 'pre_atr_n': 1.5},
{'class': AbuFactorCloseAtrNStop, 'close_atr_n': 1.5}
]
# Use the US stocks in the sandbox as backtest targets
us_choice_symbols = ['usTSLA', 'usNOAH', 'usSFUN', 'usBIDU', 'usAAPL',
'usGOOG', 'usWUBA', 'usVIPS']
# Initial capital
cash = 3000000
def run_loo_back(choice_symbols, ps=None, n_folds=2, start=None, end=None, only_info=False):
    '''Wrap a backtest run; return the backtest result tuple and the metrics object.'''
if choice_symbols[0].startswith('us'):
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_US
else:
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN
abu_result_tuple, _ = abu.run_loop_back(cash,
buy_factors,
sell_factors,
ps,
start=start,
end=end,
n_folds=n_folds,
choice_symbols=choice_symbols)
    # Keep only each factor's unique name, i.e. the class name, dropping the parameters:
    # e.g. AbuDoubleMaBuy:fast=5,slow=60 -> AbuDoubleMaBuy
abu_result_tuple.orders_pd['buy_factor'] = abu_result_tuple.orders_pd[
'buy_factor'].apply(lambda bf: bf.split(':')[0])
ABuProgress.clear_output()
metrics = AbuMetricsBase.show_general(*abu_result_tuple, returns_cmp=only_info,
only_info=only_info,
only_show_returns=True)
return abu_result_tuple, metrics
abu_result_tuple, metrics = run_loo_back(us_choice_symbols)
metrics.plot_buy_factors()
metrics.plot_sell_factors()
# Add AbuFactorBuyWD
buy_factors = [
{'class': AbuFactorBuyWD},
{'class': AbuDoubleMaBuy},
{'xd': 42, 'class': AbuFactorBuyBreak},
{'xd': 21, 'class': AbuSDBreak}]
# Add AbuFactorSellNDay
sell_factors = [
{'class': AbuFactorSellNDay, 'sell_n': 1, 'is_sell_today': True},
{'class': AbuDoubleMaSell},
{'xd': 21, 'class': AbuFactorSellBreak},
{'stop_loss_n': 1.0, 'stop_win_n': 3.0, 'class': AbuFactorAtrNStop},
{'class': AbuFactorPreAtrNStop, 'pre_atr_n': 1.5},
{'class': AbuFactorCloseAtrNStop, 'close_atr_n': 1.5}
]
abu_result_tuple, metrics = run_loo_back(us_choice_symbols)
abu_result_tuple.orders_pd.groupby('buy_factor')['sell_type_extra'].value_counts()
'''
The hold-one-day sell strategy AbuFactorSellNDay should only apply to AbuFactorBuyWD buys.
The double-MA dead-cross sell should only apply to buys triggered by the double-MA golden cross.
The 21-day-period breakout sell should only apply to buys from the 42-day-period breakout.
'''
buy_factors = [
{'class': AbuFactorBuyWD,
'sell_factors': [{'class': AbuFactorSellNDay,
'sell_n': 1, 'is_sell_today': True}]},
{'class': AbuDoubleMaBuy,
'sell_factors': [{'fast': 5, 'slow': 60,
'class': AbuDoubleMaSell}]},
{'xd': 42, 'class': AbuFactorBuyBreak,
'sell_factors': [{'xd': 21,
'class': AbuFactorSellBreak}]},
{'xd': 21, 'class': AbuSDBreak}]
# Basic sell factors (stop-win/stop-loss, risk control & profit protection), applied to all buy factors
sell_factors = [
{'stop_loss_n': 1.0, 'stop_win_n': 3.0, 'class': AbuFactorAtrNStop},
{'class': AbuFactorPreAtrNStop, 'pre_atr_n': 1.5},
{'class': AbuFactorCloseAtrNStop, 'close_atr_n': 1.5}
]
abu_result_tuple, metrics = run_loo_back(us_choice_symbols)
abu_result_tuple.orders_pd.groupby('buy_factor')['sell_type_extra'].value_counts()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So far many buy factors have been demonstrated; one feature of abupy is that multiple buy and sell factors can be used and executed in parallel within a trading session.
Step4: With the strategies built, run a backtest on the sandbox US stock data, executing the strategies above in parallel, as follows:
Step5: The counts and proportions of fills produced by each buy factor can be visualized as follows:
Step6: Visualizing the counts and proportions of each sell factor shows that the double-MA dead-cross sell strategy (AbuDoubleMaSell) and the 21-day breakout sell strategy (AbuFactorSellBreak) account for a large share of the exits:
Step7: Although the backtest above ran without errors, the multi-factor strategy in this example was originally intended to be organized as described below:
Step8: The backtest results above are very poor; grouping the orders by buy factor and its triggering sell factor shows that every trade exited via AbuFactorSellNDay, i.e. every position was held for only one day and sold the next, as shown below:
Step10: This is certainly not the intent of the overall strategy design; in fact only the short-term rebound buy strategy AbuFactorBuyWD was meant to use the hold-one-day sell strategy, and the other buy strategies should not use it.
Step11: The organization of buy and sell factors in the code determines the intended factor pairing: AbuFactorBuyWD pairs with AbuFactorSellNDay (sell_n=1); AbuDoubleMaBuy pairs with AbuDoubleMaSell (fast=5, slow=60); AbuFactorBuyBreak (xd=42) pairs with AbuFactorSellBreak (xd=21); and AbuSDBreak (xd=21) uses only the shared stop-win/stop-loss and ATR-based sell factors, as follows:
|
6,547
|
<ASSISTANT_TASK:>
Python Code:
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
import os, json, math
import numpy as np
import shutil
import tensorflow as tf
print("TensorFlow version: ",tf.version.VERSION)
PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["BUCKET"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # SET TF ERROR LOG VERBOSITY
if PROJECT == "your-gcp-project-here":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
%%bash
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${PROJECT}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not re-create it. \n\nHere are your buckets:"
gsutil ls
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${PROJECT}
echo "\nHere are your current buckets:"
gsutil ls
fi
!ls -l ../../data/*.csv
CSV_COLUMNS = ['fare_amount', 'pickup_datetime',
'pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count', 'key']
# TODO 1: Specify the LABEL_COLUMN name you are predicting for below:
LABEL_COLUMN = ''
DEFAULTS = [[0.0],['na'],[0.0],[0.0],[0.0],[0.0],[0.0],['na']]
def features_and_labels(row_data):
for unwanted_col in ['pickup_datetime', 'key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
# load the training data
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = (
# TODO 1: Complete the four tf.data.experimental.make_csv_dataset options
# Choose from and correctly order: batch_size, CSV_COLUMNS, DEFAULTS, pattern
tf.data.experimental.make_csv_dataset() # <--- fill-in options
.map(features_and_labels) # features, label
)
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(1000).repeat()
dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE
return dataset
## Build a simple Keras DNN using its Functional API
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model():
# TODO 2: Specify the five input columns
INPUT_COLS = []
# input layer
inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
for colname in INPUT_COLS
}
feature_columns = {
colname : tf.feature_column.numeric_column(colname)
for colname in INPUT_COLS
}
# the constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires that you specify: LayerConstructor()(inputs)
dnn_inputs = tf.keras.layers.DenseFeatures(feature_columns.values())(inputs)
# two hidden layers of [32, 8] just in like the BQML DNN
# TODO 2: Create two hidden layers [32,8] with relu activation. Name them h1 and h2
# Tip: Start with h1 = tf.keras.layers.dense
h1 = # complete
h2 = # complete
# final output is a linear activation because this is regression
# TODO 2: Create an output layer with linear activation and name it 'fare'
output =
# TODO 2: Use tf.keras.models.Model and create your model with inputs and output
model =
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
print("Here is our DNN architecture so far:\n")
model = build_dnn_model()
print(model.summary())
# TODO 3: Use tf.keras.utils.plot_model() to create a dnn_model.png of your architecture
# Tip: For rank direction, choose Left Right (rankdir='LR')
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, so it will wrap around
NUM_EVALS = 5 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample, but not so much that it slows down
trainds = load_dataset('../../data/taxi-train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('../../data/taxi-valid*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
# TODO 4: Pass in the correct parameters to train your model
history = model.fit(
)
# plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(['loss', 'rmse']):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
model.predict({
'pickup_longitude': tf.convert_to_tensor([-73.982683]),
'pickup_latitude': tf.convert_to_tensor([40.742104]),
'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
'dropoff_latitude': tf.convert_to_tensor([40.755174]),
'passenger_count': tf.convert_to_tensor([3.0]),
}, steps=1)
import shutil, os, datetime
OUTPUT_DIR = './export/savedmodel'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR, datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
tf.saved_model.save(model, EXPORT_PATH) # with default serving function
!saved_model_cli show --tag_set serve --signature_def serving_default --dir {EXPORT_PATH}
!find {EXPORT_PATH}
os.environ['EXPORT_PATH'] = EXPORT_PATH
%%bash
PROJECT=${PROJECT}
BUCKET=${BUCKET}
REGION=${REGION}
MODEL_NAME=taxifare
VERSION_NAME=dnn
if [[ $(gcloud ai-platform models list --format='value(name)' | grep $MODEL_NAME) ]]; then
echo "The model named $MODEL_NAME already exists."
else
# create model
echo "Creating $MODEL_NAME model now."
gcloud ai-platform models create --regions=$REGION $MODEL_NAME
fi
if [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' | grep $VERSION_NAME) ]]; then
echo "Deleting already the existing model $MODEL_NAME:$VERSION_NAME ... "
gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME
echo "Please run this cell again if you don't see a Creating message ... "
sleep 2
fi
# create model
echo "Creating $MODEL_NAME:$VERSION_NAME"
# TODO 5: Create the model version using gcloud ai-platform versions create
# Refer to: https://cloud.google.com/sdk/gcloud/reference/ai-platform/versions/create
gcloud ai-platform versions create # complete the missing parameters
%%writefile input.json
{"pickup_longitude": -73.982683, "pickup_latitude": 40.742104,"dropoff_longitude": -73.983766,"dropoff_latitude": 40.755174,"passenger_count": 3.0}
!gcloud ai-platform predict --model taxifare --json-instances input.json --version dnn
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Locating the CSV files
Step2: Use tf.data to read the CSV files
Step3: Next, let's define our features we want to use and our label(s) and then load in the dataset for training.
Step4: Build a DNN with Keras
Step5: Visualize the DNN
Step6: Train the model
Step7: Visualize the model loss curve
Step8: Predict with the model locally
Step9: Of course, this is not realistic, because we can't expect client code to have a model object in memory. We'll have to export our model to a file, and expect client code to instantiate the model from that exported file.
Step10: Deploy the model to AI Platform
Step11: Monitor the model creation at GCP Console > AI Platform and once the model version dnn is created, proceed to the next cell.
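Alternatively, the version status can be checked from the command line (a convenience sketch using the standard gcloud CLI):
!gcloud ai-platform versions list --model taxifare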
|
6,548
|
<ASSISTANT_TASK:>
Python Code:
import vcsn
vcsn.B.trie('''foo
bar
baz''')
%%file words
hello
world
hell
word
vcsn.B.trie(filename='words')
vcsn.Q.trie('''
one
<2>two
<3>three
<13>thirteen
<30>thirty
<51>thirsty''')
vcsn.context('lat<law_char, law_char>, q').trie('''
<1>one|un
<2>two|deux
<3>three|trois
<4>four|quatre
<14>forteen|quatorze
<40>forty|quarante''')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Weighted words (finite series)
Step2: Tuples of words
|
6,549
|
<ASSISTANT_TASK:>
Python Code:
# Load regex package
import re
# Create a variable containing a text string
text = '3829 South Ave Street, Pheonix, AZ 34923'
# Find any ISBN-10 or ISBN-13 number
re.findall(r'[0-9]{5}(?:-[0-9]{4})?', text)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create some text
Step2: Apply regex
|
6,550
|
<ASSISTANT_TASK:>
Python Code:
from collections import namedtuple
import copy
import json
import os
import pathlib
import shutil
import subprocess
import tempfile
import ipyleaflet as ipyl
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import rasterio
from shapely.geometry import shape, mapping
%matplotlib inline
train_scene_id = '20160831_180231_0e0e'
# define and, if necessary, create train data directory
train_dir = os.path.join('data', 'train')
pathlib.Path(train_dir).mkdir(parents=True, exist_ok=True)
# define train scene
train_scene = os.path.join(train_dir, train_scene_id + '_3B_AnalyticMS.tif')
train_scene_metadata = os.path.join(train_dir,
train_scene_id + '_3B_AnalyticMS_metadata.xml')
# First test if scene file exists, if not, use the Planet commandline tool to download the image, metadata, and udm.
# This command assumes a bash shell, available in Unix-based operating systems.
!test -f $train_scene || \
planet data download \
--item-type PSOrthoTile \
--dest $train_dir \
--asset-type analytic,analytic_xml \
--string-in id $train_scene_id
test_scene_id = '20160831_180257_0e26'
# define and, if necessary, create test data directory
test_dir = os.path.join('data', 'test')
pathlib.Path(test_dir).mkdir(parents=True, exist_ok=True)
# define test scene
test_scene = os.path.join(test_dir, test_scene_id + '_3B_AnalyticMS.tif')
test_scene_metadata = os.path.join(test_dir,
test_scene_id + '_3B_AnalyticMS_metadata.xml')
# First test if scene file exists, if not, use the Planet commandline tool to download the image, metadata, and udm.
# This command assumes a bash shell, available in Unix-based operating systems.
!test -f $test_scene || \
planet data download \
--item-type PSOrthoTile \
--dest $test_dir \
--asset-type analytic,analytic_xml \
--string-in id $test_scene_id
predata_dir = 'pre-data'
test_aoi_filename = os.path.join(predata_dir, 'aoi-test.geojson')
assert os.path.isfile(test_aoi_filename)
train_aoi_filename = os.path.join(predata_dir, 'aoi-train.geojson')
assert os.path.isfile(train_aoi_filename)
ground_truth_filename = os.path.join(predata_dir, 'ground-truth.geojson')
assert os.path.isfile(ground_truth_filename)
# Utility functions: cropping polygons
# Uses shapely for geospatial operations
def crop_polygons_to_aoi(polygons, aoi):
    '''Crops polygons to the aoi.

    Polygons within aoi are copied. For Polygons that intersect the aoi boundary, the
    intersection geometry is saved. If the intersection is a MultiPolygon, it is
    stored as multiple Polygons.

    :param dict aoi: geojson polygon describing the crop feature
    :param list polygons: geojson polygons to be cropped
    '''
aoi_shp = shape(aoi['geometry'])
cropped_features = []
for f in polygons:
shp = shape(f['geometry'])
assert shp.type == 'Polygon'
if shp.within(aoi_shp):
cropped_features.append(copy.deepcopy(f))
elif shp.intersects(aoi_shp):
# 'cut' features at the aoi boundary by the aoi
cropped_shp = shp.intersection(aoi_shp)
try:
# try to iterate, which only works for MultiPolygon
for s in cropped_shp:
new_f = copy.deepcopy(f)
new_f['geometry'] = mapping(s)
cropped_features.append(new_f)
except TypeError:
# Polygon is not iterable
new_f = copy.deepcopy(f)
new_f['geometry'] = mapping(cropped_shp)
cropped_features.append(new_f)
return cropped_features
# Utility functions: loading and saving geojson
def save_geojson(features, filename):
with open(filename, 'w') as f:
f.write(json.dumps(features))
def load_geojson(filename):
with open(filename, 'r') as f:
return json.load(f)
ground_truth_data = load_geojson(ground_truth_filename)
train_aoi = load_geojson(train_aoi_filename)
train_ground_truth_data = crop_polygons_to_aoi(ground_truth_data, train_aoi)
print(len(train_ground_truth_data))
train_ground_truth_filename = os.path.join(predata_dir, 'ground-truth-train.geojson')
save_geojson(train_ground_truth_data, train_ground_truth_filename)
test_aoi = load_geojson(test_aoi_filename)
test_ground_truth_data = crop_polygons_to_aoi(ground_truth_data, test_aoi)
print(len(test_ground_truth_data))
test_ground_truth_filename = os.path.join(predata_dir, 'ground-truth-test.geojson')
save_geojson(test_ground_truth_data, test_ground_truth_filename)
# Utility functions: crop and project an image
def _gdalwarp_crop_options(crop_filename):
return ['-cutline', crop_filename, '-crop_to_cutline']
def _gdalwarp_project_options(src_proj, dst_proj):
return ['-s_srs', src_proj, '-t_srs', dst_proj]
def _gdalwarp(input_filename, output_filename, options):
commands = ['gdalwarp'] + options + \
['-overwrite',
input_filename,
output_filename]
print(' '.join(commands))
subprocess.check_call(commands)
# lossless compression of an image
def _compress(input_filename, output_filename):
commands = ['gdal_translate',
'-co', 'compress=LZW',
'-co', 'predictor=2',
input_filename,
output_filename]
print(' '.join(commands))
subprocess.check_call(commands)
# uses Rasterio to get image srs if dst_srs is specified
def warp(input_filename,
output_filename,
crop_filename=None,
dst_srs=None,
overwrite=True,
compress=False):
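    # thin wrapper around gdalwarp: optionally crops to a cutline and/or
    # reprojects; with compress=True it warps into a temporary VRT first and
    # then writes an LZW-compressed GeoTIFF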
options = []
if crop_filename is not None:
options += _gdalwarp_crop_options(crop_filename)
if dst_srs is not None:
src_srs = rasterio.open(input_filename).crs['init']
options += _gdalwarp_project_options(src_srs, dst_srs)
# check to see if output file exists, if it does, do not warp
if os.path.isfile(output_filename) and not overwrite:
print('{} already exists. Aborting warp of {}.'.format(output_filename, input_filename))
elif compress:
with tempfile.NamedTemporaryFile(suffix='.vrt') as vrt_file:
options += ['-of', 'vrt']
_gdalwarp(input_filename, vrt_file.name, options)
_compress(vrt_file.name, output_filename)
else:
_gdalwarp(input_filename, output_filename, options)
train_scene_cropped = os.path.join(predata_dir, 'train_scene_cropped.tif')
warp(train_scene, train_scene_cropped, crop_filename=train_aoi_filename, overwrite=False, compress=True)
train_scene_cropped_metadata = os.path.join(predata_dir, 'train_scene_cropped_metadata.xml')
shutil.copyfile(train_scene_metadata, train_scene_cropped_metadata)
# Utility functions: loading an image
NamedBands = namedtuple('NamedBands', 'b, g, r, nir')
def load_masked_bands(filename):
    """Loads a 4-band BGRNir Planet Image file as a list of masked bands.

    The masked bands share the same mask, so editing one band mask will
    edit them all.
    """
with rasterio.open(filename) as src:
b, g, r, nir = src.read()
mask = src.read_masks(1) == 0 # 0 value means the pixel is masked
bands = NamedBands(b=b, g=g, r=r, nir=nir)
return NamedBands(*[np.ma.array(b, mask=mask)
for b in bands])
print(load_masked_bands(train_scene_cropped).b.shape)
# Utility functions: displaying an image
def _linear_scale(ndarray, old_min, old_max, new_min, new_max):
    """Linear scale from old_min to new_min, old_max to new_max.

    Values outside min/max are allowed in input and output.
    Min/max values are two data points that are used in the linear scaling.
    """
#https://en.wikipedia.org/wiki/Normalization_(image_processing)
return (ndarray - old_min)*(new_max - new_min)/(old_max - old_min) + new_min
# print(_linear_scale(np.array([1,2,10,100,256,2560, 2660]), 2, 2560, 0, 256))
def _mask_to_alpha(bands):
band = np.atleast_3d(bands)[...,0]
alpha = np.zeros_like(band)
alpha[~band.mask] = 1
return alpha
def _add_alpha_mask(bands):
return np.dstack([bands, _mask_to_alpha(bands)])
def bands_to_display(bands, alpha=True):
    """Converts a list of bands to a 3-band rgb, normalized array for display."""
rgb_bands = np.dstack(bands[:3])
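    # contrast stretch: map the 2nd-98th percentile range onto [0, 1]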
old_min = np.percentile(rgb_bands, 2)
old_max = np.percentile(rgb_bands, 98)
new_min = 0
new_max = 1
scaled = _linear_scale(rgb_bands.astype(np.double),
old_min, old_max, new_min, new_max)
bands = np.clip(scaled, new_min, new_max)
if alpha is True:
bands = _add_alpha_mask(bands)
return bands
plt.figure()
bands = load_masked_bands(train_scene_cropped)
plt.imshow(bands_to_display([bands.r, bands.g, bands.b]))
# Utility functions: creating an image layer for display on a map
def _save_display_image(src_filename, dst_filename):
# convert to rgb and scale to 8-bit
bands = load_masked_bands(src_filename)
img = bands_to_display([bands.r, bands.g, bands.b])
# save as jpeg
if(os.path.isfile(dst_filename)): os.remove(dst_filename)
matplotlib.image.imsave(dst_filename, img)
def create_image_layer(filename):
with tempfile.NamedTemporaryFile(suffix='.tif') as temp_file:
projected_filename = temp_file.name
# project to wgs84
dst_srs = 'epsg:4326' #WGS84
warp(filename, projected_filename, dst_srs=dst_srs)
# save as jpeg
display_image = os.path.join('data', 'display.jpg')
_save_display_image(projected_filename, display_image)
# determine image layer bounds
(minx, miny, maxx, maxy) = rasterio.open(projected_filename).bounds
sw = [miny, minx]
ne = [maxy, maxx]
# Create image layer
return ipyl.ImageOverlay(url=display_image, bounds=[sw, ne])
def create_feature_layer(features):
# Assign colors to classes
# Class descriptions can be found in datasets-identify notebook
agg_classes = ['G', 'R', 'F', 'P', 'T', 'D', 'C', 'V']
# colors determined using [colorbrewer2.org](http://colorbrewer2.org/#type=sequential&scheme=BuGn&n=3)
colors = ['#ffffd9','#edf8b1','#c7e9b4','#7fcdbb','#41b6c4','#1d91c0','#225ea8','#0c2c84']
class_colors = dict((a,c) for a,c in zip(agg_classes, colors))
def get_color(cls):
return class_colors[cls]
feature_collection = {
"type": "FeatureCollection",
"features": features
}
for f in feature_collection['features']:
feature_color = get_color(f['properties']['CLASS1'])
f['properties']['style'] = {
'color': feature_color,
'weight': 1,
'fillColor': feature_color,
'fillOpacity': 0.1}
return ipyl.GeoJSON(data=feature_collection)
zoom = 13
center = [38.30933576918588, -121.55410766601564] # lat/lon
map_tiles = ipyl.TileLayer(url='http://{s}.basemaps.cartocdn.com/light_all/{z}/{x}/{y}.png')
data_map = ipyl.Map(
center=center,
zoom=zoom,
default_tiles = map_tiles
)
data_map.add_layer(create_image_layer(train_scene_cropped))
data_map.add_layer(create_feature_layer(train_ground_truth_data))
# display
data_map
test_scene_cropped = os.path.join(predata_dir, 'test_scene_cropped.tif')
warp(test_scene, test_scene_cropped, crop_filename=test_aoi_filename, overwrite=False, compress=True)
test_scene_cropped_metadata = os.path.join(predata_dir, 'test_scene_cropped_metadata.xml')
shutil.copyfile(test_scene_metadata, test_scene_cropped_metadata)
plt.figure()
bands = load_masked_bands(test_scene_cropped)
plt.imshow(bands_to_display([bands.r, bands.g, bands.b]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Datasets
Step2: Test Scene
Step3: AOI and Ground Truth
Step5: Crop Ground Truth Data to AOI
Step6: Train Ground Truth Data
Step7: Test Ground Truth Data
Step8: Crop Train Image to AOI
Step9: Copy over the image metadata
Step13: Visualize Cropped Image
Step14: <a id='visualize'></a>
Step15: Define layer for ground truth data
Step16: Awesome! The data looks nicely registered to the imagery and the crop outlines don't appear to have changed much over the years.
Step17: Visualize cropped image
|
6,551
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import os
from pypvcell.solarcell import SQCell,MJCell,TransparentCell
from pypvcell.illumination import Illumination
from pypvcell.spectrum import Spectrum
from pypvcell.metpv_reader import NEDOLocation
from pvlib.location import Location
from pvlib.tracking import SingleAxisTracker
from pvlib.irradiance import total_irrad,aoi_projection
nedo_solar_file='hm51106year.csv'
ngo_loc=NEDOLocation(nedo_solar_file)
df=ngo_loc.main_df
ngo_loc.main_df.head()
%%time
ngo_df=ngo_loc.extract_unstack_hour_data(norm=False)
ngo_df.head()
ngo_df.to_csv("ngo_df.csv")
ngo_df[['GHI','DHI','dHI']].sum()
ngo_dni=ngo_loc.get_DNI()
ngo_dni.head()
plt.plot(ngo_dni)
plt.ylim([0,1000])
ngo_tilt_irr=ngo_loc.tilt_irr(include_solar_pos=True)
ngo_tilt_irr.head()
ngo_tilt_irr.columns
plt.plot(ngo_tilt_irr['poa_direct'],alpha=0.5,label='incidence on tilt surface')
plt.plot(ngo_dni,alpha=0.5,label='DNI')
plt.ylim([0,1000])
plt.legend()
from matplotlib.colors import LogNorm
filtered_df=ngo_tilt_irr.loc[(ngo_tilt_irr['poa_direct']>1) & (ngo_tilt_irr['poa_direct']<500),
["azimuth","zenith",'poa_direct']]
ax = plt.subplot(111, projection='polar')
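# theta: azimuth in radians, rotated by 90 degrees; r: zenith angle minus the
# site latitude (roughly centring the annual solar path on the plot)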
ax.plot(filtered_df['azimuth'].values*np.pi/180-np.pi/2,
filtered_df['zenith'].values-ngo_loc.latitude,'.')
plt.show()
import matplotlib as mpl
filtered_df=ngo_tilt_irr.loc[(ngo_tilt_irr['poa_direct']>1) & (ngo_tilt_irr['poa_direct']<500),
["azimuth","zenith",'poa_direct']]
ax = plt.subplot(111, projection='polar')
colormap = plt.get_cmap('hsv')
norm = mpl.colors.Normalize(1, 400)
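# colour each point by its direct irradiance, normalised over the 1-400 range;
# note that `colormap` is defined but never passed to scatter, so the default
# colormap is what actually appears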
cax=ax.scatter(filtered_df['azimuth'].values*np.pi/180-np.pi/2, filtered_df['zenith'].values-ngo_loc.latitude,
c=filtered_df['poa_direct'].values,s=200,norm=norm,alpha=0.5)
plt.colorbar(cax)
plt.savefig("nagoya_angular.png",dpi=600)
plt.show()
ngo_tilt_irr.columns
plt.hist(ngo_tilt_irr['aoi'],weights=ngo_tilt_irr['poa_direct'],bins=100)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the data into a NEDOLocation object
Step2: main_df adds the column names to the raw data and converts it to a pandas.DataFrame object
Step3: Calculate the overall insolation
Step4: Extract DNI
Step5: Calculate DNI on a tilted surface
Step6: Visualize the sun irradiance in an angular plot
Step7: Analyze angle of incidence
|
6,552
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
y = np.array([1,2,3])
x = np.array([2,3,4])
y + x
y-x
y/x
np.dot(y,x)
x * y
a = np.array([[1,2,3],[4,5,6]])
b = np.array([[1,2,3]])
a + 1
a = np.array([[1,2],[3,4]])
b = np.array([[3,4],[5,6]])
a + b
b-a
a*b
def get_derivative(func, x):
    """Compute the derivative of `func` at the location `x`."""
h = 0.0001 # step size
return (func(x+h) - func(x)) / h # rise-over-run
def f(x): return x**2 # some test function f(x)=x^2
x = 3 # the location of interest
computed = get_derivative(f, x)
actual = 2*x
computed, actual # = 6.0001, 6 # pretty close if you ask me...
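# the forward difference carries O(h) truncation error -- for f(x)=x**2 the
# computed slope is exactly 2x + h, which explains the 6.0001 vs 6 gap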
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1.1.1 Elementwise Operations
Step2: 1.1.2 Dot products
Step3: 1.1.3 Hadamard product
Step4: 2. Matrices
Step5: 2.1 Scalar Operations
Step6: 2.2 Elementwise operations
Step7: 2.3 Hadamard product
Step9: 2.4 Matrix transpose
|
6,553
|
<ASSISTANT_TASK:>
Python Code:
# DON'T FORGET TO RUN THIS CELL
import math
import numpy as np
import pandas as pd
import seaborn as sns
import datascience as ds
import matplotlib.pyplot as plt
sns.set_style('darkgrid')
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
timit = pd.read_csv('data/timitvowels.csv')
timit.head()
IPAdict = {"AO" : "ษ", "AA" : "ษ", "IY" : "i", "UW" : "u", "EH" : "ษ", "IH" : "ษช", "UH":"ส", "AH": "ส", "AX" : "ษ", "AE":"รฆ", "EY" :"eษช", "AY": "aษช", "OW":"oส", "AW":"aส", "OY" :"ษษช", "ER":"ษ"}
timit['vowel'] = [IPAdict[x] for x in timit['vowel']]
timit.head()
timit_avg = timit.groupby(['speaker', 'vowel', 'gender', 'region']).mean().reset_index()
timit_avg.head()
timit_avg.gender.unique()
timit_female = timit_avg[timit_avg['gender'] == 'female']
timit_male = timit_avg[timit_avg['gender'] == 'male']
sns.distplot(timit_female['F1'], kde_kws={"label": "female"})
sns.distplot(timit_male['F1'], kde_kws={"label": "male"})
plt.title('F1')
plt.xlabel("Hz")
plt.ylabel('Proportion per Hz');
sns.distplot(timit_female['F2'], kde_kws={"label": "female"})
sns.distplot(timit_male['F2'], kde_kws={"label": "male"})
plt.title('F2')
plt.xlabel("Hz")
plt.ylabel('Proportion per Hz');
sns.distplot(timit_female['F3'], kde_kws={"label": "female"})
sns.distplot(timit_male['F3'], kde_kws={"label": "male"})
plt.title('F3')
plt.xlabel("Hz")
plt.ylabel('Proportion per Hz');
# reading in the data
class_data = pd.read_csv('data/110_formants.csv')
class_data.head()
# translating the vowel column
class_data['vowel'] = [IPAdict[x] for x in class_data['vowel']]
class_data.head()
class_data['Gender'].unique()
class_female = class_data[class_data['Gender'] == 'Female']
class_male = class_data[class_data['Gender'] == 'Male']
sns.distplot(class_female['F1'], kde_kws={"label": "female"})
sns.distplot(class_male['F1'], kde_kws={"label": "male"})
plt.title('F1')
plt.xlabel("Hz")
plt.ylabel('Proportion per Hz');
sns.distplot(class_female['F2'], kde_kws={"label": "female"})
sns.distplot(class_male['F2'], kde_kws={"label": "male"})
plt.title('F2')
plt.xlabel("Hz")
plt.ylabel('Proportion per Hz');
sns.distplot(class_female['F3'], kde_kws={"label": "female"})
sns.distplot(class_male['F3'], kde_kws={"label": "male"})
plt.title('F3')
plt.xlabel("Hz")
plt.ylabel('Proportion per Hz');
def plot_blank_vowel_chart():
im = plt.imread('images/blankvowel.png')
plt.imshow(im, extent=(plt.xlim()[0], plt.xlim()[1], plt.ylim()[0], plt.ylim()[1]))
def plot_vowel_space(avgs_df):
plt.figure(figsize=(10, 8))
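    # invert both axes to follow the phonetic vowel-chart convention:
    # high vowels (low F1) at the top, front vowels (high F2) on the left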
plt.gca().invert_yaxis()
plt.gca().invert_xaxis()
vowels = ['eษช', 'i', 'oส', 'u', 'รฆ', 'ษ', 'ษ', 'ษ', 'ษช', 'ส', 'ส'] + ['ษ']
for i in range(len(avgs_df)):
plt.scatter(avgs_df.loc[vowels[i]]['F2'], avgs_df.loc[vowels[i]]['F1'], marker=r"$ {} $".format(vowels[i]), s=1000)
plt.ylabel('F1')
plt.xlabel('F2')
class_vowel_avgs = class_data.drop('ID', axis=1).groupby('vowel').mean()
class_vowel_avgs.head()
timit_vowel_avgs = timit.groupby('vowel').mean()
timit_vowel_avgs.head()
plot_vowel_space(class_vowel_avgs)
plt.xlabel('F2 (Hz)')
plt.ylabel('F1 (Hz)');
log_timit_vowels = timit_vowel_avgs.apply(np.log)
log_class_vowels = class_vowel_avgs.apply(np.log)
class_data['log(F1)'] = np.log(class_data['F1'])
class_data['log(F2)'] = np.log(class_data['F2'])
log_class_vowels.head()
plot_vowel_space(log_class_vowels)
plt.xlabel('log(F2) (Hz)')
plt.ylabel('log(F1) (Hz)');
plot_vowel_space(log_class_vowels)
plot_blank_vowel_chart()
plt.xlabel('log(F2) (Hz)')
plt.ylabel('log(F1) (Hz)');
plot_vowel_space(log_timit_vowels)
plot_blank_vowel_chart()
plt.xlabel('log(F2) (Hz)')
plt.ylabel('log(F1) (Hz)');
sns.lmplot('log(F2)', 'log(F1)', hue='vowel', data=class_data, fit_reg=False, size=8, scatter_kws={'s':30})
plt.xlim(8.2, 6.7)
plt.ylim(7.0, 5.7);
plt.figure(figsize=(10, 12))
pick_vowel = lambda v: class_data[class_data['vowel'] == v]
colors = ['Greys_r', 'Purples_r', 'Blues_r', 'Greens_r', 'Oranges_r', \
'Reds_r', 'GnBu_r', 'PuRd_r', 'winter_r', 'YlOrBr_r', 'pink_r', 'copper_r']
for vowel, color in list(zip(class_data.vowel.unique(), colors)):
vowel_subset = pick_vowel(vowel)
sns.kdeplot(vowel_subset['log(F2)'], vowel_subset['log(F1)'], n_levels=1, cmap=color, shade=False, shade_lowest=False)
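# scatter each individual token labelled with its IPA symbol
# (assumes the dataframe index runs from 1 to len(class_data))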
for i in range(1, len(class_data)+1):
plt.scatter(class_data['log(F2)'][i], class_data['log(F1)'][i], color='black', linewidths=.5, marker=r"$ {} $".format(class_data['vowel'][i]), s=40)
plt.xlim(8.2, 6.7)
plt.ylim(7.0, 5.7);
genders = class_data['Gender']
plotting_data = class_data.drop('vowel', axis=1)[np.logical_or(genders == 'Male', genders == 'Female')]
maxes = plotting_data.groupby(['ID', 'Gender']).max().reset_index()[plotting_data.columns[:-2]]
maxes.columns = ['ID', 'Language', 'Gender', 'Height', 'Max F1', 'Max F2', 'Max F3']
maxes_female = maxes[maxes['Gender'] == 'Female']
maxes_male = maxes[maxes['Gender'] == 'Male']
maxes.head()
sns.regplot('Height', 'Max F1', data=maxes)
sns.regplot('Height', 'Max F1', data=maxes_male, fit_reg=False)
sns.regplot('Height', 'Max F1', data=maxes_female, fit_reg=False)
plt.xlabel('Height (cm)')
plt.ylabel('Max F1 (Hz)')
print('female: green')
print('male: orange')
sns.regplot('Height', 'Max F2', data=maxes)
sns.regplot('Height', 'Max F2', data=maxes_male, fit_reg=False)
sns.regplot('Height', 'Max F2', data=maxes_female, fit_reg=False)
plt.xlabel('Height (cm)')
plt.ylabel('Max F2 (Hz)')
print('female: green')
print('male: orange')
sns.regplot('Height', 'Max F3', data=maxes)
sns.regplot('Height', 'Max F3', data=maxes_male, fit_reg=False)
sns.regplot('Height', 'Max F3', data=maxes_female, fit_reg=False)
plt.xlabel('Height (cm)')
plt.ylabel('Max F3 (Hz)')
print('female: green')
print('male: orange')
sns.lmplot('Height', 'Max F1', data=maxes, hue='Gender')
plt.xlabel('Height (cm)')
plt.ylabel('Max F1 (Hz)');
timit_maxes = timit.groupby(['speaker', 'gender']).max().reset_index()
timit_maxes.columns = ['speaker', 'gender', 'region', 'height', 'word', 'vowel', 'Max duration', 'Max F1', 'Max F2', 'Max F3', 'Max f0']
plt.xlim(140, 210)
plt.ylim(500, 1400)
sns.regplot('height', 'Max F1', data=timit_maxes[timit_maxes['gender'] == 'female'], scatter_kws={'alpha':0.3})
sns.regplot('height', 'Max F1', data=timit_maxes[timit_maxes['gender'] == 'male'], scatter_kws={'alpha':0.3})
sns.regplot('height', 'Max F1', data=timit_maxes, scatter=False)
plt.xlabel('Height (cm)')
plt.ylabel('Max F1 (Hz)');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exploring TIMIT Data <a id='timit'></a>
Step2: Look at the dataframe you created and try to figure out what each column measures. Each column represents a different attribute; see the following table for more information.
Step3: Most of the speakers will say the same vowel multiple times, so we are going to average those values together. The end result will be a dataframe where each row represents the average values for each vowel for each speaker.
Step4: Splitting on Gender
Step5: You can see that for this specific dataset there are only "female" and "male" values in the column. Given that information, we'll create two subsets based on gender.
Step6: Distribution of Formants
Step7: Does there seem to be a notable difference between male and female distributions of F1?
Step8: Finally, we create the same visualization, but for F3.
Step9: Do you see a more pronounced difference across the different F values? Are they the same throughout? Can we make any meaningful assumptions from these visualizations?
Step10: The ID column contains a unique value for each individual. Each individual has a row for each of the different vowels they measured.
Step11: Splitting on Gender
Step12: Notice that there are three possible values for the column. We do not have a large enough sample size to responsibly come to conclusions for Prefer not to answer, so for now we'll compare Male and Female. We'll call our new split dataframes class_female and class_male.
Step13: Comparing Distributions
Step14: Next is F2.
Step15: And finally F3.
Step16: Does the spread of values appear to be the same for females and males? Do the same patterns that occur in the TIMIT data appear in the class's data?
Step17: We are going to be recreating the following graphic from this website.
Step18: Each of these new tables has a row for each vowel, comprising the averaged values across all speakers.
Step19: Using Logarithmic Axes
Step20: Below we plot the vowel space using these new values.
Step21: What effect does using the logged values have, if any? What advantages does using these values have? Are there any negatives? This paper might give some ideas.
Step22: How well does it match the original?
Step23: How does the TIMIT vowel space compare to the vowel space from our class data? What may be the cause for any differences between our vowel space and the one constructed using the TIMIT data? Do you notice any outliers or any points that seem off?
Step24: In the following visualization, we replace the colors with the IPA characters and attempt to clump the vowels together.
Step25: Formants vs Height <a id='fvh'></a>
Step26: First we will plot Max F1 against Height.
Step27: Is there a general trend for the data that you notice? What do you notice about the different color dots?
Step28: Finally, Max F3 vs Height.
Step29: Do you notice a difference between the trends for the three formants?
Step30: Is there a noticeable difference between the two? Did you expect this result?
|
6,554
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import os
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import nba_py
sns.set_context('poster')
import plotly.offline as py
import plotly.graph_objs as go
py.init_notebook_mode(connected=True)
data_path = os.path.join(os.getcwd(), os.pardir, 'data', 'interim', 'sleep_data.csv')
df_sleep = pd.read_csv(data_path, index_col='shifted_datetime', parse_dates=True)
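# shift timestamps forward 12 hours so each night of sleep lands inside a
# single 6pm-to-6pm "day" when resampled below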
df_sleep.index += pd.Timedelta(hours=12)
sleep_day = df_sleep.resample('1D').sum().fillna(0)
from nba_py import league
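# player_or_team='T' requests team-level (rather than player-level) game logs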
gswlog = league.GameLog(player_or_team='T')
league_logs = gswlog.json['resultSets'][0]['rowSet']
columns = gswlog.json['resultSets'][0]['headers']
df_league = pd.DataFrame(league_logs, columns=columns)
df_league.columns
gsw_games = df_league[df_league['TEAM_ABBREVIATION'] == 'GSW']
len(gsw_games)
gsw_games.head()
gsw_dates = gsw_games['GAME_DATE']
toplot = sleep_day['minutesAsleep']/60.
data = []
data.append(
go.Scatter(
x=toplot.index,
y=toplot.values,
name='Hours Asleep'
)
)
shapes = []
for idate, gsw_date in enumerate(gsw_dates):
if idate == 0:
showlegend = True
else:
showlegend = False
trace0 = go.Scatter(
x=[gsw_date],
y=[toplot.dropna().min()],
mode='markers',
name='Golden State Warriors Game',
marker=dict(
color='salmon'
),
showlegend=showlegend
)
data.append(trace0)
layout = go.Layout(
title="Daily Sleep Total, 6pm to 6pm",
yaxis=dict(
title='Hours Asleep'
),
)
fig = {
'data': data,
'layout': layout,
}
py.iplot(fig, filename='DailySleepTotal_GSWGames')
gsw_dates = pd.to_datetime(gsw_dates)
gswdatedf = pd.DataFrame(index=gsw_dates)
gswdatedf['game_status'] = 1
gswdatedf = gswdatedf.resample('1D').sum().fillna(0)
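# shift game dates forward one day so each game is paired with the night of
# sleep that follows it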
gswdatedf_next = gswdatedf.copy()
gswdatedf_next.index += pd.Timedelta(hours=24)
sleepgsw = sleep_day.join(gswdatedf_next, how='inner')
sleepgswyes = sleepgsw.groupby('game_status').mean()
sleepgswyes['minutesAsleep'] / 60
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The NBA season starts at the end of October. I got my Fitbit near the beginning of November, so there is a lot of overlap.
|
6,555
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-3', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
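# Each cell below documents one CMIP6 controlled-vocabulary property:
# DOC.set_id(...) selects the property, DOC.set_value(...) records its value.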
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
6,556
|
<ASSISTANT_TASK:>
Python Code:
hymns = {}
for line in open('dimeter-mp.csv', 'r'):
comps = line.strip().split(',')
line, pada, meter = comps[0:3]
mps = comps[3:]
hymn_id = '-'.join(line.split('.')[:2])
if hymn_id not in hymns:
hymns[hymn_id] = []
hymns[hymn_id].append(mps)
print(len(hymns))
from seaborn.distributions import distplot
lens = [len(hymns[h]) for h in hymns]
distplot(lens)
ids, texts = [], []
for h in hymns:
# add book label:
ids.append(h)
# stringify mps:
t = ''
for line in hymns[h]:
t += (''.join(line[:4])+'%'+''.join(line[4:])+'$')
texts.append(t)
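
# A quick hedged peek at the string encoding built above ('%' separates the
# first four metrical positions from the rest; '$' ends each line):
print(texts[0][:40])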
from pystyl.corpus import Corpus
corpus = Corpus(language='other')
corpus.add_texts(texts=texts,
titles=ids, #['b'+str(i) for i in range(len(texts))],
target_names=ids)
print(corpus)
mfi = corpus.vectorize(mfi=1000000,
ngram_type='char',
ngram_size=4,
vector_space='tf_std',
min_df=0.0)
print(len(mfi))
print(len(corpus))
from pystyl.analysis import pca
pca_coor, pca_loadings = pca(corpus, nb_dimensions=2)
from pystyl.visualization import scatterplot
scatterplot(corpus, coor=pca_coor,
nb_clusters=3, loadings=pca_loadings,
save=True, outputfile="/Users/mike/Desktop/pca.pdf")
from pystyl.analysis import distance_matrix
dm = distance_matrix(corpus, 'minmax')
from pystyl.analysis import hierarchical_clustering
cluster_tree = hierarchical_clustering(dm, linkage='ward')
from pystyl.visualization import scipy_dendrogram
scipy_dendrogram(corpus=corpus, tree=cluster_tree,
outputfile='~/Desktop/scipy_dendrogram.pdf',
fontsize=3, save=True, return_svg=False)
from sklearn.cluster import AgglomerativeClustering
clustering = AgglomerativeClustering(linkage='ward',
affinity='euclidean',
n_clusters=2)
clustering.fit(corpus.vectorizer.X.toarray())
for title, label in zip(corpus.titles, clustering.labels_):
if label == 1:
print(title)
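
# A minimal hedged sketch of the cluster sizes, using only the standard library:
from collections import Counter
print(Counter(clustering.labels_))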
# add slice names:
#for x, y, name, cluster_label in zip(x1, x2, labels, clustering.labels_):
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: How many hymns do we have in total?
Step2: How are their lengths distributed?
Step3: Most seem to have around 25 lines, each consisting of 8 MPs (metrical positions).
Step4: Let us turn this data into a corpus
|
6,557
|
<ASSISTANT_TASK:>
Python Code:
FIX = """
Add more test cases.
"""


def vowels_count(s):
    """Write a function vowels_count which takes a string representing a word
    and returns the number of vowels it contains. Vowels here are 'a', 'e',
    'i', 'o' and 'u'; 'y' also counts as a vowel, but only when it appears at
    the end of the word."""
    vowels = "aeiouAEIOU"
    n_vowels = sum(c in vowels for c in s)
    if s[-1] == 'y' or s[-1] == 'Y':
        n_vowels += 1
    return n_vowels
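
# Hedged sanity checks (illustrative inputs, not the exercise's official tests):
assert vowels_count("abcde") == 2
assert vowels_count("ACEDY") == 3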
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
6,558
|
<ASSISTANT_TASK:>
Python Code:
pd.read_csv("data/demographie/pop_age_sexe_2016.csv").head()
pd.read_csv("data/travail/activite_2015.csv")
pd.read_csv("data/travail/chomage.csv")
pd.read_csv("data/travail/retraite_2012.csv")
pd.read_csv("data/demographie/etudes.csv")
pd.read_csv("data/demographie/handicap_pop.csv")
reference_marital = dict()
for sexe in ['homme', 'femme']:
reference_marital[sexe] = pd.read_csv("data/menages/statut_marital_{0}.csv".format(sexe))
reference_marital['femme'].head()
reference_marital['homme'].head()
pd.read_csv('data/menages/enfants/type_famille.csv')
pd.read_csv('data/menages/enfants/nbr_enfant.csv')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tables on economically active status
Step2: Tables on employed status
Step3: Tables on retirement pensions
Step4: Tables on student status
Step5: Tables on disability
Step6: Tables on marital status
Step7: Tables on having a child
Step8: Tables on the number of children
|
6,559
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
alpha0 = 60.0 # fault dip at surface, degrees
z0 = 0.0 # elevation of surface trace
h = 10.0 # detachment depth, km
G0 = np.tan(np.deg2rad(60.0))
x = np.arange(0, 41.0)
z = z0 - h * (1.0 - np.exp(-x * G0 / h))
plt.plot(x, z, 'k')
plt.xlabel('Distance (km)')
plt.ylabel('Fault plane elevation (km)')
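
# Hedged sanity check: the fault-plane slope at the surface trace (x = 0)
# should reproduce the 60-degree dip set above, since dz/dx at x = 0 is -G0.
print(np.rad2deg(np.arctan(G0)))  # expect ~60.0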
dt = 100000.0 # time span, y
xf = 10000.0 # initial location of surface trace of fault, m
u = 0.01 # extension rate, m/y
h = 10000.0 # detachment depth, m
nprofiles = 5
x = np.arange(0.0, 40100.0, 100.0)
dist_from_fault = np.maximum(x - xf, 0.0)
z = z0 - h * (1.0 - np.exp(-dist_from_fault * G0 / h))
plt.plot(x, z, 'r', label='Fault plane')
for i in range(nprofiles):
t = i * dt
shifted_dist_from_fault = np.maximum(dist_from_fault - u * t, 0.0)
# Calculate the surface topography
eta = (h * (np.exp( -dist_from_fault * G0 / h)
- np.exp( -shifted_dist_from_fault * G0 / h)))
# Calculate thickness
#thickness = h * (1.0 - np.exp(-shifted_dist_from_fault * G0 / h))
# eta won't be less than the fault-plane elevation
eta[eta < z] = z[eta < z]
plt.plot(x, eta, 'k', label='Surface elevation ' + str(i))
#plt.plot(x, thickness, 'b', label='Thickness' + str(i))
plt.xlabel('Distance (km)')
plt.ylabel('Elevation (km)')
plt.legend()
import numpy as np
import matplotlib.pyplot as plt
from landlab import RasterModelGrid, imshow_grid
from landlab.components import ListricKinematicExtender
# parameters
nrows = 3
ncols = 51
dx = 1000.0 # grid spacing, m
nsteps = 20 # number of iterations
dt = 2.5e5 # time step, y
extension_rate = 0.001 # m/y
detachment_depth = 10000.0 # m
fault_dip = 60.0 # fault dip angle, degrees
fault_loc = 10000.0 # m from left side of model
# Create grid and elevation field
grid = RasterModelGrid((nrows, ncols), xy_spacing=dx)
elev = grid.add_zeros('topographic__elevation', at='node')
# Instantiate component
extender = ListricKinematicExtender(grid,
extension_rate=extension_rate,
fault_dip=fault_dip,
fault_location=fault_loc,
detachment_depth=detachment_depth,
)
# Plot the starting elevations, in cross-section (middle row)
midrow = np.arange(ncols, 2 * ncols, dtype=int)
plt.plot(grid.x_of_node[midrow] / 1000.0, elev[midrow], 'k')
plt.xlabel('Distance (km)')
plt.ylabel('Elevation (m)')
plt.xlim([10.0, 40.0])
# Add a plot of the fault plane
dist_from_fault = grid.x_of_node - fault_loc
dist_from_fault[dist_from_fault < 0.0] = 0.0
x0 = detachment_depth / np.tan(np.deg2rad(fault_dip))
fault_plane = -(detachment_depth
* (1.0 - np.exp(-dist_from_fault / x0)))
plt.plot(grid.x_of_node[midrow] / 1000.0, fault_plane[midrow], 'r')
for i in range(nsteps):
extender.run_one_step(dt)
plt.plot(grid.x_of_node[midrow] / 1000.0, elev[midrow], 'k')
# Add the analytic solution
total_time = nsteps * dt
G0 = np.tan(np.deg2rad(fault_dip))
shifted_dist_from_fault = np.maximum(dist_from_fault - extension_rate * total_time, 0.0)
elev_pred = (detachment_depth * (np.exp( -dist_from_fault * G0 / h)
- np.exp( -(shifted_dist_from_fault * G0 / h))))
elev_pred = np.maximum(elev_pred, fault_plane)
plt.plot(grid.x_of_node[midrow] / 1000.0, elev_pred[midrow], 'b:')
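
# A hedged numerical comparison between the component and the analytic
# solution along the cross-section (what counts as "close enough" is left
# to the reader):
print('max |numeric - analytic| =',
      np.max(np.abs(elev[midrow] - elev_pred[midrow])))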
period = 15000.0 # period of sinusoidal variations in initial topography, m
ampl = 500.0 # amplitude of variations, m
# Create grid and elevation field
grid = RasterModelGrid((nrows, ncols), xy_spacing=dx)
elev = grid.add_zeros('topographic__elevation', at='node')
elev[:] = ampl * np.sin(2 * np.pi * grid.x_of_node / period)
# Instantiate component
extender = ListricKinematicExtender(grid,
extension_rate=extension_rate,
fault_dip=fault_dip,
fault_location=fault_loc,
detachment_depth=detachment_depth,
)
# Plot the starting elevations, in cross-section (middle row)
midrow = np.arange(ncols, 2 * ncols, dtype=int)
plt.plot(grid.x_of_node[midrow] / 1000.0, elev[midrow], 'k')
plt.xlabel('Distance (km)')
plt.ylabel('Elevation (m)')
plt.grid(True)
# Add a plot of the fault plane
dist_from_fault = grid.x_of_node - fault_loc
dist_from_fault[dist_from_fault < 0.0] = 0.0
x0 = detachment_depth / np.tan(np.deg2rad(fault_dip))
fault_plane = -(detachment_depth
* (1.0 - np.exp(-dist_from_fault / x0)))
plt.plot(grid.x_of_node[midrow] / 1000.0, fault_plane[midrow], 'r')
for i in range(nsteps):
extender.run_one_step(dt)
c = 1.0 - i / nsteps
plt.plot(grid.x_of_node[midrow] / 1000.0, elev[midrow], color=[c, c, c])
from landlab import imshow_grid
# parameters
nrows = 31
ncols = 51
dx = 1000.0 # grid spacing, m
nsteps = 20 # number of iterations
dt = 2.5e5 # time step, y
extension_rate = 0.001 # m/y
detachment_depth = 10000.0 # m
fault_dip = 60.0 # fault dip angle, degrees
fault_loc = 10000.0 # m from left side of model
period = 15000.0 # period of sinusoidal variations in initial topography, m
ampl = 500.0 # amplitude of variations, m
# Create grid and elevation field
grid = RasterModelGrid((nrows, ncols), xy_spacing=dx)
elev = grid.add_zeros('topographic__elevation', at='node')
elev[:] = (ampl * np.sin(2 * np.pi * grid.x_of_node / period)
* np.sin(2 * np.pi * grid.y_of_node / period))
# Instantiate component
extender = ListricKinematicExtender(grid,
extension_rate=extension_rate,
fault_dip=fault_dip,
fault_location=fault_loc,
detachment_depth=detachment_depth,
)
# Plot the starting topography
imshow_grid(grid, elev)
for i in range(nsteps // 2):
extender.run_one_step(dt)
imshow_grid(grid, elev)
for i in range(nsteps // 2):
extender.run_one_step(dt)
imshow_grid(grid, elev)
imshow_grid(grid, extender._fault_normal_coord)
# Plot a cross-section
start_node = 6 * ncols
end_node = start_node + ncols
midrow = np.arange(start_node, end_node, dtype=int)
plt.plot(grid.x_of_node[midrow] / 1000.0, elev[midrow], 'k')
plt.xlabel('Distance (km)')
plt.ylabel('Elevation (m)')
plt.grid(True)
# Add a plot of the fault plane
dist_from_fault = grid.x_of_node - fault_loc
dist_from_fault[dist_from_fault < 0.0] = 0.0
x0 = detachment_depth / np.tan(np.deg2rad(fault_dip))
fault_plane = -(detachment_depth
* (1.0 - np.exp(-dist_from_fault / x0)))
plt.plot(grid.x_of_node[midrow] / 1000.0, fault_plane[midrow], 'r')
from landlab import HexModelGrid
# parameters
nrows = 31
ncols = 51
dx = 1000.0 # grid spacing, m
nsteps = 20 # number of iterations
dt = 2.5e5 # time step, y
extension_rate = 0.001 # m/y
detachment_depth = 10000.0 # m
fault_dip = 60.0 # fault dip angle, degrees
fault_loc = 10000.0 # m from left side of model
period = 15000.0 # period of sinusoidal variations in initial topography, m
ampl = 500.0 # amplitude of variations, m
# Create grid and elevation field
grid = HexModelGrid((nrows, ncols), spacing=dx, node_layout='rect')
elev = grid.add_zeros('topographic__elevation', at='node')
elev[:] = (ampl * np.sin(2 * np.pi * grid.x_of_node / period)
* np.sin(2 * np.pi * grid.y_of_node / period))
# Instantiate component
extender = ListricKinematicExtender(grid,
extension_rate=extension_rate,
fault_dip=fault_dip,
fault_location=fault_loc,
detachment_depth=detachment_depth,
)
# Plot the starting topography
imshow_grid(grid, elev)
for i in range(nsteps // 2):
extender.run_one_step(dt)
imshow_grid(grid, elev)
for i in range(nsteps // 2):
extender.run_one_step(dt)
imshow_grid(grid, elev)
# Plot a cross-section
start_node = 6 * ncols
end_node = start_node + ncols
midrow = np.arange(start_node, end_node, dtype=int)
plt.plot(grid.x_of_node[midrow] / 1000.0, elev[midrow], 'k')
plt.xlabel('Distance (km)')
plt.ylabel('Elevation (m)')
plt.grid(True)
# Add a plot of the fault plane
dist_from_fault = grid.x_of_node - fault_loc
dist_from_fault[dist_from_fault < 0.0] = 0.0
x0 = detachment_depth / np.tan(np.deg2rad(fault_dip))
fault_plane = -(detachment_depth
* (1.0 - np.exp(-dist_from_fault / x0)))
plt.plot(grid.x_of_node[midrow] / 1000.0, fault_plane[midrow], 'r')
from landlab.components import Flexure
# parameters
nrows = 31
ncols = 51
dx = 1000.0 # grid spacing, m
nsteps = 20 # number of iterations
dt = 2.5e5 # time step, y
extension_rate = 0.001 # m/y
detachment_depth = 10000.0 # m
fault_dip = 60.0 # fault dip angle, degrees
fault_loc = 10000.0 # m from left side of model
period = 15000.0 # period of sinusoidal variations in initial topography, m
ampl = 500.0 # amplitude of variations, m
# flexural parameters
eet = 5000.0 # effective elastic thickness, m (here very thin)
crust_datum = -10000.0 # elevation of crustal reference datum, m
rhoc = 2700.0 # crust density, kg/m3
g = 9.8 # guess what?
# Create grid and elevation field
grid = RasterModelGrid((nrows, ncols), xy_spacing=dx)
elev = grid.add_zeros('topographic__elevation', at='node')
elev[:] = (ampl * np.sin(2 * np.pi * grid.x_of_node / period)
* np.sin(2 * np.pi * grid.y_of_node / period))
thickness = grid.add_zeros('upper_crust_thickness', at='node')
load = grid.add_zeros('lithosphere__overlying_pressure_increment', at='node')
# Instantiate components
extender = ListricKinematicExtender(grid,
extension_rate=extension_rate,
fault_dip=fault_dip,
fault_location=fault_loc,
detachment_depth=detachment_depth,
track_crustal_thickness=True,
)
cum_subs = grid.at_node['cumulative_subsidence_depth']
flexer = Flexure(grid, eet=eet, method='flexure')
deflection = grid.at_node['lithosphere_surface__elevation_increment']
# set up thickness and flexure
unit_wt = rhoc * g
thickness[:] = elev - crust_datum
load[:] = unit_wt * thickness
flexer.update()
init_flex = deflection.copy()
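# The copy above is kept so that later steps can report only the *net*
# isostatic response to extension (see net_deflection below), not the
# deflection caused by the initial load.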
# show initial deflection field (positive downward)
imshow_grid(grid, init_flex)
for i in range(nsteps):
extender.run_one_step(dt)
load[:] = unit_wt * thickness
flexer.update()
net_deflection = deflection - init_flex
elev[:] = crust_datum + thickness - (cum_subs + net_deflection)
imshow_grid(grid, thickness)
imshow_grid(grid, net_deflection)
imshow_grid(grid, cum_subs)
imshow_grid(grid, elev)
plt.plot(elev.reshape(31, 51)[:,10], label='Rift shoulder')
plt.plot(elev.reshape(31, 51)[:, 12], label='Rift basin')
plt.plot(-net_deflection.reshape(31, 51)[:, 10], label='Isostatic uplift profile')
plt.xlabel('North-south distance (km)')
plt.ylabel('Height (m)')
plt.legend()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Describing subsidence due to fault motion
Step2: Numerical implementation
Step3: Example 2
Step4: Example 3
Step5: Example 4
Step6: Example 5
|
6,560
|
<ASSISTANT_TASK:>
Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
%%bash
pip freeze | grep google-cloud-bigquery==1.6.1 || \
pip install google-cloud-bigquery==1.6.1
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
%%bigquery
CREATE OR REPLACE MODEL
babyweight.model_4
OPTIONS (
# TODO: Add DNN options
INPUT_LABEL_COLS=["weight_pounds"],
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
# TODO: Add base features and label
FROM
babyweight.babyweight_data_train
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_4)
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.model_4,
(
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_data_eval
))
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL babyweight.model_4,
(
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_data_eval
))
%%bigquery
CREATE OR REPLACE MODEL
babyweight.final_model
TRANSFORM(
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
# TODO: Add FEATURE CROSS of:
# is_male, bucketed_mother_age, plurality, and bucketed_gestation_weeks
OPTIONS (
# TODO: Add DNN options
INPUT_LABEL_COLS=["weight_pounds"],
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
*
FROM
babyweight.babyweight_data_train
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.final_model)
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.final_model,
(
SELECT
*
FROM
babyweight.babyweight_data_eval
))
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL babyweight.final_model,
(
SELECT
*
FROM
babyweight.babyweight_data_eval
))
%%bigquery
SELECT
*
FROM
ML.PREDICT(MODEL babyweight.final_model,
(
SELECT
# TODO Add base features example from original dataset
))
%%bigquery
SELECT
*
FROM
ML.PREDICT(MODEL babyweight.final_model,
(
SELECT
# TODO Add base features example from simulated dataset
))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Verify tables exist
Step2: Lab Task #1
Step3: Get training information and evaluate
Step4: Now let's evaluate our trained model on our eval dataset.
Step5: Let's use our evaluation's mean_squared_error to calculate our model's RMSE.
Step6: Lab Task #2
Step7: Let's first look at our training statistics.
Step8: Now let's evaluate our trained model on our eval dataset.
Step9: Let's use our evaluation's mean_squared_error to calculate our model's RMSE.
Step10: Lab Task #3
Step11: Modify above prediction query using example from simulated dataset
|
6,561
|
<ASSISTANT_TASK:>
Python Code:
import pandas
import numpy
import toyplot
import toyplot.pdf
import toyplot.png
import toyplot.svg
print('Pandas version: ', pandas.__version__)
print('Numpy version: ', numpy.__version__)
print('Toyplot version: ', toyplot.__version__)
column_names = ['MPG',
'Cylinders',
'Displacement',
'Horsepower',
'Weight',
'Acceleration',
'Model Year',
'Origin',
'Car Name']
data = pandas.read_table('auto-mpg.data',
delim_whitespace=True,
names=column_names,
index_col=False)
data['Make'] = data['Car Name'].str.split().str.get(0)
data.ix[data['Make'] == 'chevroelt', 'Make'] = 'chevrolet'
data.ix[data['Make'] == 'chevy', 'Make'] = 'chevrolet'
data.ix[data['Make'] == 'maxda', 'Make'] = 'mazda'
data.ix[data['Make'] == 'mercedes-benz', 'Make'] = 'mercedes'
data.ix[data['Make'] == 'vokswagen', 'Make'] = 'volkswagen'
data.ix[data['Make'] == 'vw', 'Make'] = 'volkswagen'
average_mpg_per_make = data.pivot_table(columns='Make',
values='MPG',
aggfunc='mean')
len(average_mpg_per_make.index)
count_mpg_per_make = data.pivot_table(columns='Make',
values='MPG',
aggfunc='count')
filtered_mpg = \
average_mpg_per_make[count_mpg_per_make >= 10]. \
sort_values(ascending=False)
filtered_mpg
make_to_index = pandas.Series(index=filtered_mpg.index,
data=xrange(0, len(filtered_mpg)))
data['Make Index'] = numpy.array(make_to_index[data['Make']])
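
# Hedged check: makes filtered out above should now carry NaN in 'Make Index'
# (indexing with missing labels yielding NaN is the older-pandas behavior this
# notebook already relies on via dropna() below):
print data['Make Index'].isnull().sum()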
canvas = toyplot.Canvas('4in', '2.6in')
axes = canvas.cartesian(bounds=(41,-9,6,-58),
ylabel = 'MPG')
axes.scatterplot(data.dropna()['Make Index'],
data.dropna()['MPG'],
marker='-',
size=15,
opacity=0.75)
# Label the x axis on the make. This is a bit harder than it should be.
axes.x.ticks.locator = \
toyplot.locator.Explicit(labels=filtered_mpg.index)
axes.x.ticks.labels.angle = 45
# It's usually best to make the y-axis 0-based.
axes.y.domain.min = 0
toyplot.pdf.render(canvas, 'Detail.pdf')
toyplot.svg.render(canvas, 'Detail.svg')
toyplot.png.render(canvas, 'Detail.png', scale=5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in the "auto" dataset. This is a fun collection of data on cars manufactured between 1970 and 1982. The source for this data can be found at https
Step2: For this analysis I am going to group data by the car maker. The make is not directly stored in the data, but all the names start with the make, so extract the first word in that column.
Step3: The data has some inconsistencies with the make strings (misspellings or alternate spellings). Do some simple fixes.
Step4: In this plot we are going to show the average miles per gallon (MPG) rating for each car maker. We can use the pivot_table feature of pandas to get this information from the data. (Excel and other spreadsheets have similar functionality.)
Step5: There are many different makers represented in this data set, but several have only a few cars and are therefore perhaps not a significant sample. Filter out the car makers that have fewer than 10 entries in the data. (Mostly I'm doing this to make these examples fit better, even though it works OK with all the data, too.)
Step6: Add a column with a car-maker index so that we can plot by index. Note that we have filtered the makes to those manufacturers with at least 10 models, so any make with fewer than 10 models is filtered out.
Step7: Now use toyplot to plot the MPG of every car (that matches our criteria), organized by manufacturer.
|
6,562
|
<ASSISTANT_TASK:>
Python Code:
from IPython.core.display import HTML
css_file = 'https://raw.githubusercontent.com/ngcm/training-public/master/ipython_notebook_styles/ngcmstyle.css'
HTML(url=css_file)
%matplotlib inline
import math
import numpy
from matplotlib import pyplot, animation
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
rcParams['figure.figsize'] = (12,6)
import scipy
from scipy.optimize import minimize
from __future__ import division
class Agent(object):

    def __init__(self, location, velocity, C=1, A=5, S=0.1):
        pass  # Complete this

    def step(self, dt):
        pass  # And this

    def steer(self, neighbours):
        pass  # And this


class Flock(object):

    def __init__(self, locations, velocities, rl=1):
        pass  # Complete this

    def step(self, dt):
        pass  # And this

    def locations(self):
        pass  # And this

    def velocities(self):
        pass  # And this

    def average_location(self):
        pass  # And this

    def average_velocity(self):
        pass  # And this

    def average_width(self):
        pass  # And this
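
# A minimal hedged sketch of the forward-Euler update z <- z + dt * v that
# Agent.step is meant to perform, written standalone so it runs even while
# the skeleton is unfinished (one possible answer, not the answer):
loc = numpy.array([0.0, 0.0])
vel = numpy.array([1.0, 0.5])
dt = 0.01
loc = loc + dt * vel
print(loc)  # expect [0.01, 0.005]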
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Flocking behaviour
Step2: Define a class called Agent. This should store the 2-dimensional ${\bf z} = (x, y)$ location of the individual, and its 2-dimensional velocity ${\bf v} = (v_x, v_y)$. It should have a method that updates its location over a timestep $\Delta t$ as ${\bf z} \to {\bf z} + \Delta t \, {\bf v}$.
Step3: Test that the Agent steers as expected given a list of zero or one local flockmates.
|
6,563
|
<ASSISTANT_TASK:>
Python Code:
from jax import grad, jit, vmap
import jax.numpy as np
import numpy as np2
#2D coordinates of locations of demand or supply
x_n = np.array([[-0.97,-80.7],
[-1.05, -80.45],
[-2.15, -79.92],
[-1.81, -79.53],
[-1.03, -79.47]])
#Quantities of demand/supply for each location
quantities = np.array([[250],[200],[700],[150],[300]])
#Cost/(unit x distance) from each location to depot/distribution center
costs = np.array([[1],[1],[1],[1],[1]])
def distances(x,y):
    '''Return the distances from a candidate point (x, y) to every
    demand/supply location in x_n.
    Input: two 2D numpy arrays with the x and y coordinates of the point
    Output: column vector of distances, one per location'''
x_rp = np.repeat(x,x_n.shape[0],0).reshape(-1,1)
y_rp = np.repeat(y,x_n.shape[0],0).reshape(-1,1)
dist_x = (x_rp - x_n[:,:1])**2
dist_y = (y_rp - x_n[:,1:2])**2
return np.sqrt(dist_x+dist_y).reshape((-1,1))
def cost_function(x_0):
    '''Calculate the total transport cost for a depot/distribution-center
    location x_0.
    Input: 2D numpy array
    Output: total cost'''
x = np.array([[x_0[0,0]]])
y = np.array([[x_0[0,1]]])
dist = distances(x,y)
dist_costo = quantities*costs*dist
return np.sum(dist_costo)
gradient_funcion = jit(grad(cost_function)) #jit (just in time) compile makes faster the evaluation of the gradient.
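
# Hedged smoke test: evaluate the cost and its gradient at an arbitrary trial
# depot location (the coordinates here are illustrative only):
x_trial = np.array([[0.0, -80.0]])
print(cost_function(x_trial))
print(gradient_funcion(x_trial))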
def optimize(funtion_opt, grad_fun, x_0, method, n_iter):
    '''Input:
    funtion_opt: function to minimize
    grad_fun: gradient of the function to minimize
    x_0: initial 2D coordinates of the depot/distribution center
    method: optimization method, either 'newton' or 'grad_desc'
    n_iter: number of iterations of the method
    --------------
    Output:
    xs: list of x coordinates for each iteration
    ys: list of y coordinates for each iteration
    fs: list of costs for each iteration'''
#Create empty lists to fill with iteration values
xs = []
ys = []
fs = []
#Add the initial location
xs.append(x_0[0,0])
ys.append(x_0[0,1])
fs.append(cost_function(x_0))
for i in range(n_iter):
if method == 'newton':
loss_val = funtion_opt(x_0)
loss_vec = np.array([[loss_val, loss_val]])
x_0 -= 0.005*loss_vec/grad_fun(x_0)
elif method == 'grad_desc':
step = 0.0001*grad_fun(x_0)
x_0 -= step
xs.append(x_0[0,0])
ys.append(x_0[0,1])
fs.append(cost_function(x_0))
return xs, ys, fs
# Initial location of the depot/distribution center
x0=np.array([[4.0,-84.0]])
print("Initial Cost: {:0.2f}".format(cost_function(x0 ) ))
xs, ys, fs = optimize(cost_function, gradient_funcion, x0, 'grad_desc', 100)
print("Final Cost: {:0.2f}".format(fs[-1]))
from mpl_toolkits import mplot3d
import matplotlib.pyplot as plt
# We must modify how inputs are fed to the cost function so it can be
# evaluated over a grid of x and y coordinates
def cost_function_2(x,y):
dist = distances(x,y)
dist_costo = quantities*costs*dist
return np.sum(dist_costo)
FIGSIZE = (9, 7)
xs = np.array(xs).reshape(-1,)
ys = np.array(ys).reshape(-1,)
fs = np.array(fs)
X, Y = np2.meshgrid(np2.linspace(-5., 5., 50), np2.linspace(-84., -74., 50))
func_vec = np2.vectorize(cost_function_2)
f = func_vec(X,Y)
indices = (slice(None, None, 4), slice(None, None, 4))
fig = plt.figure(figsize=FIGSIZE)
ax = plt.axes(projection='3d', azim=10,elev=10)
ax.plot_surface(X, Y, f, shade=True, linewidth=2, antialiased=True,alpha=0.5)
ax.plot3D(xs, ys, fs, color='black', lw=4)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we must define the function that we want to minimize
Step2: With the function defined we can calculate its gradient with JAX
Step3: Now let's define the procedure that applies gradient descent or Newton's method
Step4: Let's minimize with gradient descent
Step5: Now let's plot the trajectory of the optimization procedure.
|
6,564
|
<ASSISTANT_TASK:>
Python Code:
import sys
import os
from time import time
import re
import pickle
sys.path.append("ud120-projects/tools/")
sys.path.append("ud120-projects/final_project/")
#sys.path.append("ud120-projects/maildir/")
import numpy as np
import pandas as pd
#from matplotlib import pyplot as plt
#import seaborn as sns
#%matplotlib inline
from feature_format import featureFormat, targetFeatureSplit
from tester import test_classifier, dump_classifier_and_data
from sklearn.cross_validation import train_test_split
from sklearn.cross_validation import StratifiedShuffleSplit
from sklearn.preprocessing import Imputer
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import RandomizedPCA
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import make_union
from sklearn.pipeline import make_pipeline
from sklearn.grid_search import GridSearchCV
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.cluster import KMeans
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
### Load the dictionary containing the dataset
data_dict = pickle.load(open("ud120-projects/final_project/final_project_dataset.pkl", "r") )
### Task 1: Select what features you'll use.
### features_list is a list of strings, each of which is a feature name.
### The first feature must be "poi".
features_list = ['poi','salary'] # You will need to use more features
### Task 2: Remove outliers
### Task 3: Create new feature(s)
### Store to my_dataset for easy export below.
my_dataset = data_dict
### Extract features and labels from dataset for local testing
data = featureFormat(my_dataset, features_list, sort_keys = True)
labels, features = targetFeatureSplit(data)
### Task 4: Try a variety of classifiers
### Please name your classifier clf for easy export below.
### Note that if you want to do PCA or other multi-stage operations,
### you'll need to use Pipelines. For more info:
### http://scikit-learn.org/stable/modules/pipeline.html
clf = GaussianNB() # Provided to give you a starting point. Try a variety of classifiers.
### Task 5: Tune your classifier to achieve better than .3 precision and recall
### using our testing script.
### Because of the small size of the dataset, the script uses stratified
### shuffle split cross validation. For more info:
### http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.StratifiedShuffleSplit.html
test_classifier(clf, my_dataset, features_list)
### Dump your classifier, dataset, and features_list so
### anyone can run/check your results.
dump_classifier_and_data(clf, my_dataset, features_list)
print my_dataset.keys()[0]
my_dataset.itervalues().next()
### Task 1: Select what features you'll use.
### features_list is a list of strings, each of which is a feature name.
### The first feature must be "poi".
names = np.array(my_dataset.keys())
print "number of names in data: ", names.shape
print "\nfirst five names:\n", names[:5]
features_list = my_dataset.itervalues().next().keys()
features_list.sort()
features_list.remove('poi')
features_list.insert(0, 'poi')
features_list.remove('email_address')
print "\nfeatures:\n", features_list
### convert dictionary to pandas dataframe
df = pd.DataFrame([entry for entry in my_dataset.itervalues()])
df = df.drop('email_address', axis=1)
df = df[features_list]
#df.dtypes
#df.describe()
#df.count()
df.poi = df.poi.astype('int')
df = df.convert_objects(convert_numeric=True)
for col in list(df.columns):
df[col] = df[col].round(decimals=3)
print "POI Count:\n", df.poi.value_counts()
df.head()
df.describe()
# create labels
y = df.poi.values
print y.shape
print y[:5]
# create initial features
X = df.drop('poi', axis=1).values
print X.shape
print X[:3]
### Task 2: Remove outliers
# hand-tuned to remove ~5% (in this case, 7%)
num_rows = X.shape[0]
num_cols = X.shape[1]
rows_to_remove = set()
for i in xrange(num_cols):
point_five_percentile = np.percentile(X[:,i], 0.5)
ninety_nine_point_five_percentile = np.percentile(X[:,i], 99.5)
for j in xrange(num_rows):
if X[j,i] < point_five_percentile or X[j,i] > ninety_nine_point_five_percentile:
rows_to_remove.add(j)
print X.shape
X = np.delete(X, list(rows_to_remove), axis=0)
y = np.delete(y, list(rows_to_remove))
print "names associated with outlier-containing rows to remove:"
for i in rows_to_remove:
print "\t",names[i], " (poi? {})".format(y[i])
names = np.delete(names, list(rows_to_remove))
print "\nnew X shape: ", X.shape
print "\nnew y shape: ", y.shape
print "\ntotal rows removed: ", len(rows_to_remove), "({})".format(round(len(rows_to_remove)/float(num_rows), 2))
# impute 'NaN' values to column means
imp = Imputer(missing_values='NaN', strategy='median', axis=0)
imp.fit(X)
X = imp.transform(X)
print X[:3]
imp_values = imp.statistics_
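# Hedged sanity check that imputation left no NaNs behind (illustrative only):
print "any NaNs left after imputation?", np.isnan(X).any()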
### Task 3: Create new feature(s)
def selectkbest():
# select K best to explore feature engineering possibilities
selector = SelectKBest().fit(X, y)
features = features_list[1:]
high_scores = []
print "SelectKBest SCORES:"
selectkbest_scores = np.round(selector.scores_, 2)
for i in xrange(len(features)):
print "\t", features[i], ": ", selectkbest_scores[i]
if selectkbest_scores[i] > 2:
high_scores.append(i)
print "\nSelectKBest HIGH SCORES:"
for i in high_scores:
print "\t", features[i], "[{}]".format(i), ": ", selectkbest_scores[i]
selectkbest()
def create_new_feature(X, col1, col2, operation, feature_name):
features_list.append(feature_name)
new_col = []
if operation == '*':
new_col = (X[:,col1] * X[:,col2])
elif operation == '/':
new_col = np.true_divide(X[:,col1], X[:, col2])
new_col.shape = (new_col.shape[0], 1)
#print new_col.shape
X = np.hstack((X, new_col))
#print X.shape
return X
X = create_new_feature(X, 0, 14, '*', 'selectkbest_product')
selectkbest()
corr_df = pd.DataFrame(X)
corr_df['label'] = y
corr_df.corr()['label'].values[:-1]
corr = corr_df.corr()['label'].values[:-1]
for i in xrange(len(corr)):
if abs(corr[i]) > 0.2:
print i, abs(corr[i])
### Task 4: Try a variety of classifiers
### Please name your classifier clf for easy export below.
### Note that if you want to do PCA or other multi-stage operations,
### you'll need to use Pipelines. For more info:
### http://scikit-learn.org/stable/modules/pipeline.html
def grid_searcher(clf, pca_skb, output):
t0 = time()
even_range = range(2,X.shape[1],2)
random_state = [42]
t_or_f = [True, False]
#powers_of_ten = [10**x for x in range(-5,5)]
logspace = np.logspace(-5, 5, 10)
#kernels = ['linear', 'poly', 'rbf', 'sigmoid'] # takes too long, unfortunately
kernels = ['rbf']
criteria = ['gini', 'entropy']
splitters = ['best', 'random']
max_features = ['auto', 'sqrt', 'log2', None]
# modify features, remove features via pipeline
pipeline = []
params = dict()
pipeline_clf = ""
if pca_skb == "pca_skb":
#pipeline = make_pipeline(MinMaxScaler(), make_union(RandomizedPCA(), SelectKBest()), clf)
pipeline = make_pipeline(StandardScaler(), make_union(RandomizedPCA(), SelectKBest()), clf)
params = dict(featureunion__randomizedpca__n_components = even_range,
featureunion__randomizedpca__iterated_power = [1, 2, 3],
featureunion__randomizedpca__whiten = t_or_f,
featureunion__randomizedpca__random_state = random_state,
featureunion__selectkbest__k = even_range)
elif pca_skb == "pca":
#pipeline = make_pipeline(MinMaxScaler(), RandomizedPCA(), clf)
pipeline = make_pipeline(StandardScaler(), RandomizedPCA(), clf)
params = dict(randomizedpca__n_components = [4],
randomizedpca__iterated_power = [1, 2, 3],
randomizedpca__whiten = t_or_f,
randomizedpca__random_state = random_state)
elif pca_skb == "skb":
#pipeline = make_pipeline(MinMaxScaler(), SelectKBest(), clf)
pipeline = make_pipeline(StandardScaler(), SelectKBest(), clf)
params = dict(selectkbest__k = [4])
pipeline_clf = pipeline.steps[2][0]
if pipeline_clf == 'decisiontreeclassifier' or pipeline_clf == 'randomforestclassifier':
params["{}__criterion".format(pipeline_clf)] = criteria
#params["{}__splitter".format(pipeline_clf)] = splitters
params["{}__max_features".format(pipeline_clf)] = max_features
#params["{}__min_samples_split".format(pipeline_clf)] = even_range
params["{}__class_weight".format(pipeline_clf)] = ['auto', None]
params["{}__random_state".format(pipeline_clf)] = random_state
if pipeline_clf == 'svc':
params['svc__C'] = logspace
params['svc__kernel'] = kernels
#params['svc__degree'] = [1,2,3,4,5] # for use with 'poly'
params['svc__gamma'] = logspace
params['svc__random_state'] = random_state
# cross validation
cv = StratifiedShuffleSplit(y, test_size=0.2, random_state=random_state[0])
# tune parameters
grid_search = GridSearchCV(pipeline, param_grid=params, n_jobs=1, cv=cv)
grid_search.fit(X, y)
if output == True:
print "*"*15, pipeline_clf.upper(), "*"*15
print "\nBEST SCORE: ", grid_search.best_score_, "\n"
print "\nBEST PARAMS: ", grid_search.best_params_, "\n"
# split into training and testing data for reporting results
if output == True:
print "#"*50
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=random_state[0])
if output == True:
print "\nBEST ESTIMATOR:"
clf = grid_search.best_estimator_
if output == True:
print clf
clf.fit(X_train, y_train)
if pca_skb == "skb" or pca_skb == "pca_skb":
if output == True:
print "\nSelectKBest SCORES:"
features = features_list[1:]
selectkbest_scores = clf.steps[1][1].scores_ if pca_skb == "skb" else clf.steps[1][1].transformer_list[1][1].scores_
selectkbest_scores = np.round(selectkbest_scores, 2)
for i in xrange(len(features)):
if output == True:
print "\t", features[i], ": ", selectkbest_scores[i]
if pipeline_clf == 'decisiontreeclassifier' or pipeline_clf == 'randomforestclassifier':
if output == True:
print "\n{} FEATURE IMPORTANCES:".format(pipeline_clf.upper())
print clf.steps[2][1].feature_importances_
if output == True:
print "\n", "#"*50
print "\nPREDICTIONS:"
print "\nground truth:\n\t", y_test
y_pred = clf.predict(X_test)
if output == True:
print "\npredictions:\n\t", y_pred
print "\nscore: ", clf.score(X_test, y_test)
print "\nEVALUATIONS:"
print "\nconfusion matrix:\n", confusion_matrix(y_test, y_pred)
print "\nclassification report:\n", classification_report(y_test, y_pred, target_names=["non-poi", "poi"])
print "ELAPSED TIME: ", round(time()-t0,3), "s"
return clf
# prepare for Udacity tester
# remove emails
for key in my_dataset.keys():
my_dataset[key].pop('email_address')
# remove outliers from original data set
for key in my_dataset.keys():
if key not in names:
my_dataset.pop(key)
# replace 'NaN's
for key in my_dataset.keys():
for sub_key in my_dataset[key].keys():
if my_dataset[key][sub_key] == 'NaN':
i = (df.columns.get_loc(sub_key) - 1)
my_dataset[key][sub_key] = imp_values[i]
# add created feature
i = 0
for key in my_dataset.keys():
my_dataset[key]['selectkbest_product'] = X[i,-1]
i += 1
def udacity_tester():
# use Udacity tester
print "\nUDACITY TESTER RESULTS: "
test_classifier(clf, my_dataset, features_list)
for classifier in [GaussianNB(), DecisionTreeClassifier()]:
    for transformer in ['pca', 'skb', 'pca_skb']:
        clf = grid_searcher(classifier, transformer, output=True)
udacity_tester()
#dump_classifier_and_data(clf, my_dataset, features_list)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clfs = dict()
for classifier in [GaussianNB(), DecisionTreeClassifier()]:
clfs[str(classifier)] = dict()
for transformer in ['pca', 'skb', 'pca_skb']:
clf = grid_searcher(classifier, transformer, output=False)
clf.fit(X_train, y_train)
clfs[str(classifier)][transformer] = dict()
y_pred = clf.predict(X_test)
clfs[str(classifier)][transformer]['predictions'] = y_pred
clf_report = classification_report(y_test, y_pred, target_names=["non-poi", "poi"])
clfs[str(classifier)][transformer]['clf_report'] = clf_report
clfs['GaussianNB()']['pca']['predictions']
clfs['GaussianNB()']['pca']['clf_report']
best_predictions = dict()
for clsfr in clfs.keys():
best_predictions[clsfr] = dict()
#print clsfr
best_f1_score = 0.0
for transformer in clfs[clsfr].keys():
clf_report = clfs[clsfr][transformer]['clf_report']
clf_report = [re.sub(r"[a-z]|\n", '', x) for x in clf_report.split(" ")]
clf_report = filter(None, clf_report)
#print "\t", transformer
f1_score = float(clf_report[-2])
#print f1_score
if f1_score > best_f1_score:
best_f1_score = f1_score
best_predictions[clsfr]['transformer'] = transformer
best_predictions[clsfr]['f1_score'] = f1_score
best_predictions[clsfr]['predictions'] = clfs[clsfr][transformer]['predictions']
averaged_best_recall_predictions = np.zeros((28,))
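# Note: np.maximum over 0/1 prediction vectors acts as an element-wise
# logical OR, so a record ends up flagged POI if *any* best classifier flags it.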
for clsfr in best_predictions.keys():
print clsfr[:10]
print "\tbest transformer: ", best_predictions[clsfr]['transformer']
print "\tbest f1 score: ", best_predictions[clsfr]['f1_score']
print "\tbest predictions: ", best_predictions[clsfr]['predictions']
averaged_best_recall_predictions = np.maximum(averaged_best_recall_predictions, best_predictions[clsfr]['predictions'])
print "\naveraged best predictions: {}".format(averaged_best_recall_predictions.astype('int'))
print "\nresulting confusion matrix:\n", confusion_matrix(y_test, averaged_best_recall_predictions)
print "\nresulting classification report:\n", classification_report(y_test, averaged_best_recall_predictions, target_names=["non-poi", "poi"])
# compare individuals represented by the financial data and by the email-corpus data
directory_names = []
poi_directory_names = []
true_count = 0
false_count = 0
for key in my_dataset.keys():
names = key.lower().split(' ')
dirname = names.pop(0)
if len(names) > 0:
dirname = dirname + "-" + names[0][0]
exist = os.path.exists('/Users/excalibur/Dropbox/datasets/maildir/{}'.format(dirname))
#print dirname, "\n\temails exist: ", exist, names
directory_names.append(dirname)
if exist:
true_count += 1
if my_dataset[key]['poi'] == True:
poi_directory_names.append(dirname)
else:
false_count += 1
print "email directories matching individuals represented by financial data:"
print "\texist: ", true_count, "(POIs: {})".format(len(poi_directory_names))
print "\tdon't exist: ", false_count
#print sorted(directory_names)
email_corpus_dir = '/Users/excalibur/Dropbox/datasets/maildir/'
email_dirs = os.listdir(email_corpus_dir)
print "number of email directories: ", len(email_dirs)
print "\nfirst five email directories:\n", email_dirs[:5]
email_text = dict()
for email_dir in email_dirs:
for dirpath, dirnames, filenames in os.walk(email_corpus_dir + email_dir):
for dirname in dirnames:
for filename in filenames:
path = dirpath + "/" + dirname + "/" + filename
if os.path.isfile(path):
with open(dirpath + "/" + dirname + "/" + filename, 'r') as f:
read_data = f.readlines()
if email_dir not in email_text:
email_text[email_dir] = ''.join(read_data[15:])
else:
email_text[email_dir] += ''.join(read_data[15:])
#print email_text['white-s']
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: sklearn imports
Step2: load data
Step3: original classifier
Step4: data-record snapshot
Step5: feature selection
Step6: data-format conversion
Step7: separate labels from features
Step8: outlier removal
Step9: 'NaN' imputation
Step10: feature creation
Step11:
Step12: machine learn!
Step13: reminder
Step14: new classifier results
Step15: data dump for Udacity
Step16: average best-scoring predictions
Step17: Epilogue
Step18: Unfortunately, it seems clear that it would be difficult to join the two data sets in a meaningful way due to their lack of overlap.
|
6,565
|
<ASSISTANT_TASK:>
Python Code:
import os
import re
import sys
import numpy as np
from eva_cttv_pipeline.clinvar_xml_utils import *
from eva_cttv_pipeline.clinvar_identifier_parsing import *
%matplotlib inline
import matplotlib.pyplot as plt
PROJECT_ROOT = '/home/april/projects/opentargets/complex-events'
# dump of all records with no functional consequences and no complete coordinates
# uses June consequence pred + ClinVar 6/26/2021
no_consequences_path = os.path.join(PROJECT_ROOT, 'no-conseq_no-coords.xml.gz')
dataset = ClinVarDataset(no_consequences_path)
def count_hgvs(dataset, regex_dict, exclusive=False, limit=None, include_no_hgvs=True, include_none=True):
    """Counts records in dataset with HGVS matching a collection of regexes.
    Can be exclusive or non-exclusive counts (see below).
    If limit is provided, will count at most that many records (useful for testing).
    Notes:
    * records with multiple HGVS expressions need at least one matching a given regex to be counted once
    * can also count measures with no HGVS and ones that match none of the regexes (only if not exclusive)
    * non-exclusive => record has an HGVS expression that matches this regex.
      "If we do support X, how many records could we get?"
    * exclusive => record _only_ has HGVS expressions that match this regex (out of this collection).
      "If we don't support X, how many records must we lose?"
    """
n = 0
# just use a dict instead of a counter, so we have a predictable key order
results = {k: 0 for k in regex_dict}
if include_no_hgvs:
results['no hgvs'] = 0
if not exclusive and include_none:
results['none'] = 0
for record in dataset:
if not record.measure:
continue
if not record.measure.hgvs:
if include_no_hgvs:
results['no hgvs'] += 1
continue
hs = [h for h in record.measure.hgvs if h is not None]
n += 1
if limit and n > limit:
break
temp_results = {
k: any(r.match(h) for h in hs)
for k, r in regex_dict.items()
}
any_match = False
for k in regex_dict:
if exclusive:
if temp_results[k] and not any(temp_results[j] for j in regex_dict if j != k):
results[k] += 1
else:
if temp_results[k]:
results[k] += 1
any_match = True
if not exclusive and include_none and not any_match:
results['none'] += 1
return results
def print_example_matches(dataset, regex_dict, size=1, limit=None, include_none=True):
    """Like count_hgvs but returns (size) example matches for each regex where possible."""
n = 0
all_matches = {k: [] for k in regex_dict}
if include_none:
all_matches['none'] = []
for record in dataset:
if not record.measure or not record.measure.hgvs:
continue
hs = [h for h in record.measure.hgvs if h is not None]
n += 1
if limit and n > limit:
break
for h in hs:
any_match = False
for k, r in regex_dict.items():
if r.match(h):
all_matches[k].append(h)
any_match = True
if not any_match and include_none:
all_matches['none'].append(h)
result = {
k: [v[i] for i in np.random.choice(len(v), size=min(len(v), size), replace=False)] if v else []
for k, v in all_matches.items()
}
for k in result:
print(k)
for s in result[k]:
print(f' {s}')
print('\n==========\n')
# be more lenient than what we currently in identifier_parsing
# for example this allows things like `chr11` or `LRG_199p1`
sequence_identifier = r'[a-zA-Z0-9_.]+:'
seq_type_dict = {
'coding': re.compile(sequence_identifier + r'c\.'),
'genomic': re.compile(sequence_identifier + r'g\.'),
'non-coding': re.compile(sequence_identifier + r'n\.'), # transcript but not coding for a protein
'protein': re.compile(sequence_identifier + r'p\.'),
'mitochondrial': re.compile(sequence_identifier + r'm\.'),
'circular': re.compile(sequence_identifier + r'o\.'),
'RNA': re.compile(sequence_identifier + r'r\.'),
}
print_example_matches(dataset, seq_type_dict, size=5)
seq_type_counts = count_hgvs(dataset, seq_type_dict, exclusive=False)
plt.figure(figsize=(15,7))
plt.title('Sequence type (non-exclusive)')
plt.bar(seq_type_counts.keys(), seq_type_counts.values())
seq_type_counts
# coding or non-coding
2192 + 225
# have hgvs in general
17649 - 4030
seq_type_counts_exclusive = count_hgvs(dataset, seq_type_dict, exclusive=True)
plt.figure(figsize=(15,7))
plt.title('Sequence type (exclusive)')
plt.bar(seq_type_counts_exclusive.keys(), seq_type_counts_exclusive.values())
seq_type_counts_exclusive
genomic_sequence = f'^{sequence_identifier}g\.'
all_other_sequence = f'^{sequence_identifier}[a-fh-z]\.'
# double-counts hybrid things, e.g.
# * NC_000013.9:g.93703239_93802554del99316insCTA
# * NC_000016.9:g.2155486_2155487ins2145304_2155487inv
variant_regex = {
'substitution (genomic)': re.compile(f'{genomic_sequence}.*?>.*?'),
'deletion (genomic)': re.compile(f'{genomic_sequence}.*?del(?!ins).*?'),
'duplication (genomic)': re.compile(f'{genomic_sequence}.*?dup.*?'),
'insertion (genomic)': re.compile(f'{genomic_sequence}.*?(?<!del)ins.*?'),
'inversion (genomic)': re.compile(f'{genomic_sequence}.*?inv.*?'),
'delins (genomic)': re.compile(f'{genomic_sequence}.*?delins.*?'),
'substitution (other)': re.compile(f'{all_other_sequence}.*?>.*?'),
'deletion (other)': re.compile(f'{all_other_sequence}.*?del(?!ins).*?'),
'duplication (other)': re.compile(f'{all_other_sequence}.*?dup.*?'),
'insertion (other)': re.compile(f'{all_other_sequence}.*?(?<!del)ins.*?'),
'inversion (other)': re.compile(f'{all_other_sequence}.*?inv.*?'),
'delins (other)': re.compile(f'{all_other_sequence}.*?delins.*?'),
}
print_example_matches(dataset, variant_regex, size=5)
variant_counts = count_hgvs(dataset, variant_regex, include_no_hgvs=False, exclusive=False)
plt.figure(figsize=(15,7))
plt.title('Variant type')
plt.xticks(rotation='vertical')
plt.bar(variant_counts.keys(), variant_counts.values())
variant_counts
genomic_sequence = f'^{sequence_identifier}g\.'
coding_sequence = f'^{sequence_identifier}c\.'
noncoding_sequence = f'^{sequence_identifier}n\.'
other_sequence = f'^{sequence_identifier}[abdefh-mo-z]\.' # r'^' + sequence_identifier + r'[a-fh-z]\.'
num_range = r'[0-9]+_[0-9]+'
unk_range = r'[0-9?]+_[0-9?]+'
ch = r'[^?_+-]' # we allow characters on either side of the range, but none of this guff
# g.12345_12678del
def definite_range(sequence_type):
return re.compile(f'{sequence_type}{ch}*?{num_range}{ch}*?$')
# g.(123456_234567)_(345678_456789)del
def variable_range(sequence_type):
return re.compile(f'{sequence_type}{ch}*?\({num_range}\)_\({num_range}\){ch}*?$')
# g.(?_234567)_(345678_?)del
def unknown_range(sequence_type):
return re.compile(f'{sequence_type}{ch}*?(?=.*?\?.*?)\({unk_range}\)_\({unk_range}\){ch}*?$')
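
# A hedged illustration of what the three range builders match; the accession
# and coordinates below are chosen purely for demonstration:
assert definite_range(genomic_sequence).match('NC_000001.10:g.100_200del')
assert variable_range(genomic_sequence).match('NC_000001.10:g.(100_200)_(300_400)del')
assert unknown_range(genomic_sequence).match('NC_000001.10:g.(?_200)_(300_?)del')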
range_regex = {
'definite (genomic)': definite_range(genomic_sequence),
'variable (genomic)': variable_range(genomic_sequence),
'unknown (genomic)': unknown_range(genomic_sequence),
'definite (coding)': definite_range(coding_sequence),
'variable (coding)': variable_range(coding_sequence),
'unknown (coding)': unknown_range(coding_sequence),
'definite (noncoding)': definite_range(noncoding_sequence),
'variable (noncoding)': variable_range(noncoding_sequence),
'unknown (noncoding)': unknown_range(noncoding_sequence),
'definite (other)': definite_range(other_sequence),
'variable (other)': variable_range(other_sequence),
'unknown (other)': unknown_range(other_sequence),
}
print_example_matches(dataset, range_regex, size=5)
range_counts = count_hgvs(dataset, range_regex, include_no_hgvs=False)
plt.figure(figsize=(15,7))
plt.xticks(rotation='vertical')
plt.title('Ranges')
plt.bar(range_counts.keys(), range_counts.values())
range_counts
# genomic ranges
1735 + 559 + 9311
# coding / noncoding ranges
264 + 79 + 58
def span_lengths(dataset, regex, limit=None):
    """Returns all span lengths for a given regex.
    This will take the first two captured groups of the regex, convert to
    integers, and subtract the two. It will NOT be smart.
    """
n = 0
all_spans = []
for record in dataset:
if not record.measure or not record.measure.hgvs:
continue
hs = [h for h in record.measure.hgvs if h is not None]
n += 1
if limit and n > limit:
break
for h in hs:
m = regex.match(h)
if m and m.group(1) and m.group(2):
span = int(m.group(2)) - int(m.group(1)) + 1
if span < 0:
print('negative span!!!', h)
else:
all_spans.append(span)
# presumably all hgvs expressions for one record have the same span, don't double count
break
return all_spans
# same as previous but with capturing groups added
def_range = r'([0-9]+)_([0-9]+)'
var_range = r'\([0-9?]+_([0-9]+)\)_\(([0-9]+)_[0-9?]+\)'
def_span_regex = re.compile(f'{genomic_sequence}{ch}*?{def_range}{ch}*?$')
var_span_regex = re.compile(f'{genomic_sequence}{ch}*?{var_range}{ch}*?$')
spans = span_lengths(dataset, def_span_regex) + span_lengths(dataset, var_span_regex)
# This is everything with a known minimum span - genomic reference sequence, X_Y or (?_X)_(Y_?)
print(len(spans))
print('Mean:', np.mean(spans))
print('Median:', np.median(spans))
print('Min:', np.min(spans))
print('Max:', np.max(spans))
# actually reasonable spans...
MAX_REASONABLE_SPAN = 20000 #100000
smaller_spans = [x for x in spans if x < MAX_REASONABLE_SPAN]
print(len(smaller_spans))
plt.figure(figsize=(15,10))
plt.grid(visible=True)
plt.title(f'Minimum Spans (less than {MAX_REASONABLE_SPAN})')
# first array is counts per bin
# second array is left edges of bins, plus last right edge
plt.hist(smaller_spans, bins=100)
# VEP acceptable spans
vep_spans = [x for x in spans if x < 5000]
print(len(vep_spans))
coding_sequence = r'^' + sequence_identifier + r'c\.'
other_sequence = r'^' + sequence_identifier + r'[abd-z]\.'
pivot = r'[-*]?[0-9]+'
offset = r'[+-][0-9]+'
endpoint = pivot + offset
num_range = f'{endpoint}_{endpoint}'
unk_range = f'(?:{endpoint}|\?)_(?:{endpoint}|\?)'
ch = r'[^?_+-]' # we allow characters on either side of the range, but none of this guff
irange_regex = {
'definite intron (coding)': re.compile(coding_sequence + f'{ch}*?{num_range}{ch}*?$'),
'variable intron (coding)': re.compile(coding_sequence + f'{ch}*?\({num_range}\)_\({num_range}\){ch}*?$'),
'unknown intron (coding)': re.compile(coding_sequence + f'{ch}*?(?=.*?\?.*?)\({unk_range}\)_\({unk_range}\){ch}*?$'),
'definite intron (other)': re.compile(other_sequence + f'{ch}*?{num_range}{ch}*?$'),
'variable intron (other)': re.compile(other_sequence + f'{ch}*?\({num_range}\)_\({num_range}\){ch}*?$'),
'unknown intron (other)': re.compile(other_sequence + f'{ch}*?(?=.*?\?.*?)\({unk_range}\)_\({unk_range}\){ch}*?$'),
}
print_example_matches(dataset, irange_regex, size=5, include_none=False)
irange_counts = count_hgvs(dataset, irange_regex, include_no_hgvs=False, include_none=False)
plt.figure(figsize=(15,7))
plt.title('Ranges')
plt.bar(irange_counts.keys(), irange_counts.values())
irange_counts
sum(irange_counts.values())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Detailed stats of HGVS in ClinVar
Step3: Sequence types
Step4: Variant types
Step5: Ranges
Step7: Span lengths
Step8: Intronic numbering
|
6,566
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'sandbox-3', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Reprenstation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
6,567
|
<ASSISTANT_TASK:>
Python Code:
# This whole business is totally unnecessary if your path is set up right. But if it's not,
# this is probably easier than actually fixing it.
%load_ext autoreload
import os
# prepend the separator so the new entry doesn't fuse with the existing last PATH entry
wireshark_path = os.pathsep + "C:\\Program Files\\Wireshark\\"
# or, if it's under 'program files(x86)'...
# wireshark_path = os.pathsep + "C:\\Program Files (x86)\\Wireshark\\"
os.environ['path'] += wireshark_path
from utilities import *
from pprint import *
%autoreload
pcap_folder = 'C:\\Users\\william.george\\Desktop\\SUA-Test-Data\\'
os.chdir(pcap_folder)
os.getcwd()
!dir
pcap_file = pcap_folder + 'test_2_merge.pcap'
output_file = pcap_folder + 'frame.len'
!tshark -n -r $pcap_file -T fields -Eheader=y -e frame.number -e frame.len > $output_file
import pandas as pd
%pylab inline
figsize(17,10)
import subprocess
import datetime
import pandas as pd
def read_pcap(filename, fields=[], display_filter=[],
timeseries=False, strict=False, outfile=None):
    """Read PCAP file into Pandas DataFrame object.
    Uses tshark command-line tool from Wireshark.
    filename: Name or full path of the PCAP file to read
    fields: List of fields to include as columns
    display_filter: Additional filter to restrict frames
    strict: Only include frames that contain all given fields
    (Default: false)
    timeseries: Create DatetimeIndex from frame.time_epoch
    (Default: false)
    Syntax for fields and display_filter is specified in
    Wireshark's Display Filter Reference:
    http://www.wireshark.org/docs/dfref/
    """
if timeseries:
fields = ["frame.time_epoch"] + fields
fieldspec = " ".join("-e %s" % f for f in fields)
display_filters = fields if strict else ['']
if display_filter:
display_filters += display_filter
display_filters = list(filter(None, display_filters))
# display_filter is concatenated with ' and '. If one or more filters
# need to be 'ORed' together, then supply them as a single string
# e.g. ['frame.len > 60', '(ip.addr == 10.10.10.10 or ip.addr == 20.20.20.20)']
# gives '-2 -R "frame.len > 60 and (ip.addr == 10.10.10.10 or ip.addr == 20.20.20.20)"'
filterspec = '-2 -R "%s"' % " and ".join(f for f in display_filters)
options = "-r %s -n -T fields -Eheader=y" % filename
cmd = "tshark %s %s %s" % (options, filterspec, fieldspec)
print('filterspec:{0}\n'.format(filterspec),
'display_filters:{0}\n'.format(display_filters),
'options:{0}\n'.format(options),
'cmd:{0}\n'.format(cmd)
)
proc_arguments = {'shell': True}
if outfile is not None:
with open(outfile, 'w') as f:
proc_arguments['stdout'] = f
proc = subprocess.Popen(cmd, **proc_arguments)
return outfile
else:
proc_arguments['stdout'] = subprocess.PIPE
proc = subprocess.Popen(cmd, **proc_arguments)
if timeseries:
df = pd.read_table(proc.stdout,
index_col = "frame.time_epoch",
parse_dates=True,
date_parser=datetime.datetime.fromtimestamp)
else:
df = pd.read_table(proc.stdout,
parse_dates='frame.time_epoch',
date_parser=datetime.datetime.fromtimestamp)
return df
# # original read call
# df=read_pcap(pcap_file, fields = ["frame.len", "ip.src", "ip.dst", 'tcp.stream', 'tcp.srcport', 'tcp.dstport'], timeseries=True).dropna()
# df
df=read_pcap(pcap_file, fields = ["frame.len", "ip.src", "ip.dst", 'tcp.stream', 'tcp.srcport', 'tcp.dstport'], display_filter=['ip', 'tcp'], timeseries=True, outfile=output_file)
df = pd.read_table(output_file, names=['time','len','ip.src','ip.dst','stream','tcp.src', 'tcp.dst'], skiprows=1)
import dateutil
sample_time = 1429133053.239977000
print(pd.to_datetime(sample_time, unit='s'))
df.time = pd.to_datetime(df.time, unit='s')
df[[True if x not in [0,1,2,3, 145, 141] else False for x in df['stream']]]
df2 = df.head(100)
df.head(100).to_json(date_unit='us')
df[df.stream == 1]
# THIS WHOLE BLOCK IS COMMENTED OUT BECAUSE I DON'T TRUST IT RIGHT NOW. THIS IS THE OLD WAY.
# flows = framelen.groupby(('tcp.stream', 'ip.src'))
# keys = sorted(list(flows.groups.keys()), key=lambda x: x[0])
# #list_streams = []
# #for key in keys:( # zip (iter(x),...)
# def f(x):
# print('running one time!')
# return pd.Series({'frame.len':x[0],'ip.src':x[1]})
# def extract_flow(flow):
# ipdst = flow['ip.dst'][0]
# tcpstrm = flow['tcp.stream'][0]
# tcpsrc = flow['tcp.srcport'][0]
# tcpdst = flow['tcp.dstport'][0]
# flow_Bps = flow.resample("S", how="sum")
# flow_filter = np.isnan(flow_Bps['tcp.dstport']) == False
# flow_Bps.loc[flow_filter, "tcp.stream" : "tcp.dstport"] = (tcpstrm, tcpsrc, tcpdst)
# return flow_Bps.loc[flow_filter]
# flow_list = []
# for key in keys:
# flow_list.append(extract_flow(flows.get_group(key)))
# pprint(flow_list[0].head(2))
# #stream_df = pd.DataFrame.from_records(stream_list)
# # stream1 = streams.get_group(keys[4])
# # extract_stream(stream1)
# # stream1 = streams.get_group(keys[3])
# # ostrm = stream1['tcp.stream'][0]
# # tcpsrc = stream1['tcp.srcport'][0]
# # tcpdst = stream1['tcp.dstport'][0]
# # ipdst = stream1['ip.dst'][0]
# # stream_Bps = stream1.resample("S", how="sum")
# # stream_filter = np.isnan(stream_Bps['tcp.dstport']) == False
# # stream_filter# is np.float64(np.nan))
# # #stream_Bps['tcp.srcport'] = 80
# # stream_Bps.loc[stream_filter, "tcp.stream" :"tcp.dstport"] = (ostrm, tcpsrc, tcpdst)
# # stream_Bps.loc[stream_filter]
# # # #help(streams)
# # # #stream1
bytes_per_second = framelen.resample("S").sum()
help(framelen.resample)
bytes_per_second.sort_values('tcp.stream')
framelen.sort_values('tcp.stream').dropna()
#bytes_per_second.groupby("tcp.stream")["frame.len"].sum().sort('tcp.len',ascending=False,inplace=False).head(10)
#bytes_per_second.groupby('tcp.stream')['frame.len'].sum()
plt = (bytes_per_second.groupby('tcp.stream')).plot()
ylabel('kbps')
xlabel('Time')
axhline(linewidth=2, color='r', y=2048)
time_zero = bytes_per_second.index[0]
annotate("2048 kbps",xy=(time_zero,2048), xycoords='data', xytext=(-30,30), textcoords='offset points', size=10,
bbox=dict(boxstyle="round", fc="0.8"),
arrowprops=dict(arrowstyle="simple"))
#plt.set_xlim(-1,100)
filters = []
fields=["tcp.stream", "ip.src", "ip.dst", "tcp.seq", "tcp.ack", "tcp.window_size", "tcp.len"]
#filters=["ip.addr eq 161.217.20.5"]
ts=read_pcap(pcap_file, fields, display_filter = filters, timeseries=True, strict=True)
ts
stream=ts[ts["tcp.stream"] == 0]
stream
print(stream.to_string())
stream["type"] = stream.apply(lambda x: "client" if x["ip.src"] == stream.irow(0)["ip.src"] else "server", axis=1)
print(stream.to_string())
client_stream=stream[stream.type == "client"]
client_stream["tcp.seq"].plot(style="r-o")
client_stream.index = arange(len(client_stream))
client_stream["tcp.seq"].plot(style="r-o")
def most_bytes_per_stream(df):
    return (df.groupby("tcp.stream"))["tcp.len"].sum().sort_values(ascending=False).head(10)
bytes_per_stream = most_bytes_per_stream(ts)
print(bytes_per_stream.index)
df_filter = ts['tcp.stream'].isin(bytes_per_stream.index)#[row in bytes_per_stream.index for row in ts['tcp.stream']]
streams = ts[df_filter]
streams.pivot(index=streams.index, columns='tcp.stream', values='tcp.seq')
#df[str(df.index) in str(bytes_per_stream.index)]
#bytes_per_stream.sort('tcp.len', inplace=False,ascending=False).head(5)
per_stream=ts.groupby("tcp.stream")
per_stream.head()
bytes_per_stream = per_stream["tcp.len"].sum()
bytes_per_stream.head()
bytes_per_stream.plot(kind='bar')
bytes_per_stream.max()
biggest_stream=bytes_per_stream.idxmax()
biggest_stream
bytes_per_stream.loc[biggest_stream]
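# Illustrative modern alternative (assumed, not in the original notebook): the
# idxmax/loc pair above can be collapsed into a single call.
bytes_per_stream.nlargest(1)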
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's have a look at the file
Step2: Plotting
Step3: Set a figure size in inches
Step5: Pandas automatically uses Matplotlib for plotting. We plot with small dots and an alpha channel of 0.2
Step6: We will use this function in the further analysis.
Step7: Then we re-sample the timeseries into buckets of 1 second, summing over the lengths of all frames that were captured in that second
Step8: Here are the first 5 rows. We get NaN for those timestamps where no frames were captured
Step9: TCP Time-Sequence Graph
Step10: Now we have to select a TCP stream to analyse. As an example, we just pick stream number 0
Step11: Pandas only prints the overview because the table is too wide. So we force a display
Step12: Add a column that shows who sent the packet (client or server).
Step13: Notice that the x-axis shows the real timestamps.
Step14: Looks different of course.
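As a tiny self-contained sketch of the contrast in Steps 13 and 14 (the timestamps and sequence numbers below are made up, standing in for tcp.seq):
import pandas as pd
import matplotlib.pyplot as plt
idx = pd.to_datetime(['2024-01-01 00:00:00.0', '2024-01-01 00:00:00.1', '2024-01-01 00:00:02.5'])
seq = pd.Series([0, 1448, 2896], index=idx)
seq.plot(style='r-o')                          # x-axis: real capture timestamps
seq.reset_index(drop=True).plot(style='b-o')   # x-axis: packet count
plt.show()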
|
6,568
|
<ASSISTANT_TASK:>
Python Code:
from typing import List
def intersperse(numbers: List[int], delimeter: int) -> List[int]:
if not numbers:
return []
result = []
for n in numbers[:-1]:
result.append(n)
result.append(delimeter)
result.append(numbers[-1])
return result
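# Illustrative usage:
# intersperse([1, 2, 3], 0) -> [1, 0, 2, 0, 3]
# intersperse([], 7)        -> []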
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
6,569
|
<ASSISTANT_TASK:>
Python Code:
import geopy
from geopy.geocoders import Nominatim
geocoder = Nominatim()
adresse = "22 rue Saint Lo, Rouen, France"
location = geocoder.geocode(adresse, True, 30)
print("longitude = ",location.longitude,"latitude = ",location.latitude)
import geopy
from geopy.geocoders import Nominatim
geocoder = Nominatim()
adresse = input("saissez l'adresse du lieu, la ville, le pays ")
location = geocoder.geocode(adresse, True, 30)
print("Les coordonnรฉes du point situรฉ au ",adresse,"sont : ")
print("longitude = ",location.longitude,"ยฐ","latitude = ",location.latitude,"ยฐ")
import csv
OuvrirEnEcriture = open("Lieux.csv","w")
fname = "Lieux.csv"
writer = csv.writer(OuvrirEnEcriture)
writer.writerow( ("ID","adresse","ville","pays","longitude","latitude") )
OuvrirEnEcriture.close()
import csv
import geopy
from geopy.geocoders import Nominatim
fichier="Lieux.csv"
OuvrirEnEcritureAjout = open("Lieux.csv","a")
writer = csv.writer(OuvrirEnEcritureAjout)
geocoder = Nominatim()
adresse = input("saissez l'adresse du lieu : ")
ville = input("saisissez le nom de la ville : ")
pays = input("saisissez le pays : ")
requete = adresse +","+ville+","+ pays
ID = int(input("indiquez l'id du lieu"))
#print(requete)
location = geocoder.geocode(requete, True, 30)
print("Les coordonnรฉes du point situรฉ au ",adresse," ร ",ville," en ",pays,"sont : ")
print("longitude = ",location.longitude,"ยฐ","latitude = ",location.latitude,"ยฐ")
writer.writerow((ID,adresse,ville,pays,location.longitude,location.latitude))
OuvrirEnEcritureAjout.close()
import sqlite3
conn = sqlite3.connect("Lieux.db")
cursor = conn.cursor()
cursor.execute("""CREATE TABLE IF NOT EXISTS lieux (
    id INTEGER PRIMARY KEY AUTOINCREMENT UNIQUE,
    adresse TEXT,
    ville TEXT,
    pays TEXT,
    longitude FLOAT,
    latitude FLOAT)
""")
conn.commit()
#import sqlite3
#cursor = conn.cursor()
#cursor.execute("""
#DROP TABLE lieux
#""")
#conn.commit()
import sqlite3
import geopy
from geopy.geocoders import Nominatim
geocoder = Nominatim()
# Open the database
conn = sqlite3.connect("Lieux.db")
cursor = conn.cursor()
# Read in the items to store
adresse = input("enter the address of the place: ")
ville = input("enter the name of the city: ")
pays = input("enter the country: ")
requete = adresse + "," + ville + "," + pays
ID = int(input("enter the id of the place: "))
# Look up the coordinates of the place
location = geocoder.geocode(requete, True, 30)
# Print the result
print("The coordinates of the point located at", adresse + ",", ville + ",", pays, "are:")
print("longitude = ", location.longitude, "°", "latitude = ", location.latitude, "°")
# Store the data in the database
cursor.execute('''INSERT INTO lieux VALUES (?,?,?,?,?,?);''', (ID, adresse, ville, pays, location.longitude, location.latitude))
conn.commit()
cursor.execute('''SELECT * FROM lieux;''')
enregistrements = cursor.fetchall()
for lieux in enregistrements:
print("Le lieu n %s est situรฉ : %s , ร , %s en %s ร %s ยฐ de longitude et %s ยฐ de latitude" % (lieux[0], lieux[1], lieux[2], lieux[3], lieux[4], lieux[5]))
conn.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Locating an arbitrary address.
Step2: Saving the results to a CSV file
Step3: Saving the results to a CSV file
Step5: Saving the results to an sqlite database
Step7: Dropping a table
Step8: Saving the results to an sqlite database
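A follow-up sketch for the sqlite steps above: a parameterized query over the lieux table (this assumes Lieux.db was filled by the cells above; the city value is only an example):
import sqlite3
conn = sqlite3.connect("Lieux.db")
# the "?" placeholder lets sqlite3 escape the value safely
for adresse, longitude, latitude in conn.execute(
        "SELECT adresse, longitude, latitude FROM lieux WHERE ville = ?", ("Rouen",)):
    print(adresse, longitude, latitude)
conn.close()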
|
6,570
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib
import sys
print(f'Python version {sys.version}')
print(f'Matplotlib version {matplotlib.__version__}')
print(f'NumPy version {np.__version__}')
x = np.arange(0,8*np.pi,0.1)
y1 = np.sin(x)
y2 = np.exp(x)
fig, ax = plt.subplots()
ax.plot(x,y1)
ax.set_title('sin(x)')
plt.show()
fig, ax = plt.subplots()
ax.plot(x,y2)
ax.set_title('exp(x)')
plt.show()
fig, ax = plt.subplots()
ax.plot(x,y1)
ax.plot(x,y2)
ax.set_title('sin(x) and exp(x)')
ax.legend(['sin(x)','exp(x)'])
plt.show()
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(12,4))
ax1.plot(x,y1)
ax1.set_title('sin(x)')
ax2.plot(x,y2,'C1')
ax2.set_title('exp(x)')
plt.show()
fig, ax1 = plt.subplots()
ax1.plot(x,y1)
ax1.set_title('sin(x) and exp(x)')
ax2 = ax1.twinx()
ax2.plot(x,y2,'C1')
plt.show()
fig, ax1 = plt.subplots()
ax1.plot(x,y1)
ax1.set_ylabel('sin(x)', color='C0')
ax1.tick_params(axis='y', color='C0', labelcolor='C0')
ax1.set_title('sin(x) and exp(x)')
ax2 = ax1.twinx()
ax2.plot(x,y2,'C1')
ax2.set_ylabel('exp(x)', color='C1')
ax2.tick_params(axis='y', color='C1', labelcolor='C1')
ax2.spines['right'].set_color('C1')
ax2.spines['left'].set_color('C0')
plt.show()
fig, ax1 = plt.subplots()
ax1.plot(x,y1)
ax1.set_ylabel('sin(x)', color='C0')
ax1.tick_params(axis='y', color='C0', labelcolor='C0')
ax1.set_title('sin(x) and exp(x)')
ax2 = ax1.twinx()
ax2.plot(x,y2,'C1')
ax2.set_ylabel('exp(x)', color='C1')
ax2.tick_params(axis='y', color='C1', labelcolor='C1')
ax2.spines['right'].set_color('C1')
ax2.spines['left'].set_color('C0')
fig.legend(['sin(x)','exp(x)'], bbox_to_anchor=(0.9, 0.8))
plt.show()
fig, ax1 = plt.subplots()
line1 = ax1.plot(x,y1)
ax1.set_ylabel('sin(x)', color='C0')
ax1.tick_params(axis='y', color='C0', labelcolor='C0')
ax1.set_title('sin(x) and exp(x)')
ax2 = ax1.twinx()
line2 = ax2.plot(x,y2,'C1')
ax2.set_ylabel('exp(x)', color='C1')
ax2.tick_params(axis='y', color='C1', labelcolor='C1')
ax2.spines['right'].set_color('C1')
ax2.spines['left'].set_color('C0')
lines = line1 + line2
ax2.legend(lines, ['sin(x)','exp(x)'])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The versions of Python, NumPy, and Matplotlib can be printed out using the following code
Step2: Data
Step3: Plot the two functions
Step4: Above, we see a plot of the sine function. Note the y-values in the plot are all between -1 and 1. We can build a line plot of the exp function using very similar code.
Step5: We see a plot of the exponential (exp) function. Note the y-values are between 0 and around 8e10. The range of y-values in the exp plot is much larger in the exp(x) plot than the range of values in the sin(x) plot.
Step6: We see a plot with two lines. The blue line represents the sine function and the orange line represents the exp function. Notice how the blue line looks flat. We can't see the variations in the sine wave.
Step7: We see two plots side by side. Each plot shows a different function. The plot on the left shows the sine function, the plot on the right shows the exp function.
Step8: We see a plot of our two functions. In the plot above, we can clearly see both the sine function and the exp function. The y-axis that corresponds to the sine function is on the left. The y-axis that corresponds to the exp function is on the right.
Step9: We see a plot with colored y-axis lines and colored y-axis labels. The blue line corresponds to the blue left-hand axis and labels. The orange line corresponds to the orange right-hand axis and labels.
Step10: We see a plot with two lines, two different color y-axes, and a legend. One more thing we can try is to create the legend in an alternate way.
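A third way to build the combined legend, sketched below: label each plot call and let Matplotlib collect the handles from both axes.
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(0, 8 * np.pi, 0.1)
fig, ax1 = plt.subplots()
ax1.plot(x, np.sin(x), label='sin(x)')
ax2 = ax1.twinx()
ax2.plot(x, np.exp(x), 'C1', label='exp(x)')
# gather handles and labels from both axes so one legend covers both lines
h1, l1 = ax1.get_legend_handles_labels()
h2, l2 = ax2.get_legend_handles_labels()
ax2.legend(h1 + h2, l1 + l2)
plt.show()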
|
6,571
|
<ASSISTANT_TASK:>
Python Code:
# instantiate 2 exceptions
exc1 = Exception()
exc2 = Exception('Something went wrong.')
print('Type: ',type(exc1),'Str: ', str(exc1))
print('Type: ', type(exc2),'Str: ', str(exc2))
raise exc2
def divide_print(a, b):
if b == 0:
print('b must not be zero, idiot!')
return a / b
print(divide_print(5, 0))
def divide_exc(a, b):
if b == 0:
raise Exception('b must not be zero, idiot!')
print(divide_exc(5,0))
import nonexistingpackage
5 / 0
min(5, 4, 'streetcar')
open()
foo
# this will only raise an Error on Windows machines
open(5)
fs = open('afile.txt', 'w')
fs.superpower
open('afile.txt', 'k')
def divide_exc(a,b):
if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
raise ValueError('a and b must be numerical. a: {0} b: {1}'.format(type(a), type(b)))
if b == 0:
raise ZeroDivisionError('b must not be 0, idiot')
return a / b
divide_exc(5, 0)
divide_exc(5, None)
try:
divide_exc(5, 0)
except:
print('Just like something happend.\n')
try:
divide_exc(5,None)
except ValueError:
print('This was a ValueError.\n')
try:
divide_exc(5,0)
except ZeroDivisionError:
print('definitely a ZeroDivisionError.\n')
try:
divide_exc(5,None)
except ValueError as e:
print('ALERT! ALERT! ALERT!\nAn critical ValueError occured.\nIt said something like:\n%s.\n-----\n' % str(e))
def run_divide(a, b):
try:
result = 'a / b = {}'.format(divide_exc(a, b))
except ValueError as e:
result = 'Error. wrong input.'
except ZeroDivisionError:
result = 'a / 1 = {} (b must not be 0, using b=1...)'.format(divide_exc(a, 1))
return result
print('1.: ', run_divide(9, 5))
print('2.: ', run_divide(9, 0))
print('3.: ', run_divide(9, 'five'))
def run(a, b):
try:
print(divide_exc(a,b))
except ValueError:
print('Wrong Input')
except ZeroDivisionError:
print('b must not be 0')
else:
print('No error occured')
finally:
print('---------------\n')
run(5,6)
run(3, 'five')
run(4, 0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: There is one more key statement in Python that can only be used for Exceptions
Step2: You could use that to raise your own Exceptions in case the user of your scripts gives you wrong input to your functions. Or in case you want to change the sometimes very generic error messages produced by the standard classes.
Step3: Print the function
Step4: This time we were able to print a custom error message, but the default Exception is still raised as the program is not interrupted. One option would be to move the return statement into an else block.
Step5: Now the Exception uses the correct message. Unfortunately it's just a generic Exception, which tells the caller nothing about the kind of error.
Step6: Inspecting the error types above, either the ValueError or the ZeroDivisionError describes our two failure cases much more precisely.
Step7: Exception handling
Step8: Instead of just decorating the error message, any handling is possible. One could exit the application, produce a graphical error message, log the exception into a file, continue using default values and so on.
Step9: Any try statement can additionally have an else clause (run only when no exception occurred) and a finally clause (always run)
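One common companion to this pattern, sketched here: finally guarantees cleanup even when an exception propagates out of the function.
def read_first_line(path):
    fs = open(path)
    try:
        return fs.readline()
    finally:
        fs.close()  # runs whether readline() succeeded or raised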
|
6,572
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from utils import *
import random
data = open('dinos.txt', 'r').read()
data= data.lower()
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size))
char_to_ix = { ch:i for i,ch in enumerate(sorted(chars)) }
ix_to_char = { i:ch for i,ch in enumerate(sorted(chars)) }
print(ix_to_char)
### GRADED FUNCTION: clip
def clip(gradients, maxValue):
'''
Clips the gradients' values between minimum and maximum.
Arguments:
gradients -- a dictionary containing the gradients "dWaa", "dWax", "dWya", "db", "dby"
maxValue -- everything above this number is set to this number, and everything less than -maxValue is set to -maxValue
Returns:
gradients -- a dictionary with the clipped gradients.
'''
dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby']
### START CODE HERE ###
# clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. (โ2 lines)
    for gradient in [dWax, dWaa, dWya, db, dby]:
        np.clip(gradient, -maxValue, maxValue, out=gradient)
### END CODE HERE ###
gradients = {"dWaa": dWaa, "dWax": dWax, "dWya": dWya, "db": db, "dby": dby}
return gradients
np.random.seed(3)
dWax = np.random.randn(5,3)*10
dWaa = np.random.randn(5,5)*10
dWya = np.random.randn(2,5)*10
db = np.random.randn(5,1)*10
dby = np.random.randn(2,1)*10
gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby}
gradients = clip(gradients, 10)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
# GRADED FUNCTION: sample
def sample(parameters, char_to_ix, seed):
    """
    Sample a sequence of characters according to a sequence of probability distributions output of the RNN

    Arguments:
    parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b.
    char_to_ix -- python dictionary mapping each character to an index.
    seed -- used for grading purposes. Do not worry about it.

    Returns:
    indices -- a list of length n containing the indices of the sampled characters.
    """
# Retrieve parameters and relevant shapes from "parameters" dictionary
Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']
vocab_size = by.shape[0]
n_a = Waa.shape[1]
### START CODE HERE ###
# Step 1: Create the one-hot vector x for the first character (initializing the sequence generation). (โ1 line)
x = np.zeros((vocab_size,1))
# Step 1': Initialize a_prev as zeros (โ1 line)
a_prev = np.zeros((n_a,1))
# Create an empty list of indices, this is the list which will contain the list of indices of the characters to generate (โ1 line)
indices = []
# Idx is a flag to detect a newline character, we initialize it to -1
idx = -1
# Loop over time-steps t. At each time-step, sample a character from a probability distribution and append
# its index to "indices". We'll stop if we reach 50 characters (which should be very unlikely with a well
# trained model), which helps debugging and prevents entering an infinite loop.
counter = 0
newline_character = char_to_ix['\n']
while (idx != newline_character and counter != 50):
# Step 2: Forward propagate x using the equations (1), (2) and (3)
a = np.tanh(np.dot(Wax,x)+np.dot(Waa,a_prev)+b)
z = np.dot(Wya,a)+by
y = softmax(z)
# for grading purposes
np.random.seed(counter+seed)
# Step 3: Sample the index of a character within the vocabulary from the probability distribution y
idx = np.random.choice(list(range(vocab_size)),p=y.ravel())
# Append the index to "indices"
indices.append(idx)
# Step 4: Overwrite the input character as the one corresponding to the sampled index.
x = np.zeros((vocab_size,1))
x[idx] = 1
# Update "a_prev" to be "a"
a_prev = a
# for grading purposes
seed += 1
counter +=1
### END CODE HERE ###
if (counter == 50):
indices.append(char_to_ix['\n'])
return indices
np.random.seed(2)
_, n_a = 20, 100
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
indices = sample(parameters, char_to_ix, 0)
print("Sampling:")
print("list of sampled indices:", indices)
print("list of sampled characters:", [ix_to_char[i] for i in indices])
# GRADED FUNCTION: optimize
def optimize(X, Y, a_prev, parameters, learning_rate = 0.01):
    """
    Execute one step of the optimization to train the model.

    Arguments:
    X -- list of integers, where each integer is a number that maps to a character in the vocabulary.
    Y -- list of integers, exactly the same as X but shifted one index to the left.
    a_prev -- previous hidden state.
    parameters -- python dictionary containing:
                    Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
                    Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
                    Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
                    b -- Bias, numpy array of shape (n_a, 1)
                    by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
    learning_rate -- learning rate for the model.

    Returns:
    loss -- value of the loss function (cross-entropy)
    gradients -- python dictionary containing:
                    dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
                    dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
                    dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a)
                    db -- Gradients of bias vector, of shape (n_a, 1)
                    dby -- Gradients of output bias vector, of shape (n_y, 1)
    a[len(X)-1] -- the last hidden state, of shape (n_a, 1)
    """
### START CODE HERE ###
# Forward propagate through time (โ1 line)
loss, cache = rnn_forward(X, Y, a_prev, parameters)
# Backpropagate through time (โ1 line)
gradients, a = rnn_backward(X, Y, parameters, cache)
# Clip your gradients between -5 (min) and 5 (max) (โ1 line)
gradients = clip(gradients, 5)
# Update parameters (โ1 line)
parameters = update_parameters(parameters, gradients, learning_rate)
### END CODE HERE ###
return loss, gradients, a[len(X)-1]
np.random.seed(1)
vocab_size, n_a = 27, 100
a_prev = np.random.randn(n_a, 1)
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
X = [12,3,5,11,22,3]
Y = [4,14,11,22,25, 26]
loss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
print("Loss =", loss)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("np.argmax(gradients[\"dWax\"]) =", np.argmax(gradients["dWax"]))
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
print("a_last[4] =", a_last[4])
# GRADED FUNCTION: model
def model(data, ix_to_char, char_to_ix, num_iterations = 35000, n_a = 50, dino_names = 7, vocab_size = 27):
    """
    Trains the model and generates dinosaur names.

    Arguments:
    data -- text corpus
    ix_to_char -- dictionary that maps the index to a character
    char_to_ix -- dictionary that maps a character to an index
    num_iterations -- number of iterations to train the model for
    n_a -- number of units of the RNN cell
    dino_names -- number of dinosaur names you want to sample at each iteration.
    vocab_size -- number of unique characters found in the text, size of the vocabulary

    Returns:
    parameters -- learned parameters
    """
# Retrieve n_x and n_y from vocab_size
n_x, n_y = vocab_size, vocab_size
# Initialize parameters
parameters = initialize_parameters(n_a, n_x, n_y)
# Initialize loss (this is required because we want to smooth our loss, don't worry about it)
loss = get_initial_loss(vocab_size, dino_names)
# Build list of all dinosaur names (training examples).
with open("dinos.txt") as f:
examples = f.readlines()
examples = [x.lower().strip() for x in examples]
# Shuffle list of all dinosaur names
np.random.seed(0)
np.random.shuffle(examples)
# Initialize the hidden state of your LSTM
a_prev = np.zeros((n_a, 1))
# Optimization loop
for j in range(num_iterations):
### START CODE HERE ###
# Use the hint above to define one training example (X,Y) (โ 2 lines)
index = j % len(examples)
X = [None] + [char_to_ix[ch] for ch in examples[index]]
Y = X[1:] + [char_to_ix["\n"]]
# Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters
# Choose a learning rate of 0.01
curr_loss, gradients, a_prev = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
### END CODE HERE ###
# Use a latency trick to keep the loss smooth. It happens here to accelerate the training.
loss = smooth(loss, curr_loss)
# Every 2000 Iteration, generate "n" characters thanks to sample() to check if the model is learning properly
if j % 2000 == 0:
print('Iteration: %d, Loss: %f' % (j, loss) + '\n')
# The number of dinosaur names to print
seed = 0
for name in range(dino_names):
# Sample indices and print them
sampled_indices = sample(parameters, char_to_ix, seed)
print_sample(sampled_indices, ix_to_char)
seed += 1 # To get the same result for grading purposed, increment the seed by one.
print('\n')
return parameters
parameters = model(data, ix_to_char, char_to_ix)
from __future__ import print_function
from keras.callbacks import LambdaCallback
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking
from keras.layers import LSTM
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import pad_sequences
from shakespeare_utils import *
import sys
import io
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
model.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])
# Run this cell to try with different inputs without having to re-train the model
generate_output()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1 - Problem Statement
Step2: The characters are a-z (26 characters) plus the "\n" (or newline character), which in this assignment plays a role similar to the <EOS> (or "End of sentence") token we had discussed in lecture, only here it indicates the end of the dinosaur name rather than the end of a sentence. In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26. We also create a second python dictionary that maps each index back to the corresponding character character. This will help you figure out what index corresponds to what character in the probability distribution output of the softmax layer. Below, char_to_ix and ix_to_char are the python dictionaries.
Step3: 1.2 - Overview of the model
Step5: Expected output
Step7: Expected output
Step12: Expected output
Step13: Run the following cell; you should observe your model outputting random-looking characters at the first iteration. After a few thousand iterations, your model should learn to generate reasonable-looking names.
Step14: Conclusion
Step15: To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called "The Sonnets".
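As a recap of the categorical sampling used throughout this notebook (for both the dinosaur and the Shakespeare generator), a tiny self-contained sketch with made-up logits:
import numpy as np

def softmax_1d(z):
    e = np.exp(z - np.max(z))  # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
z = rng.standard_normal(27)        # hypothetical logits for a 27-character vocabulary
p = softmax_1d(z)
idx = rng.choice(27, p=p)          # draw one character index from the distribution
print(idx)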
|
6,573
|
<ASSISTANT_TASK:>
Python Code:
from config import PYMICRO_EXAMPLES_DATA_DIR # import file directory path
import os
dataset_file = os.path.join(PYMICRO_EXAMPLES_DATA_DIR, 'example_microstructure') # test dataset file path
tar_file = os.path.join(PYMICRO_EXAMPLES_DATA_DIR, 'example_microstructure.tar.gz') # dataset archive path
# Save current directory
cwd = os.getcwd()
# move to example data directory
os.chdir(PYMICRO_EXAMPLES_DATA_DIR)
# unarchive the dataset
os.system(f'tar -xvf {tar_file}')
# get back to UserGuide directory
os.chdir(cwd)
# import SampleData class
from pymicro.core.samples import SampleData as SD
# import Numpy
import numpy as np
# Create a copy of the existing dataset
data = SD.copy_sample(src_sample_file=dataset_file, dst_sample_file='Test_compression', autodelete=True,
get_object=True, overwrite=True)
print(data)
data.get_file_disk_size()
data.print_node_info('grain_map')
compression_options = {'complib':'zlib', 'complevel':1}
data.set_chunkshape_and_compression(nodename='grain_map', compression_options=compression_options)
data.get_node_disk_size('grain_map')
data.print_node_compression_info('grain_map')
# With the `shuffle` option:
print('\nUsing the shuffle option, with the zlib compressor and a compression level of 1:')
compression_options = {'complib':'zlib', 'complevel':1, 'shuffle':True}
data.set_chunkshape_and_compression(nodename='grain_map', compression_options=compression_options)
data.get_node_disk_size('grain_map')
# No `shuffle` option:
print('\nUsing no shuffle option, with the zlib compressor and a compression level of 9:')
compression_options = {'complib':'zlib', 'complevel':9, 'shuffle':False}
data.set_chunkshape_and_compression(nodename='grain_map', compression_options=compression_options)
data.get_node_disk_size('grain_map')
# With the `shuffle` option:
print('\nUsing the shuffle option, with the lzo compressor and a compression level of 1:')
compression_options = {'complib':'lzo', 'complevel':1, 'shuffle':True}
data.set_chunkshape_and_compression(nodename='grain_map', compression_options=compression_options)
data.get_node_disk_size('grain_map')
# No `shuffle` option:
print('\nUsing no shuffle option, with the lzo compressor and a compression level of 1:')
compression_options = {'complib':'lzo', 'complevel':1, 'shuffle':False}
data.set_chunkshape_and_compression(nodename='grain_map', compression_options=compression_options)
data.get_node_disk_size('grain_map')
data.print_node_info('Amitex_stress_1')
# With the `shuffle` option:
print('\nUsing the shuffle option, with the zlib compressor and a compression level of 1:')
compression_options = {'complib':'zlib', 'complevel':1, 'shuffle':True}
data.set_chunkshape_and_compression(nodename='Amitex_stress_1', compression_options=compression_options)
data.get_node_disk_size('Amitex_stress_1')
# No `shuffle` option:
print('\nUsing no shuffle option, with the zlib compressor and a compression level of 1:')
compression_options = {'complib':'zlib', 'complevel':1, 'shuffle':False}
data.set_chunkshape_and_compression(nodename='Amitex_stress_1', compression_options=compression_options)
data.get_node_disk_size('Amitex_stress_1')
# Print current size of disks and their compression settings
data.get_node_disk_size('grain_map_raw')
data.print_node_compression_info('grain_map_raw')
data.get_node_disk_size('uncertainty_map')
data.print_node_compression_info('uncertainty_map')
data.get_node_disk_size('mask')
data.print_node_compression_info('mask')
# Compress datasets
compression_options = {'complib':'zlib', 'complevel':9, 'shuffle':True}
data.set_nodes_compression_chunkshape(node_list=['grain_map_raw', 'uncertainty_map', 'mask'],
compression_options=compression_options)
# Print new size of disks and their compression settings
data.get_node_disk_size('grain_map_raw')
data.print_node_compression_info('grain_map_raw')
data.get_node_disk_size('uncertainty_map')
data.print_node_compression_info('uncertainty_map')
data.get_node_disk_size('mask')
data.print_node_compression_info('mask')
import numpy as np
print(f"Data array `grain_map` has {data['grain_map'].size} elements,"
f"and {np.unique(data['grain_map']).size} different values.\n")
print(f"Data array `Amitex_stress_1` has {data['Amitex_stress_1'].size} elements,"
f"and {np.unique(data['Amitex_stress_1']).size} different values.\n")
# We will store a value of an array to verify how it evolves after compression
original_value = data['Amitex_stress_1'][20,20,20]
# Apply lossy compression
data.get_node_disk_size('Amitex_stress_1')
# Set up compression settings with lossy compression: truncate after third digit adter decimal point
compression_options = {'complib':'zlib', 'complevel':9, 'shuffle':True, 'least_significant_digit':3}
data.set_chunkshape_and_compression(nodename='Amitex_stress_1', compression_options=compression_options)
data.get_node_disk_size('Amitex_stress_1')
# Get same value after lossy compression
new_value = data['Amitex_stress_1'][20,20,20]
print(f'Original array value: {original_value} \n'
f'Array value after lossy compression: {new_value}')
# Set up compression settings with lossy compression: truncate after third digit adter decimal point
compression_options = {'complib':'zlib', 'complevel':9, 'shuffle':True, 'least_significant_digit':2}
data.set_chunkshape_and_compression(nodename='Amitex_stress_1', compression_options=compression_options)
data.get_node_disk_size('Amitex_stress_1')
# Get same value after lossy compression
new_value = data['Amitex_stress_1'][20,20,20]
print(f'Original array value: {original_value} \n'
f'Array value after lossy compression 2 digits: {new_value}')
# removing dataset to recreate a copy
del data
# creating a copy of the dataset to try out lossy compression methods
data = SD.copy_sample(src_sample_file=dataset_file, dst_sample_file='Test_compression', autodelete=True,
get_object=True, overwrite=True)
# Set up compression settings with lossy compression: truncate after third digit adter decimal point
compression_options = {'complib':'zlib', 'complevel':9, 'shuffle':True, 'least_significant_digit':2,
'normalization':'standard'}
data.set_chunkshape_and_compression(nodename='Amitex_stress_1', compression_options=compression_options)
data.get_node_disk_size('Amitex_stress_1')
# Get same value after lossy compression
new_value = data['Amitex_stress_1'][20,20,20,:]
# Get in memory value of the node
memory_value = data.get_node('Amitex_stress_1', as_numpy=False)[20,20,20,:]
print(f'Original array value: {original_value} \n'
f'Array value after normalization and lossy compression 2 digits: {new_value}',
f'Value in memory: {memory_value}')
del data
data = SD.copy_sample(src_sample_file=dataset_file, dst_sample_file='Test_compression', autodelete=True,
get_object=True, overwrite=True)
# Set up compression settings with lossy compression: truncate after third digit adter decimal point
compression_options = {'complib':'zlib', 'complevel':9, 'shuffle':True, 'least_significant_digit':2,
'normalization':'standard_per_component'}
data.set_chunkshape_and_compression(nodename='Amitex_stress_1', compression_options=compression_options)
data.get_node_disk_size('Amitex_stress_1')
# Get same value after lossy compression
new_value = data['Amitex_stress_1'][20,20,20,:]
# Get in memory value of the node
memory_value = data.get_node('Amitex_stress_1', as_numpy=False)[20,20,20,:]
print(f'Original array value: {original_value} \n'
f'Array value after normalization per component and lossy compression 2 digits: {new_value}\n',
f'Value in memory: {memory_value}')
data.print_xdmf()
data.get_node_disk_size('Amitex_stress_1')
data.get_node_disk_size('Amitex_stress_1_norm_std')
data.get_node_disk_size('Amitex_stress_1_norm_mean')
data.print_node_compression_info('Amitex_stress_1')
data.get_node_disk_size('Amitex_stress_1')
# Change chunkshape of the array
compression_options = {'complib':'zlib', 'complevel':9, 'shuffle':True, 'least_significant_digit':2,
'normalization':'standard_per_component'}
data.set_chunkshape_and_compression(nodename='Amitex_stress_1', chunkshape=(10,10,10,6),
compression_options=compression_options)
data.get_node_disk_size('Amitex_stress_1')
data.print_node_compression_info('Amitex_stress_1')
# removing dataset to recreate a copy
del data
# creating a copy of the dataset to try out lossy compression methods
data = SD.copy_sample(src_sample_file=dataset_file, dst_sample_file='Test_compression', autodelete=True,
get_object=True, overwrite=True)
# getting the `orientation_map` array
array = data['Amitex_stress_1']
# create a new field for the CellData image group with the `orientation_map` array and add compression settings
compression_options = {'complib':'zlib', 'complevel':1, 'shuffle':True, 'least_significant_digit':2,
'normalization':'standard'}
new_cshape = (10,10,10,3)
# Add data array as field of the CellData Image Group
data.add_field(gridname='CellData', fieldname='test_compression', indexname='testC', array=array,
chunkshape=new_cshape, compression_options=compression_options, replace=True)
# Check size and settings of new field
data.print_node_info('testC')
data.get_node_disk_size('testC')
data.print_node_compression_info('testC')
# removing dataset to recreate a copy
del data
# creating a copy of the dataset to try out lossy compression methods
data = SD.copy_sample(src_sample_file=dataset_file, dst_sample_file='Test_compression', autodelete=True,
get_object=True, overwrite=True)
compression_options1 = {'complib':'zlib', 'complevel':9, 'shuffle':True, 'least_significant_digit':2,
'normalization':'standard'}
compression_options2 = {'complib':'zlib', 'complevel':9, 'shuffle':True}
data.set_chunkshape_and_compression(nodename='Amitex_stress_1', compression_options=compression_options1)
data.set_nodes_compression_chunkshape(node_list=['grain_map', 'grain_map_raw','mask'],
compression_options=compression_options2)
data.print_dataset_content(short=True)
data.get_file_disk_size()
data.repack_h5file()
data.get_file_disk_size()
# remove SampleData instance
del data
os.remove(dataset_file+'.h5')
os.remove(dataset_file+'.xdmf')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This file is zipped in the package to reduce its size. We will have to unzip it to use it and learn how to reduce its size with the SampleData methods. If you are just reading the documentation and not executing it, you may just skip this cell and the next one.
Step2: Dataset presentation
Step3: As you can see, this dataset already contains rich content. It is a digital twin of a real polycrystalline microstructure of a grade 2 Titanium sample, gathering both experimental and numerical data obtained through Diffraction Contrast Tomography imaging, and FFT-based mechanical simulation.
Step4: We can see above that this data item is not compressed (complevel=0), and has a disk size of almost 2 Mb.
Step5: use the SampleData set_chunkshape_and_compression method with the dictionary and the name of the data item as arguments
Step6: As you can see, the storage size of the data item has been greatly reduced, by more than 10 times (126 Kb vs 1.945 Mb), using these compression settings. Let us see what will change if we use different settings
Step7: As you may observe, the compression ratio is significantly affected by the choice of the compression level. The higher the compression level, the higher the compression ratio, but also the lower the I/O speed. On the other hand, you can also remark that, in the present case, using the shuffle filter deteriorates the compression ratio.
Step8: On the opposite, for this second array, the shuffle filter improves significantly the compression ratio. However, in this case, you can see that the compression ratio achieved is much lower than for the grain_map array.
Step9: Lossy compression and data normalization
Step10: Lossy compression
Step11: As you may observe, the compression ratio has been improved, and the retrieved values after lossy compression are effectively equal to the original array up to the third digit after the decimal point.
Step12: As you can see, the compression ratio has again been improved, now close to 75%. Now you know how to choose the best compromise between lost precision and compression ratio.
Step13: Standard Normalization
Step14: As you can see, the compression ratio has been strongly improved by this normalization operation, reaching 90%.
Step15: As you can see, the error in the retrieved array is now less than 1% for each component of the field value. However, the cost was a reduced improvement of the compression ratio.
Step16: As you can see, the Amitex_stress_1 Attribute node data in the dataset XDMF file is now provided by a Function item type, involving three data arrays with the original field shape. This function computes
Step17: Changing the chunksize of a node
Step18: As you can see, the chunkshape has been changed, which has also affected the memory size of the compressed data array. We have indeed reduced the number of chunks in the dataset, which reduces the amount of data to store. This modification can also improve or deteriorate the I/O speed of access to your data array in the dataset. The reader is once again referred to dedicated documents to know more on this matter
Step19: The node has been created with the desired chunkshape and compression filters.
Step20: Now that we have compressed a few of the items of our dataset, the disk size of its HDF5 file should have diminished. Let us check again the size of its data items, and of the file
Step21: The file size has not changed, surprisingly, even though the large Amitex_stress_1 array has shrunk from almost 50 Mb to roughly 5 Mb. This is due to a specific feature of HDF5 files
Step22: You see that repacking the file has allowed us to free some memory space and reduce its size.
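For readers who want to reproduce these knobs outside of SampleData, a generic PyTables sketch (the file and array names are made up): the same complib/complevel/shuffle/chunkshape parameters appear in tables.Filters and create_carray.
import numpy as np
import tables

filters = tables.Filters(complib='zlib', complevel=9, shuffle=True)
with tables.open_file('demo_compression.h5', 'w') as f:
    arr = np.random.randint(0, 50, size=(100, 100, 100))
    f.create_carray('/', 'demo_array', obj=arr,
                    filters=filters, chunkshape=(10, 10, 100))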
|
6,574
|
<ASSISTANT_TASK:>
Python Code:
import sys # system module
import pandas as pd # data package
import matplotlib.pyplot as plt # graphics module
import datetime as dt # date and time module
import numpy as np # foundation for pandas
%matplotlib inline
# check versions (overkill, but why not?)
print('Python version: ', sys.version)
print('Pandas version: ', pd.__version__)
print('Today: ', dt.date.today())
url = 'https://raw.githubusercontent.com/TheUpshot/chipotle/master/orders.tsv'
chipotle = pd.read_csv(url, sep='\t') # tab (\t) separated values
print('Variable dtypes:\n', chipotle.dtypes, sep='')
chipotle.head()
url1 = "https://raw.githubusercontent.com/NYUDataBootcamp/"
url2 = "Materials/master/Data/entry_poll_spring17.csv"
url = url1 + url2
entry_poll = pd.read_csv(url)
entry_poll.head()
print('Dimensions:', entry_poll.shape)
print('Data types:\n\n', entry_poll.dtypes, sep='')
# (1) create list of strings with the new varnames
newnames = ['time', 'why', 'program', 'programming', 'prob_stats', 'major', 'career', 'data', 'topics']
newnames
# (2) Use the str.title() string method to make the varnames prettier
newnames = [name.title() for name in newnames]
newnames
# (3) assign newnames to the variables
entry_poll.columns = newnames
entry_poll.head(1)
# check multi-response question to see what we're dealing with
entry_poll['Topics'].head(20)
url1 = 'http://www.oecd.org/health/health-systems/'
url2 = 'OECD-Health-Statistics-2016-Frequently-Requested-Data.xls'
oecd = pd.read_excel(url1 + url2)
oecd.head()
oecd = pd.read_excel(url1 + url2, sheetname='Physicians')
oecd.head()
oecd = pd.read_excel(url1 + url2, sheetname='Physicians', skiprows=3)
oecd.head()
oecd = pd.read_excel(url1 + url2,
sheetname='Physicians',
skiprows=3,
index_col=0)
oecd.head()
oecd.shape # drop 57th and 58th columns
# There is no skipcols argument, so let's google "read_excel skip columns" -> usecols
oecd = pd.read_excel(url1 + url2,
sheetname='Physicians',
skiprows=3,
index_col=0,
usecols=range(57))
oecd.head()
oecd.tail() # we are downloading the footnotes too
?pd.read_excel # -> skip_footer
# How many rows to skip??
oecd.tail(25)
oecd = pd.read_excel(url1 + url2,
sheetname='Physicians',
skiprows=3,
index_col=0,
usecols=range(57),
skip_footer=20)
oecd.tail()
oecd.dtypes[:5]
url = 'http://www.imf.org/external/pubs/ft/weo/2016/02/weodata/WEOOct2016all.xls'
# Try
weo = pd.read_excel(url) # NOT an excel file!
# try to open the file with a plain text editor (it is a TSV)
weo = pd.read_csv(url, sep = '\t')
weo.head()
names = list(weo.columns)
names[:8]
# for var details
details_list = names[1:5] + [names[6]]
# for years
years_list = names[9:-6]
details_list
weo = pd.read_csv(url,
sep = '\t',
index_col='ISO',
usecols=details_list + years_list)
weo.head()
weo.tail(3)
weo = pd.read_csv(url,
sep = '\t',
index_col='ISO',
usecols=details_list + years_list,
skipfooter=1, engine='python') # read_csv requires 'python' engine (otherwise warning)
weo.tail()
weo = pd.read_csv(url,
sep = '\t',
index_col='ISO',
usecols=details_list + years_list,
skipfooter=1, engine='python',
na_values='n/a')
weo.head()
weo.dtypes[:10] # still not ok
weo = pd.read_csv(url,
sep = '\t',
index_col='ISO',
usecols=details_list + years_list,
skipfooter=1, engine='python',
na_values='n/a',
thousands =',')
weo.head()
weo.T.head(10)
dollars = '$123.45'
print('Type of variable dollars:', type(dollars))
num = dollars.replace('$', '')
num = float(num)
print('Type of variable num:', type(num))
chipotle.head()
# create a copy of the df to play with
chipotle_num = chipotle.copy()
print('Original dtype:', chipotle_num['item_price'].dtype)
# delete dollar signs (dtype does not change!)
chipotle_num['item_price'].str.replace('$', '').head()
# delete dollar signs, convert to float, AND assign back to chipotle_num in one line
chipotle_num['item_price'] = chipotle_num['item_price'].str.replace('$', '').astype(float)
print('New dtype:', chipotle_num['item_price'].dtype)
# assign back to chp for future use
chipotle = chipotle_num
print('Variable dtypes:\n', chipotle.dtypes, sep='')
chipotle.head()
weo.head(1)
weo.head(1).dtypes
weo['1980'].astype(float)
weo = pd.read_csv(url,
sep = '\t',
index_col='ISO',
usecols=details_list + years_list,
skipfooter=1, engine='python',
na_values=['n/a', '--'],
thousands =',')
weo.head(1)
# With that out of our way, we can do the conversion for one variable
weo['1980'].astype(float)
# or for all numeric variables
years = [str(year) for year in range(1980, 2017)]
weo[years] = weo[years].astype(float)
weo.dtypes
# try this with an example first
country = 'United States 1'
# get documentation for the rsplit method
country.rsplit?
# an example
country.rsplit()
# what about this?
country.rsplit(maxsplit=1)
# one more step, we want the first component of the list
country.rsplit(maxsplit=1)[0]
oecd.index
oecd.index.str.rsplit(maxsplit=1)[0]
#try
oecd.index.str.rsplit?
# Note the TWO str's
oecd.index.str.rsplit(n=1).str[0]
#or use the str.get() method
oecd.index.str.rsplit(n=1).str.get(0)
oecd.index = oecd.index.str.rsplit(n=1).str.get(0)
oecd.head()
docs = oecd
docs.head()
docs.replace(to_replace=['..'], value=[None]).head()
docsna = docs.replace(to_replace=['..'], value=[None])
docsna.dtypes
docs.replace(to_replace=['.'], value=['*']).head()
# grab a variable to play with
var = docsna[2013].head(10)
var
# why not '2013'? check the type
docsna.columns
# which ones are missing ("null")?
var.isnull()
# which ones are not missing ("not null")?
var.notnull()
# drop the missing
var.dropna()
docsna[2013].plot.barh(figsize=(4, 12))
# we create a small dataframe to experiment with
small = weo.head()
small
small[['Country', 'Units']]
small[[0, 4]]
small['2011']
small[1:3]
small[[False, True, True, False, False]]
s1 = pd.Series([5, 6, 7, 8], index=["a", "b", "c", "d"])
s1
s2 = pd.Series([50, 60, 70, 80], index=[0, 4, 2, 999])
s2
# index has dtype object, so using an int returns the value in that row (starting at 0)
s1[1]
# index has dtype int, so using an integer tries to find the that int in the
# index and return the corresponding value and throws an error if it can't find it
s2[1]
s2[0] # no error, 0 is in the index
# index has dtype object, so a list of ints extracts those rows
s1[[0, 3]]
# index has dtype int, so a list of ints tries to match each int to the index
# it returns NaN where it can't find the index. Notice it **did not** return
# `80` for 3
s2[[0, 3, 999]]
# index has type object, so a string finds row with matching index
s1["c"]
# index has dtype int, so using a string causes an error
s2["c"]
# similar behavior for lists of strings
s1[["a", "b", "penguin"]]
# index has dtype int, so list of strings returns NaN's everywhere
s2[["a", "b"]]
# lists of True/False work the same for any dtype of index
bools = [True, False, False, True]
s1[bools]
s2[bools]
weo.head(2)
variable_list = weo[['Country', 'Subject Descriptor', 'Units']].drop_duplicates()
print('Number of variables: ', variable_list.shape[0])
variable_list.head()
country_list = weo['Country'].drop_duplicates()
print('Number of countries: ', country_list.shape[0])
country_list
small
small['Units'] == 'National currency'
small['2011'] >= 200
(small['Units'] == 'National currency') & (small['2011'] >= 100)
(small['Units'] == 'National currency') | (small['2011'] >= 100)
# remind ourslves what we're starting with
small
# two steps: comparison, then selection
ncunits = small['Units'] == 'National currency' # comparison
print(ncunits)
small[ncunits] # selection
# put the steps together in one line
small[small['Units'] == 'National currency']
vlist = ['GGXWDG_NGDP', 'GGXCNL_NGDP']
weo['WEO Subject Code'].isin(vlist)
weo.tail(4)
# this time let's use the result of isin for selection
vlist = ['GGXWDG_NGDP', 'GGXCNL_NGDP']
weo[weo['WEO Subject Code'].isin(vlist)].head(6)
# we've combined several things in one line
comparison = weo['WEO Subject Code'].isin(vlist)
selection = weo[comparison]
selection.head(6)
variables = ['GGXWDG_NGDP', 'GGXCNL_NGDP']
countries = ['Argentina', 'Greece']
weo_sub = weo[weo['WEO Subject Code'].isin(variables) & weo['Country'].isin(countries)]
weo_sub
countries = ['China', 'India', 'United States']
gdp = weo[(weo['WEO Subject Code']=='NGDPD') & weo['Country'].isin(countries)]
gdp
gdp['2015'].plot(kind='bar')
# recall
entry_poll['Topics'].head(10)
# the contains method
entry_poll['Topics'].str.contains('Machine Learning')
entry_poll['Topics'].str.contains('Machine Learning').head(10)*1
topics = ['Web scraping', 'Machine Learning', 'regression']
old_ep = entry_poll.copy()
vnames = []
for x in topics:
newname = 'Topics' + '_' + x
vnames.append(newname)
entry_poll[newname] = entry_poll['Topics'].str.contains(x)*1
vnames
# create new df of just these variables
student_topics = entry_poll[vnames]
student_topics
# count them with the sum method
topics_counts = student_topics.sum()
topics_counts
topics_counts.plot(kind='barh')
topics_counts.plot(kind='pie')
data = {'Size': ['a) 1 to 4', 'b) 5 to 9', 'c) 10 to 19', 'd) 20 to 49', 'e) 50 to 99',
'f) 100 to 249', 'g) 250 to 499', 'h) 500 to 999', 'i) 1000 to 2499',
'j) 2500 to 4999', 'k) 5000 to 9999', 'l) 10000+'],
'Firms': [2846416, 1020772, 598153, 373345, 115544, 63845,
19389, 9588, 6088, 2287, 1250, 1357],
'Emp': [5998912, 6714924, 8151891, 11425545, 8055535, 9788341,
6611734, 6340775, 8321486, 6738218, 6559020, 32556671]}
bds = pd.DataFrame(data)
bds.head(3)
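# A sketch of a warm-up for the review exercise (not part of the original
# notebook): employment share of each size class as a horizontal bar chart.
shares = bds.set_index('Size')['Emp'] / bds['Emp'].sum()
shares.plot(kind='barh')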
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The want operator
Step2: Comment. Note that the variable item_price has dtype object. The reason is evidently the dollar sign. We want to have it as a number, specifically a float.
Step3: Comments. This is mostly text data, which means it's assigned the dtype object. There are two things that would make the data easier to work with
Step4: str.title() returns a copy of the string in which first characters of all the words are capitalized.
Step5: Second
Step6: Comment. Note the commas separating answers with more than one choice. We want to unpack them somehow.
Step7: This looks bad. But we can always use pd.read_excel?. Let's look into the excel file.
Step8: The first three lines are empty. Skip those
Step9: Would be nice to have the countries as indices
Step10: The last two columns contain junk
Step11: What about the bottom of the table?
Step12: We still have a couple issues.
Step13: Useful columns
Step14: Look at the bottom
Step15: Missing values
Step16: Notice the , for thousands. As we saw before, there is an easy fix
Step17: Comment. This has several issues. Here's what we want
Step18: String methods
Step19: Pandas string methods. We can do the same thing to all the observations of a variable with so-called string methods. We append .str to a variable in a dataframe and then apply the string method of our choice. If this is part of converting a number-like entry that has mistakenly been given dtype object, we then convert its dtype with the astype method.
Step20: Comment. We did everything here in one line
Step21: Want to convert the year variables into float
Step22: This error indicates that somewhere in weo['1980'] there is a string value --. We want to convert that into NaN. Later we will see how we can do that directly. For now use read_csv() again
Step23: Example. Here we strip off the numbers at the end of the indexes in the OECD docs dataframe. This involves some experimentation
Step24: Comment. Not quite, we only want to split once.
Step25: Comments.
Step26: What to do. We use the replace method on the whole dataframe. To mark something as missing, we replace it as None, which Pandas interprets as missing and labels NaN.
Step27: Comment. Replace automatically updates the dtypes. Here the double dots led us to label the variables as objects. After the replace, they're now floats, as they should be.
Step28: Comment. Unlike the string methods we described earlier, this use of replace affects complete entries, not elements of string entries. For example, suppose we tried to replace the periods in decimal numbers with an asterisk. We could try the following, but it doesn't work
Step29: Working with missing values
Step30: Comment. We usually don't have to worry about this, Pandas takes care of missing values automatically.
Step31: Selecting variables and observations
Step32: Example. Let's try each of these in a different cell and see what they do
Step33: Series indexing
Step34: Boolean selection
Step35: Find variable and country codes. Which ones do we want? Let's start by seeing that's available. Here we create special dataframes that include all the variables and their definitions and all the countries.
Step36: Exercise.
Step37: Boolean selection
Step38: Exercise. Construct dataframes for which
Step39: Comment. We're choosing 2 variables from 45, so there are lots of Falses.
Step40: Comment. We can do the same thing with countries. If we want to choose two variables and three countries, the code looks like
Step41: Comments.
Step42: Exercise (challenging). Plot the variable gdp['2015'] as a bar chart. What would you say it needs?
Step43: The contains method
Step44: Comment. That's pretty good, we now know which students mentioned Machine Learning and which did not. It's more useful, though, to convert this to zeros (False) and ones (True), which we do with this trick
Step45: Comment. Now let's do the same for some of the other entries and save them in new variables.
Step46: Comment. You might want to think about this a minute. Or two.
Step47: Comment. Just for fun, here's a bar graph of the result.
Step48: and a pie chart
Step49: Review
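As a compact alternative to the str.contains loop used for the multi-response poll question, pandas can split and one-hot encode in a single call; a small sketch with made-up answers:
import pandas as pd
s = pd.Series(['Web scraping, regression', 'Machine Learning', 'regression'])
print(s.str.get_dummies(sep=', '))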
|
6,575
|
<ASSISTANT_TASK:>
Python Code:
# Introspection (reflection) is the ability of software to identify and
# report its own internal structures, such as types and variables.
# Getting some information
# about global objects in the program
from types import ModuleType
def info(n_obj):
    # Create a reference to the object
obj = globals()[n_obj]
# Show object information
print ('Name of object:', n_obj)
print ('Identifier:', id(obj))
    print ('Type:', type(obj))
print ('Representation:', repr(obj))
# If it is a module
if isinstance(obj, ModuleType):
        print('items:')
        for item in dir(obj):
            print(item)
        print()
# Showing information
for n_obj in dir()[:10]: # The slice [:10] is used just to limit objects
info(n_obj)
import types   # Python 2 had types.StringType; Python 3 uses the built-in str
s = ''
if isinstance(s, str):
    print('s is a string.')
import os.path
# inspect: "friendly" introspection module
import inspect
print('Object:', inspect.getmodule(os.path))
print('Class?', inspect.isclass(str))
# Lists all functions that exist in "os.path"
print('Member:', end=' ')
for name, struct in inspect.getmembers(os.path):
    if inspect.isfunction(struct):
        print(name, end=' ')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Python also has a module called types, which has the definitions of the basic types of the interpreter.
Step2: Through introspection, it is possible to determine the fields of a database table, for example.
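A sketch of that last point: sqlite3 can report a table's fields through its table_info pragma (the table below is made up).
import sqlite3
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE person (id INTEGER, name TEXT)')
for row in conn.execute('PRAGMA table_info(person)'):
    print(row[1], row[2])  # column name and declared type
conn.close()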
|
6,576
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.read_csv('data/train.csv')
df.head()
df.tail()
df.shape
df['Fare'].head()
df[['Fare', 'Sex']].head()
df['Sex'].value_counts()
df['Age'].median()
%matplotlib inline
import seaborn
fig = df['Pclass'].hist()
fig = df.hist(figsize=(15,5))
df.head()
mask = df['Embarked'] == 'C'
mask.head()
df_filter = df[mask] # df_filter = df[df['Embarked'] == 'C']
df_filter.head()
df_filter = df.loc[df['Embarked'] == 'C']
df_filter.head()
test = df.loc[df['Age'] > 30., ['Age', 'Fare', 'Sex']]
test.head()
test = df.loc[(df['Age'] > 30.) & (df['Fare'] < 50.), 'Age':'Fare']
test.head()
df.head()
import numpy as np
# wrong way: NaN never compares equal, so `!= np.nan` is True for every row
# and this filters nothing
df[df['Cabin']!=np.nan]
df.loc[df['Cabin'].notnull()]
df.notnull().head()
df_filter = df.loc[df.notnull().all(axis=1)]
df_filter.shape
df.loc[df['Cabin'].isnull(), 'Cabin'] = 'U0'
df.head()
df.loc[df['Fare'].isnull()].shape
df['Fare'].median()
df.loc[df['Fare'].isnull(), 'Fare'] = df['Fare'].median()
df.loc[df['Embarked'].isnull()].shape
df['Embarked'].mode()
df['Embarked'].mode()[0]
df.loc[df['Embarked'].isnull(), 'Embarked'] = df['Embarked'].dropna().mode()[0]
for col in df.columns:
print("NaNs in column {0} = {1}".format(col, df.loc[df[col].isnull()].shape[0]))
df.loc[df['Age'].isnull(), 'Age'] = df['Age'].mean()
df.head()
print(df['Embarked'].unique())
dummies_df = pd.get_dummies(df['Embarked'])
dummies_df.head()
def addEmbarked(name):
return 'Embarked_' + name
dummies_df = dummies_df.rename(columns=addEmbarked)
dummies_df.head()
df = pd.concat([df, dummies_df], axis=1)
df.head()
df['EmbarkedNum'] = pd.factorize(df['Embarked'])[0]
df.head(6)
pd.factorize(df['Sex'])
df['Female'] = pd.factorize(df['Sex'])[0]
df.head()
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(df[['Age', 'Fare']]) # .fit(df) to scale all numerical columns
print("Means = {0}".format(scaler.mean_))
print("Stdevs = {0}".format(scaler.scale_))
df[['Age', 'Fare']] = scaler.transform(df[['Age', 'Fare']])
df.head()
fares = df['Fare'].values
type(fares)
input_df = pd.read_csv('data/train.csv')
submit_df = pd.read_csv('data/test.csv')
print(input_df.shape)
print(submit_df.shape)
submit_df.head()
input_df.head()
submit_df = pd.read_csv('data/test.csv', index_col=0)
submit_df.head()
input_df = pd.read_csv('data/train.csv', index_col=0)
submit_df = pd.read_csv('data/test.csv', index_col=0)
df = pd.concat([input_df, submit_df])
df.tail()
print(df.shape[1], "columns")
print(df.shape[0], "rows")
print(df.columns.values)
def process_data(df):
df['Female'] = pd.factorize(df['Sex'])[0]
df.loc[df['Age'].isnull(), 'Age'] = df['Age'].mean()
df.loc[df['Fare'].isnull(), 'Fare'] = df['Fare'].median()
df.loc[df['Cabin'].isnull(), 'Cabin'] = 'U0'
df.loc[df['Embarked'].isnull(), 'Embarked'] = df['Embarked'].dropna().mode()[0]
dummies_df = pd.get_dummies(df['Embarked'])
def addEmbarked(name):
return 'Embarked_' + name
dummies_df = dummies_df.rename(columns=addEmbarked)
df = pd.concat([df, dummies_df], axis=1)
df['EmbarkedNum'] = pd.factorize(df['Embarked'])[0]
return df
df = process_data(df)
df.tail()
features = ['Age', 'Fare', 'Parch', 'Pclass', 'SibSp', 'Female', 'EmbarkedNum']
df_test = df.loc[df['Survived'].isnull(), features]
df_train = df.loc[df['Survived'].notnull(), features+['Survived']]
df_train.head()
X_train = df_train[features].values
y_train = df_train['Survived'].values
print(X_train[0:5])
print(y_train[0:5])
print("X has {0} rows".format(X_train.shape[0]))
print("y has {0} rows".format(y_train.shape[0]))
df['Cabin'].unique()
import re
def getDeck(cabin):
match = re.search("([A-Z])", cabin)
return match.group(1) if match is not None else None
def getCabinNum(cabin):
match = re.search("([0-9]+)", cabin)
return match.group(1) if match is not None else None
print(getDeck('C237'))
print(getCabinNum('C237'))
df['Deck'] = df['Cabin'].map(getDeck)
df['CabinNum'] = df['Cabin'].map(getCabinNum)
df.head()
df['CabinNum'].isnull().value_counts()
df.loc[df['CabinNum'].isnull(), 'CabinNum'] = 0
df['Deck'].isnull().value_counts()
df['DeckNum'] = pd.factorize(df['Deck'])[0]
testname = df.loc[1, 'Name']
print(testname)
re.split(' ', testname)
def numNames(name):
return len(re.split(' ', name))
df['NumNames'] = df['Name'].map(numNames)
df.head()
testname
re.search(", (.+?)\.", testname).group(1)
def getTitle(name):
    match = re.search(r", (.*?)\.", name)
return match.group(1) if match is not None else None
df['Title'] = df['Name'].map(getTitle)
df.head()
df.loc[df['Title'] == 'Jonkheer', 'Title'] = 'Master'
df.loc[df['Title'].isin(['Ms', 'Mlle']), 'Title'] = 'Miss'
df.loc[df['Title'] == 'Mme', 'Title'] = 'Mrs'
df.loc[df['Title'].isin(['Capt', 'Don', 'Major', 'Col', 'Sir']), 'Title'] = 'Sir'
df.loc[df['Title'].isin(['Dona', 'Lady', 'the Countess']), 'Title'] = 'Lady'
df['Title'].value_counts()
df = pd.concat([df, pd.get_dummies(df['Title']).rename(columns=lambda x: 'Title_' + str(x))], axis=1)
df.head()
def process_data(df):
df['Female'] = pd.factorize(df['Sex'])[0]
df.loc[df['Age'].isnull(), 'Age'] = df['Age'].mean()
df.loc[df['Fare'].isnull(), 'Fare'] = df['Fare'].median()
df.loc[df['Cabin'].isnull(), 'Cabin'] = 'U0'
df.loc[df['Embarked'].isnull(), 'Embarked'] = df['Embarked'].dropna().mode()[0]
dummies_df = pd.get_dummies(df['Embarked'])
def addEmbarked(name):
return 'Embarked_' + name
dummies_df = dummies_df.rename(columns=addEmbarked)
df = pd.concat([df, dummies_df], axis=1)
df['EmbarkedNum'] = pd.factorize(df['Embarked'])[0]
import re
def getDeck(cabin):
match = re.search("([A-Z])", cabin)
return match.group(1) if match is not None else None
def getCabinNum(cabin):
match = re.search("([0-9]+)", cabin)
return match.group(1) if match is not None else None
df['Deck'] = df['Cabin'].map(getDeck)
df['DeckNum'] = pd.factorize(df['Deck'])[0]
df['CabinNum'] = df['Cabin'].map(getCabinNum)
df.loc[df['CabinNum'].isnull(), 'CabinNum'] = 0
def numNames(name):
return len(re.split(' ', name))
df['NumNames'] = df['Name'].map(numNames)
def getTitle(name):
match = re.search(", (.*?)\.", name)
return match.group(1) if match is not None else None
df['Title'] = df['Name'].map(getTitle)
df.loc[df['Title'] == 'Jonkheer', 'Title'] = 'Master'
df.loc[df['Title'].isin(['Ms', 'Mlle']), 'Title'] = 'Miss'
df.loc[df['Title'] == 'Mme', 'Title'] = 'Mrs'
df.loc[df['Title'].isin(['Capt', 'Don', 'Major', 'Col', 'Sir']), 'Title'] = 'Sir'
df.loc[df['Title'].isin(['Dona', 'Lady', 'the Countess']), 'Title'] = 'Lady'
df = pd.concat([df, pd.get_dummies(df['Title']).rename(columns=lambda x: 'Title_' + str(x))], axis=1)
return df
input_df = pd.read_csv('data/train.csv', index_col=0)
submit_df = pd.read_csv('data/test.csv', index_col=0)
df = pd.concat([input_df, submit_df])
df = process_data(df)
features = ['Age', 'Fare', 'Parch', 'Pclass', 'SibSp', 'Female', 'EmbarkedNum', 'DeckNum', 'CabinNum', 'NumNames', 'Title_Dr','Title_Lady','Title_Master','Title_Miss','Title_Mr','Title_Mrs','Title_Rev','Title_Sir']
df_test = df.loc[df['Survived'].isnull(), features]
df_train = df.loc[df['Survived'].notnull(), features+['Survived']]
df_train.to_csv('data/train_processed.csv') # NOT FEATURE SCALED!
df_test.to_csv('data/test_processed.csv')
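# Aside (not in the original notebook): as the comment above notes, the saved
# files are not feature scaled. A minimal sketch of scaling the continuous
# columns with the StandardScaler pattern shown earlier -- fit on the training
# set only so no test-set statistics leak into the transform:
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(df_train[['Age', 'Fare']])
df_train[['Age', 'Fare']] = scaler.transform(df_train[['Age', 'Fare']])
df_test[['Age', 'Fare']] = scaler.transform(df_test[['Age', 'Fare']])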
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Magic Pandas
Step2: Data at a glance
Step3: Plotting
Step4: Filtering
Step5: Boolean Masks
Step6: Filtering Dataframes
Step7: Memory Issues
Step8: Selecting specific columns
Step9: Combining criteria / columns
Step10: Requirements for working with data in scikit-learn
Step11: Wrong way
Step12: Right Way
Step13: Approach 1
Step14: How much data is left?
Step15: Approach 2
Step16: Approach 3
Step17: Categorical Variables
Step18: Approach 4
Step19: Requirements for working with data in scikit-learn
Step20: Approach 1
Step21: Approach 2
Step22: What are the classes?
Step23: Continuous Features
Step24: Requirements for working with data in scikit-learn
Step25: Getting a pipeline ready for sklearn
Step26: Indexing
Step27: Putting it all together
Step28: Requirements for working with data in scikit-learn
Step29: Feature Engineering
Step30: Regular Expressions
Step31: Apply a function to all rows to generate a new feature
Step32: What to do with the name?
Step33: Title
Step34: Final Data Processing Pipeline
|
6,577
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('../python')
from HTKFeat import MFCC_HTK
import numpy as np
%matplotlib inline
import matplotlib.pyplot as P
mfcc=MFCC_HTK()
signal = mfcc.load_raw_signal('../python-test/file.raw')
def draw_signal(signal, fs):
sig_len=signal.size/fs #in seconds
P.figure(figsize=(15,4))
t=np.linspace(0,sig_len,signal.size)
P.plot(t,signal)
P.figure(figsize=(15,4))
s=P.specgram(signal,Fs=fs)
P.xlim(0,sig_len)
P.figure(figsize=(15,4))
f=np.linspace(0,fs/2,signal.size//2+1)  # // keeps the rFFT bin count an integer (Python 3 safe)
P.plot(f,np.abs(np.fft.rfft(signal)))
draw_signal(signal,16000)
from ipywidgets import *
from IPython.display import display, HTML  # HTML is used below to embed the audio player
import scipy.io.wavfile
def html5audio(signal, fs, filepath):
scipy.io.wavfile.write(filepath,fs,signal.astype('int16'))
src = "<audio controls=\"controls\" style=\"width:600px\" >\
<source src=\"{}\" type=\"audio/wav\" />\
Your browser does not support the audio element.\
</audio>".format(filepath)
display(HTML(src))
html5audio(signal,16000,'../python-test/file.wav')
from scipy.signal import resample
signal8k=resample(signal,signal.size//2)  # resample expects an integer sample count
draw_signal(signal8k,8000)
html5audio(signal8k,8000,'../python-test/file.wav')
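# Aside (a sketch, not part of the original notebook): scipy.signal.decimate is
# an alternative to the FFT-based resample above -- it applies an anti-aliasing
# low-pass filter before keeping every q-th sample:
from scipy.signal import decimate
signal8k_dec = decimate(signal, 2)
draw_signal(signal8k_dec, 8000)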
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here we load the file and display its waveform, spectrogram and amplitude spectrum.
Step2: Here we also create an HTML5 audio widget to hear how it actually sounds.
Step3: Resampling
Step4: Looking at it, there isn't anything too fishy.
Step5: It doesn't sound too bad either.
|
6,578
|
<ASSISTANT_TASK:>
Python Code:
# Imports assumed by this snippet; a DataFrame df with 'tenure', 'b_Churn' and
# 'b_MultipleLines' columns is assumed to be loaded earlier in the notebook:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import seaborn as sns
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test
from IPython.display import Image

kmf = KaplanMeierFitter()
T = df['tenure'] #duration
C = df["b_Churn"] #censorship - 1 if death/churn is seen, 0 if censored
palette = ["windows blue", "amber"]
sns.set_palette(sns.xkcd_palette(palette))
##SET UP PLOT
ax = plt.subplot(111)
plt.title('Kaplan-Meier Estimate of Driver Retention by Multiple Lines')
sns.set_context("talk")
d={} #to store the models
vlines = []
i=0
##PLOT FITTED GRAPH
#loop through segmentation variable, plot on same axes
for segment in df.b_MultipleLines.unique():
ix = df.b_MultipleLines == segment
d['kmf{}'.format(i+1)] = kmf.fit(T.loc[ix],C.loc[ix], label=segment)
ax = kmf.plot(ax=ax, figsize=(12,6))
ax.set_xlim([T.min(),T.max()])
ax.set_ylim([0.5,1])
y_ = kmf.survival_function_[kmf.survival_function_.round(2) ==.75].dropna().index[0]
ymax_ = kmf.survival_function_[kmf.survival_function_.round(2) ==.75].dropna()[i][y_]
vlines.append([y_, ymax_])
i+=1
##PLOT ANNOTATION
#for each intercept, plot a horizontal and a vertical line up to the fitted curve
xmin_=0
for i, xy in enumerate(vlines):
xmax_ = xy[0]
color="xkcd:{}".format(palette[i])
plt.axvline(x=xy[0], ymax=.5, alpha=.8, color=color, linestyle='dotted') #axes fractional
plt.hlines(y=.75, xmin=xmin_, xmax=xmax_, alpha=.8, color=color, linestyle='dotted') #axes data coordinates
xmin_ += xmax_ #to avoid overlapping hlines
## position text label
difference = vlines[1][0]-vlines[0][0]
label_frac_x = (vlines[0][0]+(difference)/2)/T.max()-.07 #midpoint (minus .07 centering factor)
label_frac_y = 0.2
#label first line
ax.annotate('Difference:\n{} Months'.format(difference),
xy=(vlines[0][0],.62), xycoords='data', color='white',
xytext=(label_frac_x,label_frac_y), textcoords='axes fraction',
arrowprops=dict(arrowstyle="-|>", #arrows removed for readability
fc="k", ec="k")
)
#label second line
ax.annotate('Difference:\n{} Months'.format(difference),
xy=(vlines[1][0],.62), xycoords='data', color='black',
xytext=(label_frac_x, label_frac_y), textcoords='axes fraction',
arrowprops=dict(arrowstyle="-|>", #arrows removed for readability
fc="k", ec='k')
)
##LEGEND
#override default legend
patches = [ mpatches.Patch(color="xkcd:windows blue", label='Has One Phoneline'),
mpatches.Patch(color="xkcd:amber", label='Has Multiple Phonelines')
]
plt.legend(handles=[patches[0],patches[1]], title="User Segmentation", loc='best');
multi = (df["b_MultipleLines"] == 1)
results = logrank_test(T[multi], T[~multi], C[multi], C[~multi], alpha=0.99 ) #at 99% confidence level
results.print_summary()
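# The numbers printed by print_summary() can also be pulled off the result
# object directly (standard lifelines StatisticalResult attributes):
print("test statistic:", results.test_statistic)
print("p-value:", results.p_value)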
# Image Credit: https://en.wikipedia.org/wiki/Log-rank_test
Image(url= "https://wikimedia.org/api/rest_v1/media/math/render/svg/68750c8d50ac1d055b3ddcc67a05de43b40fdebf")
Z = results.test_statistic
D = C.sum() #number of events observed
hazard_ratio = np.exp(Z*np.sqrt(4/D))
print(hazard_ratio)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The graph clearly shows that there is a difference in tenure between "single line" and "multiple line" telco users. Since the confidence intervals also appear on the graph, we can see that the difference is statistically significant from t > 0 until approximately t = 52 months, after which the churn rates become the same within the margin of error.
|
6,579
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np               # assumed by this snippet
import matplotlib.pyplot as plt  # assumed by this snippet

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
X, y, true_coefficient = make_regression(n_samples=200, n_features=30, n_informative=10, noise=100, coef=True, random_state=5)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=5, train_size=60, test_size=140)
print(X_train.shape)
print(y_train.shape)
from sklearn.linear_model import LinearRegression
linear_regression = LinearRegression().fit(X_train, y_train)
print("R^2 on training set: %f" % linear_regression.score(X_train, y_train))
print("R^2 on test set: %f" % linear_regression.score(X_test, y_test))
from sklearn.metrics import r2_score
print(r2_score(np.dot(X, true_coefficient), y))
plt.figure(figsize=(10, 5))
coefficient_sorting = np.argsort(true_coefficient)[::-1]
plt.plot(true_coefficient[coefficient_sorting], "o", label="true")
plt.plot(linear_regression.coef_[coefficient_sorting], "o", label="linear regression")
plt.legend()
from sklearn.model_selection import learning_curve
def plot_learning_curve(est, X, y):
training_set_size, train_scores, test_scores = learning_curve(est, X, y, train_sizes=np.linspace(.1, 1, 20))
estimator_name = est.__class__.__name__
line = plt.plot(training_set_size, train_scores.mean(axis=1), '--', label="training scores " + estimator_name)
plt.plot(training_set_size, test_scores.mean(axis=1), '-', label="test scores " + estimator_name, c=line[0].get_color())
plt.xlabel('Training set size')
plt.legend(loc='best')
plt.ylim(-0.1, 1.1)
plt.figure()
plot_learning_curve(LinearRegression(), X, y)
from sklearn.linear_model import Ridge
ridge_models = {}
training_scores = []
test_scores = []
for alpha in [100, 10, 1, .01]:
ridge = Ridge(alpha=alpha).fit(X_train, y_train)
training_scores.append(ridge.score(X_train, y_train))
test_scores.append(ridge.score(X_test, y_test))
ridge_models[alpha] = ridge
plt.figure()
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [100, 10, 1, .01])
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([100, 10, 1, .01]):
plt.plot(ridge_models[alpha].coef_[coefficient_sorting], "o", label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")
plt.figure()
plot_learning_curve(LinearRegression(), X, y)
plot_learning_curve(Ridge(alpha=10), X, y)
from sklearn.linear_model import Lasso
lasso_models = {}
training_scores = []
test_scores = []
for alpha in [30, 10, 1, .01]:
lasso = Lasso(alpha=alpha).fit(X_train, y_train)
training_scores.append(lasso.score(X_train, y_train))
test_scores.append(lasso.score(X_test, y_test))
lasso_models[alpha] = lasso
plt.figure()
plt.plot(training_scores, label="training scores")
plt.plot(test_scores, label="test scores")
plt.xticks(range(4), [30, 10, 1, .01])
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
plt.plot(true_coefficient[coefficient_sorting], "o", label="true", c='b')
for i, alpha in enumerate([30, 10, 1, .01]):
plt.plot(lasso_models[alpha].coef_[coefficient_sorting], "o", label="alpha = %.2f" % alpha, c=plt.cm.summer(i / 3.))
plt.legend(loc="best")
plt.figure(figsize=(10, 5))
plot_learning_curve(LinearRegression(), X, y)
plot_learning_curve(Ridge(alpha=10), X, y)
plot_learning_curve(Lasso(alpha=10), X, y)
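# The accompanying text also mentions ElasticNet, which mixes the L1 and L2
# penalties -- a minimal sketch (alpha and l1_ratio here are illustrative, not tuned):
from sklearn.linear_model import ElasticNet
enet = ElasticNet(alpha=1.0, l1_ratio=0.5).fit(X_train, y_train)
print("ElasticNet R^2 on training set: %f" % enet.score(X_train, y_train))
print("ElasticNet R^2 on test set: %f" % enet.score(X_test, y_test))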
from figures import plot_linear_svc_regularization
plot_linear_svc_regularization()
from sklearn.datasets import make_blobs
plt.figure()
X, y = make_blobs(random_state=42)
plt.scatter(X[:, 0], X[:, 1], c=plt.cm.spectral(y / 2.));
from sklearn.svm import LinearSVC
linear_svm = LinearSVC().fit(X, y)
print(linear_svm.coef_.shape)
print(linear_svm.intercept_.shape)
plt.scatter(X[:, 0], X[:, 1], c=plt.cm.spectral(y / 2.))
line = np.linspace(-15, 15)
for coef, intercept in zip(linear_svm.coef_, linear_svm.intercept_):
plt.plot(line, -(line * coef[0] + intercept) / coef[1])
plt.ylim(-10, 15)
plt.xlim(-10, 8);
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
digits = load_digits()
X_digits, y_digits = digits.data, digits.target
# split the dataset, apply grid-search
# %load solutions/17A_logreg_grid.py
# %load solutions/17B_learning_curve_alpha.py
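# A minimal sketch of what the first exercise could look like (the actual
# solutions/17A file is not shown here, so the parameter choices are illustrative):
from sklearn.model_selection import GridSearchCV, train_test_split
Xd_train, Xd_test, yd_train, yd_test = train_test_split(X_digits, y_digits, random_state=42)
grid = GridSearchCV(LogisticRegression(), param_grid={'C': [0.001, 0.01, 0.1, 1, 10]})
grid.fit(Xd_train, yd_train)
print("best params:", grid.best_params_)
print("test accuracy: %f" % grid.score(Xd_test, yd_test))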
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Linear Regression
Step2: Ridge Regression (L2 penalty)
Step3: Tuning alpha is critical for performance.
Step4: Lasso (L1 penalty)
Step5: Instead of picking Ridge or Lasso, you can also use ElasticNet, which uses both forms of regularization and provides a parameter to assign a weighting between them. ElasticNet typically performs the best amongst these models.
Step6: Similar to the Ridge/Lasso separation, you can set the penalty parameter to 'l1' to enforce sparsity of the coefficients (similar to Lasso) or 'l2' to encourage smaller coefficients (similar to Ridge).
Step7: Points are classified in a one-vs-rest fashion (aka one-vs-all), where we assign a test point to the class whose model has the highest confidence (in the SVM case, highest distance to the separating hyperplane) for the test point.
|
6,580
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd  # assumed by this snippet; not shown in the original cell

#contributions = pd.read_json(path_or_buf='../data/EGALITE4.brut.json', orient="columns")
def loadContributions(file, withsexe=False):
contributions = pd.read_json(path_or_buf=file, orient="columns")
rows = [];
rindex = [];
for i in range(0, contributions.shape[0]):
row = {};
row['id'] = contributions['id'][i]
rindex.append(contributions['id'][i])
if (withsexe):
if (contributions['sexe'][i] == 'Homme'):
row['sexe'] = 0
else:
row['sexe'] = 1
for question in contributions['questions'][i]:
if (question.get('Reponse')): # and (question['texte'][0:5] != 'Savez') :
row[question['titreQuestion']+' : '+question['texte']] = 1
for criteres in question.get('Reponse'):
# print(criteres['critere'].keys())
row[question['titreQuestion']+'. (Réponse) '+question['texte']+' -> '+str(criteres['critere'].get('texte'))] = 1
rows.append(row)
df = pd.DataFrame(data=rows)
df.fillna(0, inplace=True)
return df
df = loadContributions('../data/EGALITE1.brut.json', True)
df = df.merge(right=loadContributions('../data/EGALITE2.brut.json'), how='outer', right_on='id', left_on='id')
df = df.merge(right=loadContributions('../data/EGALITE3.brut.json'), how='outer', right_on='id', left_on='id')
df = df.merge(right=loadContributions('../data/EGALITE4.brut.json'), how='outer', right_on='id', left_on='id')
df = df.merge(right=loadContributions('../data/EGALITE5.brut.json'), how='outer', right_on='id', left_on='id')
df = df.merge(right=loadContributions('../data/EGALITE6.brut.json'), how='outer', right_on='id', left_on='id')
df.fillna(0, inplace=True)
df.index = df['id']
df.to_csv('consultation_an.csv', format='%d')
#df.columns = ['Q_' + str(col+1) for col in range(len(df.columns) - 2)] + ['id' , 'sexe']
df.head()
df = loadContributions('../data/EGALITE4.brut.json', True)
from sklearn.cluster import KMeans
from sklearn import metrics
import numpy as np
X = df.drop('id', axis=1).values
def train_kmeans(nb_clusters, X):
kmeans = KMeans(n_clusters=nb_clusters, random_state=0).fit(X)
return kmeans
#print(kmeans.predict(X))
#kmeans.cluster_centers_
def select_nb_clusters():
perfs = {};
for nbclust in range(2,10):
kmeans_model = train_kmeans(nbclust, X);
labels = kmeans_model.labels_
# from http://scikit-learn.org/stable/modules/clustering.html#calinski-harabaz-index
# we are in an unsupervised model. cannot get better!
# perfs[nbclust] = metrics.calinski_harabaz_score(X, labels);
perfs[nbclust] = metrics.silhouette_score(X, labels);
print(perfs);
return perfs;
df['clusterindex'] = train_kmeans(4, X).predict(X)
#df
perfs = select_nb_clusters();
# result :
# {2: 341.07570462155348, 3: 227.39963334619881, 4: 186.90438345452918, 5: 151.03979976346525, 6: 129.11214073405731, 7: 112.37235520885432, 8: 102.35994869157568, 9: 93.848315820675438}
optimal_nb_clusters = max(perfs, key=perfs.get);
print("optimal_nb_clusters" , optimal_nb_clusters);
km_model = train_kmeans(optimal_nb_clusters, X);
df['clusterindex'] = km_model.predict(X)
lGroupBy = df.groupby(['clusterindex']).mean();
# km_model.__dict__
cluster_profile_counts = df.groupby(['clusterindex']).count();
cluster_profile_means = df.groupby(['clusterindex']).mean();
global_counts = df.count()
global_means = df.mean()
cluster_profile_counts.head()
#cluster_profile_means.head()
#df.info()
df_profiles = pd.DataFrame();
nbclusters = cluster_profile_means.shape[0]
df_profiles['clusterindex'] = range(nbclusters)
for col in cluster_profile_means.columns:
if(col != "clusterindex"):
df_profiles[col] = np.zeros(nbclusters)
for cluster in range(nbclusters):
df_profiles[col][cluster] = cluster_profile_means[col][cluster]
# row.append(df[col].mean());
df_profiles.head()
#print(df_profiles.columns)
intereseting_columns = {};
for col in df_profiles.columns:
if(col != "clusterindex"):
global_mean = df[col].mean()
diff_means_global = abs(df_profiles[col] - global_mean). max();
# print(col , diff_means_global)
if(diff_means_global > 0.1):
intereseting_columns[col] = True
#print(intereseting_columns)
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
#cols = [ col for col in cluster_profile_counts.columns]
#cluster_profile_means.ix[0].plot.bar()
interesting = list(intereseting_columns.keys())
df_profiles_sorted = df_profiles[interesting].sort_index(axis=1)
df_profiles_sorted.plot.bar(figsize =(1, 1))
df_profiles_sorted.plot.bar(figsize =(16, 8), legend=False)
df_profiles_sorted.T
df_profiles.sort_index(axis=1).T
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Build clustering model
Step2: Build the optimal model and apply it
Step3: Cluster Profiles
|
6,581
|
<ASSISTANT_TASK:>
Python Code:
d = pd.read_csv("data/dataset_0.csv")
fig, ax = plt.subplots()
ax.plot(d.x,d.y,'o')
def linear(x,a,b):
return a + b*x
def linear(x,a,b):
return a + b*x
def linear_r(param,x,y):
return linear(x,param[0],param[1]) - y
def linear_r(param,x,y): # copied from previous cell
return linear(x,param[0],param[1]) - y # copied from previous cell
param_guesses = [1,1]
fit = scipy.optimize.least_squares(linear_r,param_guesses,
args=(d.x,d.y))
fit_a = fit.x[0]
fit_b = fit.x[1]
sum_of_square_residuals = fit.cost
x_range = np.linspace(np.min(d.x),np.max(d.x),100)
fig, ax = plt.subplots()
ax.plot(d.x,d.y,"o")
ax.plot(x_range,linear(x_range,fit_a,fit_b))
def linear(x,a,b):
    """Linear model of x using a (slope) and b (intercept)"""
    return a + b*x

def linear_r(param,x,y):
    """Residuals function for linear"""
    return linear(x,param[0],param[1]) - y
fig, ax = plt.subplots()
# Read data
d = pd.read_csv("data/dataset_0.csv")
ax.plot(d.x,d.y,'o')
# Perform regression
param_guesses = [1,1]
fit = scipy.optimize.least_squares(linear_r,param_guesses,args=(d.x,d.y))
fit_a = fit.x[0]
fit_b = fit.x[1]
sum_of_square_residuals = fit.cost
# Plot result
x_range = np.linspace(np.min(d.x),np.max(d.x),100)
ax.plot(x_range,linear(x_range,fit_a,fit_b))
fit
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What does the following code do?
Step2: What does the following code do?
Step3: What does the following code do?
Step4: What does the following code do?
Step7: Put together
|
6,582
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import pickle
%matplotlib inline
# Read in CBECS data
data = pd.DataFrame.from_csv('C:/F16-12-752-master/projects/thongyi_weijian1/data/CBECS.csv')
data.tail()
energydata=pd.DataFrame()
type_B=[2,13,16,26] # office, inpatient health care, service and public assembly
type_C=[1,3,4,5,6,7,8,9,10,11,12,14,15,17,18,19,20,21,22,23,24,25,91]
data_type=data
data_type=data_type[data_type.NGUSED!=2]
for i in type_C:
data_type=data_type[data_type.PBA!=i]
energydata['Building Type']=data_type.PBA
index=['ELBTU','NGBTU','ELVNBTU','NGHTBTU']
for i in index:
energydata[i]=data_type[i]/data_type.SQFT
energydata = energydata.dropna(how='any')
energydata = energydata[~(energydata == 0).any(axis=1)]
PBA1=energydata['Building Type'].unique()
count=[]
for i in PBA1:
count.append([energydata[energydata['Building Type']==i].shape[0],i])
count
# energydata[energydata['Building Type']==2].boxplot()
fig1 = plt.figure(figsize=(15,15))
times=1
data_seperate=[]
# energydata[energydata['Building Type']==type_B[1]]
for i in range(len(type_B)):
x=energydata[energydata['Building Type']==type_B[i]]
x=x.drop(x.columns[0],axis=1)
data_seperate.append(x)
for i in range(len(type_B)):
plt.subplot(len(type_B),2,times)
data_seperate[i].boxplot()
times=times+1
plt.title(type_B[i])
plt.ylim(0,300)
y=pd.DataFrame()
for i in range(len(type_B)):
y=y.append(data_seperate[i])
X=y.as_matrix().astype(np.float32)
from sklearn.cluster import KMeans
num_clust = 4
clusters = KMeans(n_clusters=num_clust).fit(X)
cluster_assignments = clusters.predict(X)
# plt.subplot(num_clust+1,1,1)
# plt.plot(cluster_assignments[:150])
# plt.ylim([0.2,1.1])
fig2 = plt.figure(figsize=(15,15))
for cluster_id in range(len(clusters.cluster_centers_)):
plt.subplot(num_clust,2,cluster_id+1)
cluster_members = X[cluster_assignments==cluster_id,:]
print(len(cluster_members))
for i in range(len(cluster_members)):
plt.plot(cluster_members[i,:], color='grey', lw='0.1')
plt.plot(clusters.cluster_centers_[cluster_id,:], color='k', lw='1')
# plt.ylim([-2000,2000])
y['assignment']=cluster_assignments
y=y[y.assignment!=1] # The weird samples can locate in two different clusters
# y=y[y.assignment!=3] #and the cluster number may be different every time run the code.
y=y.drop(y.columns[4],axis=1)
X=y.as_matrix().astype(np.float32)
num_clust = 4
clusters = KMeans(n_clusters=num_clust).fit(X)
cluster_assignments = clusters.predict(X)
# plt.subplot(num_clust+1,1,1)
# plt.plot(cluster_assignments[:150])
# plt.ylim([0.2,1.1])
fig2 = plt.figure(figsize=(20,15))
for cluster_id in range(len(clusters.cluster_centers_)):
plt.subplot(num_clust+1,2,cluster_id+1)
cluster_members = X[cluster_assignments==cluster_id,:]
print(len(cluster_members))
for i in range(len(cluster_members)):
plt.plot(cluster_members[i,:], color='grey', lw='0.1')
plt.plot(clusters.cluster_centers_[cluster_id,:], color='k', lw='1')
cluster_assignments
y['assignment']=cluster_assignments
y=y.join(data['PBA'],how='inner')
y['judge']=1
y['judge'].iloc[np.where(np.array(y.PBA)==2)]=1 # This number assignment requires visually judging.
y['judge'].iloc[np.where(np.array(y.PBA)==13)]=3
y['judge'].iloc[np.where(np.array(y.PBA)==16)]=0
y['judge'].iloc[np.where(np.array(y.PBA)==26)]=2
y[y['judge']==y['assignment']].count()
a=324/(139+434+23+58)
a
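# Aside (not in the original notebook): a confusion matrix gives the same
# comparison without the manual counting above -- rows are the visually
# assigned labels, columns the k-means assignments:
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y['judge'], y['assignment']))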
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Please download the dataset and change the file path.
Step2: This time, four building types are selected: office, public assembly, inpatient health care and service.
Step3: One cluster contains only 12 samples. We treat these as outliers and delete them.
Step4: After deleting the outlier samples, the clustering result is much more reasonable.
Step5: The mapping between cluster assignment and building type has to be determined by visual inspection, and it can differ each time the notebook is run.
|
6,583
|
<ASSISTANT_TASK:>
Python Code:
data_path = "data/09/PdNiP_test.hspy"
%matplotlib inline
import pyxem as pxm
import hyperspy.api as hs
pxm.__version__
data = hs.load("./data/09/PdNiP_test.hspy")
data.set_signal_type("electron_diffraction")
data.beam_energy=200
data.unit = "k_nm^-1"
mask =data.get_direct_beam_mask(20)
# Affine correction from fitting an ellipse
import numpy as np
center=(31.2,31.7)
affine=np.array([[ 1.03725511, -0.02662789, 0. ],
[-0.02662789, 1.01903215, 0. ],
[ 0. , 0. , 1. ]])
data.set_ai(center=center)
rad = data.get_azimuthal_integral2d(npt=100)
rad.sum().plot()
summed = rad.sum()
mask = ((summed>4e6)+(summed<3e5))
mask.plot()
rad.plot(vmax=4000)
cor = rad.get_angular_correlation(mask=mask)
cor.plot()
cor = rad.map(pxm.utils.correlation_utils._correlation, inplace=False, axis=1, normalize=True)
cor.isig[:].plot(vmax=1, vmin=-1)
power = cor.get_angular_power()
import matplotlib.pyplot as plt
f = plt.figure(figsize=(15,10))
power.plot_symmetries(k_region = [3.,4.5],fig=f)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id='s2'></a>
Step2: Note
Step3: <a id='s3'></a>
Step4: <a id='s4'></a>
|
6,584
|
<ASSISTANT_TASK:>
Python Code:
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
import sys
import os
from datetime import datetime
from tqdm import tqdm
from PIL import Image
import imageio
import warnings
warnings.filterwarnings('ignore', category=FutureWarning)
# ffmpeg installation location, for creating videos
plt.rcParams['animation.ffmpeg_path'] = str(Path.home() / "Documents/dev_tools/ffmpeg-20190623-ffa64a4-win64-static/bin/ffmpeg.exe")
%load_ext autoreload
%autoreload 2
# StyleGAN Utils
from stylegan_utils import load_network, gen_image_fun, synth_image_fun, create_video, map_latents
import dnnlib
import dataset_tool
import run_projector
import projector
import training.dataset
import training.misc
# Specific of encoder repos, comment out if not needed
#from encoder.perceptual_model import PerceptualModel
#from encoder.generator_model import Generator
# Data Science Utils
sys.path.append(os.path.join(os.pardir, 'data-science-learning'))
from ds_utils import generative_utils
res_dir = Path.home() / 'Documents/generated_data/stylegan'
MODELS_DIR = Path("C:/Users/User/Documents/models/stylegan2")
MODEL_NAME = 'drawing2_1024'
SNAPSHOT_NAME = 'network-snapshot-002048'
Gs, Gs_kwargs, noise_vars = load_network(str(MODELS_DIR / MODEL_NAME / SNAPSHOT_NAME) + '.pkl')
Z_SIZE = Gs.input_shape[1:][0]
IMG_SIZE = Gs.output_shape[2:]
IMG_SIZE
def project_images(images_dir, tfrecord_dir, data_dir, num_steps, num_snapshots, pure_projector=False,
lr=0.1, dlatent_init_dir=None):
# setup projector
print('Setting up projector')
proj = projector.Projector(num_steps=num_steps, pure_projector=pure_projector, initial_learning_rate=lr)
proj.set_network(Gs)
# generate tfrecords
images_paths = dataset_tool.create_from_images(str(tfrecord_dir), str(images_dir), False)
nb_images = len(images_paths)
# loading images from tfrecords
dataset_obj = training.dataset.load_dataset(data_dir=str(data_dir), tfrecord_dir=tfrecord_dir,
max_label_size=0, verbose=True, repeat=False, shuffle_mb=0)
assert dataset_obj.shape == Gs.output_shape[1:]
# project all loaded images
print('=======================')
for image_idx in tqdm(range(nb_images)):
print(f'Projecting image {image_idx}/{nb_images}')
images, _labels = dataset_obj.get_minibatch_np(1)
images = training.misc.adjust_dynamic_range(images, [0, 255], [-1, 1])
# loading init dlatent, if given
if dlatent_init_dir is not None:
dlatent_init = np.load(dlatent_init_dir / (Path(images_paths[image_idx]).stem + '.npy'))
else:
dlatent_init = None
run_path = data_dir / f'out_{image_idx}'
run_path.mkdir()
run_projector.project_image(proj, targets=images,
png_prefix=dnnlib.make_run_dir_path(str(run_path / 'image_')),
num_snapshots=num_snapshots,
dlatent_init=dlatent_init)
data_dir = res_dir / 'projection' / MODEL_NAME / SNAPSHOT_NAME / datetime.now().strftime("%Y%m%d_%H%M%S") # where the projections results will be saved
images_dir = Path.home() / 'Documents/generated_data/'
tfrecord_dir = data_dir / 'tfrecords'
project_images(images_dir=images_dir, tfrecord_dir=tfrecord_dir, data_dir=data_dir,
num_steps=1000, num_snapshots=10, pure_projector=False,
dlatent_init_dir=None)#data_dir.parent / 'latent_init')
create_video(data_dir / 'out', res_dir / 'projection' / 'out_{}.mp4'.format(''))
from encoder.perceptual_model import PerceptualModel
from encoder.generator_model import Generator
import pretrained_networks
_G, _D, Gs = pretrained_networks.load_networks((str(MODELS_DIR / MODEL_NAME / SNAPSHOT_NAME) + '.pkl'))
BATCH_SIZE = 1
PERCEPTUAL_MODEL_IMG_SIZE = 256
# setup utils generator and perceptual model
generator = Generator(Gs, BATCH_SIZE, randomize_noise=False)
perceptual_model = PerceptualModel(PERCEPTUAL_MODEL_IMG_SIZE, layer=9, batch_size=BATCH_SIZE)
perceptual_model.build_perceptual_model(generator, _D)
def split_to_batches(l, n):
for i in range(0, len(l), n):
yield l[i:i + n]
def encode_images(images_dir, data_dir, iterations, num_snapshots, learning_rate=1.):
# collect images
images_paths = [str(img) for img in images_dir.glob('*')][:3]
run_path = data_dir
run_path.mkdir()
snapshot_steps = set(iterations - np.linspace(0, iterations, num_snapshots, endpoint=False, dtype=int))
# project all loaded images
for images_batch in tqdm(split_to_batches(images_paths, BATCH_SIZE), total=len(images_paths)//BATCH_SIZE):
cur_step = 0
images_names = [os.path.splitext(os.path.basename(img_path))[0] for img_path in images_batch]
perceptual_model.set_reference_images(images_batch)
optimizer = perceptual_model.optimize(generator.dlatent_variable,
iterations=iterations)
pbar = tqdm(optimizer, leave=False, mininterval=9, total=iterations)
for loss in pbar:
cur_step += 1
if cur_step in snapshot_steps:
#print(' '.join(images_names), ' loss:', loss)
# generate images from found dlatents and save them
generated_images = generator.generate_images()
generated_dlatents = generator.get_dlatents()
for img_array, dlatent, img_name in zip(generated_images, generated_dlatents, images_names):
img = Image.fromarray(img_array, 'RGB')
img.save(str(run_path / f'{cur_step}_{img_name}.png'), 'PNG')
np.save(str(run_path / f'{cur_step}_{img_name}.npy'), dlatent)
generator.reset_dlatents()
data_dir = res_dir / 'projection' / MODEL_NAME / SNAPSHOT_NAME / datetime.now().strftime("%Y%m%d_%H%M%S") # where the projections results will be saved
images_dir = Path.home() / 'Documents/generated_data/'
encode_images(images_dir=images_dir, data_dir=data_dir, iterations=1000, num_snapshots=10)
target_latents = np.random.randn(18, Z_SIZE)
img = gen_image_fun(Gs, target_latents, Gs_kwargs, noise_vars, truncation_psi=0.5)
plt.imshow(img)
img = synth_image_fun(Gs, target_latents[np.newaxis,:,:], Gs_kwargs, randomize_noise=True)
plt.imshow(img)
zlatents = np.random.randn(30, Z_SIZE)
truncation_psi = 0.5
data_dir = res_dir / 'projection' / MODEL_NAME / SNAPSHOT_NAME / 'rand_gen' / datetime.now().strftime("%Y%m%d_%H%M%S")
data_dir.mkdir(parents=True, exist_ok=False)
dlatents = map_latents(Gs, zlatents, truncation_psi)
for i, dl in enumerate(dlatents):
img = synth_image_fun(Gs, dl[np.newaxis,:,:], Gs_kwargs, randomize_noise=True)
img = Image.fromarray(img, 'RGB')
img.save(str(data_dir / f'{i}.png'), 'PNG')
np.save(str(data_dir / f'{i}.npy'), dl)
from keras.models import load_model
resnet = load_model(MODELS_DIR / MODEL_NAME / 'resnet' / 'finetuned_resnet.h5')
resnet.summary()
resnet_img_size = (512, 512)
resnet.summary()
target_img = Image.open("")
target_img = target_img.resize(resnet_img_size)
plt.imshow(target_img)
predicted_latent = resnet.predict(np.array(target_img)[np.newaxis,:])
img = synth_image_fun(Gs, predicted_latent, Gs_kwargs, randomize_noise=True)
plt.imshow(img)
def generate_latent_init(resnet_path, images_dir, out_dir):
# setup projector
print('Setting up resnet model')
resnet = load_model(resnet_path)
resnet_img_size = (512, 512)
# project all images
print('=======================')
all_images = list(images_dir.glob('*.png')) + list(images_dir.glob('*.jpg'))
for img_path in tqdm(all_images):
target_img = Image.open(str(img_path))
target_img = target_img.resize(resnet_img_size)
predicted_latent = resnet.predict(np.array(target_img)[np.newaxis,:])[0]
out_dir.mkdir(parents=True, exist_ok=True)
np.save(out_dir / (img_path.stem + '.npy'), predicted_latent)
latent_init_dir = res_dir / 'projection' / MODEL_NAME / SNAPSHOT_NAME / 'latent_init'
images_dir = Path.home() / 'Documents/generated_data/'
generate_latent_init(resnet_path=MODELS_DIR / MODEL_NAME / 'resnet' / 'finetuned_resnet.h5',
images_dir=images_dir, out_dir=latent_init_dir)
load_dir = res_dir / 'projection' / 'original_ffhq' / 'stylegan2-ffhq-config-f'
out_dir = res_dir / 'projection' / MODEL_NAME / SNAPSHOT_NAME / 'dlatent_mix' / datetime.now().strftime("%Y%m%d_%H%M%S")
out_dir.mkdir(parents=True, exist_ok=False)
for i, dl_path in enumerate(load_dir.glob('*.npy')):
dl = np.load(dl_path)
img = synth_image_fun(Gs, dl[np.newaxis,:,:], Gs_kwargs, randomize_noise=False)
other_img = imageio.imread(str(dl_path).replace('npy', 'png'))
w = h = img.shape[0]
canvas = Image.new('RGBA', (w*2,h), 'white')
canvas.paste(Image.fromarray(img), (0, 0))
canvas.paste(Image.fromarray(other_img), (w, 0))
canvas.save(str(out_dir / f'{i}.png'), 'PNG')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Network
Step2: Project
Step3: Encode
Step4: Generate Images
Step5: Generate some random samples and save them to disk
Step6: Projected Latent Initialization
Step7: Load latents from other network and generate with current
|
6,585
|
<ASSISTANT_TASK:>
Python Code:
from pynq.overlays.base import BaseOverlay
from pynq.lib.video import *
base = BaseOverlay("base.bit")
# monitor configuration: 640*480 @ 60Hz
Mode = VideoMode(640,480,24)
hdmi_out = base.video.hdmi_out
hdmi_out.configure(Mode,PIXEL_BGR)
hdmi_out.start()
# monitor (output) frame buffer size
frame_out_w = 1920
frame_out_h = 1080
# camera (input) configuration
frame_in_w = 640
frame_in_h = 480
# initialize camera from OpenCV
import cv2
videoIn = cv2.VideoCapture(0)
videoIn.set(cv2.CAP_PROP_FRAME_WIDTH, frame_in_w);
videoIn.set(cv2.CAP_PROP_FRAME_HEIGHT, frame_in_h);
print("Capture device is open: " + str(videoIn.isOpened()))
# Capture webcam image
import numpy as np
ret, frame_vga = videoIn.read()
# Display webcam image via HDMI Out
if (ret):
outframe = hdmi_out.newframe()
outframe[0:480,0:640,:] = frame_vga[0:480,0:640,:]
hdmi_out.writeframe(outframe)
else:
raise RuntimeError("Failed to read from camera.")
# Output webcam image as JPEG
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
plt.imshow(frame_vga[:,:,[2,1,0]])
plt.show()
import cv2
np_frame = frame_vga
face_cascade = cv2.CascadeClassifier(
'/home/xilinx/jupyter_notebooks/base/video/data/'
'haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier(
'/home/xilinx/jupyter_notebooks/base/video/data/'
'haarcascade_eye.xml')
gray = cv2.cvtColor(np_frame, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for (x,y,w,h) in faces:
cv2.rectangle(np_frame,(x,y),(x+w,y+h),(255,0,0),2)
roi_gray = gray[y:y+h, x:x+w]
roi_color = np_frame[y:y+h, x:x+w]
eyes = eye_cascade.detectMultiScale(roi_gray)
for (ex,ey,ew,eh) in eyes:
cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)
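# Quick sanity check on the detector output (not in the original notebook):
print("Detected {0} face(s)".format(len(faces)))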
# Output OpenCV results via HDMI
outframe[0:480,0:640,:] = frame_vga[0:480,0:640,:]
hdmi_out.writeframe(outframe)
# Output OpenCV results via matplotlib
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
plt.imshow(np_frame[:,:,[2,1,0]])
plt.show()
videoIn.release()
hdmi_out.stop()
del hdmi_out
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 2
Step2: Step 3
Step3: Step 4
Step4: Step 5
Step5: Step 6
Step6: Step 7
Step7: Step 8
|
6,586
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import macrodensity as md
import math
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
extrema = md.vasp_tools.get_band_extrema('OUTCAR_ZnO')
print(extrema)
extrema = md.vasp_tools.get_band_extrema('OUTCAR_ZnS')
print(extrema)
input_file = 'LOCPOT'
lattice_vector = 3.6
output_file = 'planar.dat'
if os.path.isfile('LOCPOT'):
print('LOCPOT already exists')
else:
os.system('bunzip2 LOCPOT.bz2')
vasp_pot, NGX, NGY, NGZ, Lattice = md.read_vasp_density(input_file)
vector_a,vector_b,vector_c,av,bv,cv = md.matrix_2_abc(Lattice)
resolution_x = vector_a/NGX
resolution_y = vector_b/NGY
resolution_z = vector_c/NGZ
grid_pot, electrons = md.density_2_grid(vasp_pot,NGX,NGY,NGZ)
## POTENTIAL
planar = md.planar_average(grid_pot, NGX, NGY, NGZ)
## MACROSCOPIC AVERAGE
macro = md.macroscopic_average(planar, lattice_vector/2, resolution_z)
fig, ax1 = plt.subplots(1, 1, sharex=True)
textsize = 22
mpl.rcParams['xtick.labelsize'] = textsize
mpl.rcParams['ytick.labelsize'] = textsize
mpl.rcParams['figure.figsize'] = (15, 8)
ax1.plot(planar,label="Planar",lw=3)
ax1.plot(macro,label="Macroscopic",lw=3)
ax1.set_ylabel('V/V', fontsize=22)
ax1.set_xlabel('Grid position', fontsize=22)
ax1.set_xlim(0,len(planar))
ax1.set_facecolor((0.95,0.95,0.95))
ax1.grid(True)
ax1.legend(fontsize=22)
plt.savefig('hj-offset.png')
plt.show()
np.savetxt(output_file,macro)
extrema = md.vasp_tools.get_band_extrema('OUTCAR_ZnO_av')
print(extrema)
extrema = md.vasp_tools.get_band_extrema('OUTCAR_ZnS_av')
print(extrema)
input_file = 'LOCPOT'
lattice_vector = 3.6
output_file = 'planar.dat'
vasp_pot, NGX, NGY, NGZ, Lattice = md.read_vasp_density(input_file)
vector_a,vector_b,vector_c,av,bv,cv = md.matrix_2_abc(Lattice)
resolution_x = vector_a/NGX
resolution_y = vector_b/NGY
resolution_z = vector_c/NGZ
grid_pot, electrons = md.density_2_grid(vasp_pot,NGX,NGY,NGZ)
## POTENTIAL
planar = md.planar_average(grid_pot,NGX,NGY,NGZ)
## MACROSCOPIC AVERAGE
macro = md.macroscopic_average(planar,lattice_vector/2,resolution_z)
fig, ax1 = plt.subplots(1, 1, sharex=True)
textsize = 22
mpl.rcParams['xtick.labelsize'] = textsize
mpl.rcParams['ytick.labelsize'] = textsize
mpl.rcParams['figure.figsize'] = (10, 6)
ax1.plot(planar,label="Planar",lw=3)
ax1.plot(macro,label="Macroscopic",lw=3)
ax1.set_xlim(0,len(planar))
ax1.set_facecolor((0.95,0.95,0.95))
ax1.grid(True)
ax1.legend(fontsize=22)
plt.show()
np.savetxt(output_file,macro)
dlnVa = (47.55 - 61.04) / 61.04 # ZnO
dlnVb = (76.88 - 61.04) / 61.04 # ZnS
VBO_natural = (-2.464 + 0.48 * dlnVa ) - (4.8288 + 0.83 * dlnVb) + 5.25
print('Natural offset: %3.1f eV' % VBO_natural)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The code below is usually set in the PlanarAverage.py file; you don't need to edit it
Step2: The code below will prompt you to say which axis you want to average along
Step3: Now we can plot the results
Step4: Get $\Delta V$
Step5: The deformation potential
Step6: As before, the value of $\Delta V$ is 5.25 V.
|
6,587
|
<ASSISTANT_TASK:>
Python Code:
# Imports assumed by this snippet: PID here is a module exposing the
# PID(P, I, D) / update() / output interface used below; note that
# scipy.interpolate.spline was removed in newer SciPy releases.
import time
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import spline
import PID

P = 1.2 # weight current errors more
I = 1
D = 0.0 # ignore future potential errors
L = 50 # number of iterations
pid = PID.PID(P, I, D)
pid.SetPoint=0.0
pid.setSampleTime(0.01)
END = L
feedback = 0
feedback_list = []
time_list = []
setpoint_list = []
for i in range(1, END):
pid.update(feedback)
output = pid.output
if pid.SetPoint > 0:
feedback += (output - (1/i))
if i>9:
pid.SetPoint = 1
time.sleep(0.02)
feedback_list.append(feedback)
setpoint_list.append(pid.SetPoint)
time_list.append(i)
time_sm = np.array(time_list)
time_smooth = np.linspace(time_sm.min(), time_sm.max(), 300)
feedback_smooth = spline(time_list, feedback_list, time_smooth)
plt.plot(time_smooth, feedback_smooth)
plt.plot(time_list, setpoint_list)
plt.xlim((0, L))
plt.ylim((min(feedback_list)-0.5, max(feedback_list)+0.5))
plt.xlabel('time (s)')
plt.ylabel('PID (PV)')
plt.title('TEST PID')
plt.ylim((1-0.5, 1+0.5))
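# How quickly does it converge for other gains? A small sweep (a sketch, not in
# the original notebook) reusing exactly the PID API exercised above, with the
# same (1/i) disturbance term:
for P_try in [0.6, 1.2, 2.0]:
    pid = PID.PID(P_try, I, D)
    pid.SetPoint = 0.0
    pid.setSampleTime(0.01)
    feedback, trace = 0, []
    for i in range(1, END):
        pid.update(feedback)
        if pid.SetPoint > 0:
            feedback += (pid.output - (1/i))
        if i > 9:
            pid.SetPoint = 1
        trace.append(feedback)
    plt.plot(trace, label='P=%.1f' % P_try)
plt.xlabel('time (s)')
plt.ylabel('PID (PV)')
plt.legend()
plt.show()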
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: How quickly does it converge?
|
6,588
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
people = graphlab.SFrame('people_wiki.gl/')
people.head()
len(people)
obama = people[people['name'] == 'Barack Obama']
obama
obama['text']
clooney = people[people['name'] == 'George Clooney']
clooney['text']
obama['word_count'] = graphlab.text_analytics.count_words(obama['text'])
print obama['word_count']
obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count'])
obama_word_count_table.head()
obama_word_count_table.sort('count',ascending=False)
people['word_count'] = graphlab.text_analytics.count_words(people['text'])
people.head()
tfidf = graphlab.text_analytics.tf_idf(people['word_count'])
# Earlier versions of GraphLab Create returned an SFrame rather than a single SArray
# This notebook was created using Graphlab Create version 1.7.1
if graphlab.version <= '1.6.1':
tfidf = tfidf['docs']
tfidf
people['tfidf'] = tfidf
obama = people[people['name'] == 'Barack Obama']
obama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
clinton = people[people['name'] == 'Bill Clinton']
beckham = people[people['name'] == 'David Beckham']
graphlab.distances.cosine(obama['tfidf'][0],clinton['tfidf'][0])
graphlab.distances.cosine(obama['tfidf'][0],beckham['tfidf'][0])
knn_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name')
knn_model.query(obama)
swift = people[people['name'] == 'Taylor Swift']
knn_model.query(swift)
jolie = people[people['name'] == 'Angelina Jolie']
knn_model.query(jolie)
arnold = people[people['name'] == 'Arnold Schwarzenegger']
knn_model.query(arnold)
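# One more retrieval query, reusing only the API already shown above (a sketch,
# not in the original notebook; assumes the article exists in this dump):
elton = people[people['name'] == 'Elton John']
knn_model.query(elton)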
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load some text data - from wikipedia, pages on people
Step2: Data contains
Step3: Explore the dataset and check out the text it contains
Step4: Exploring the entry for actor George Clooney
Step5: Get the word counts for Obama article
Step6: Sort the word counts for the Obama article
Step7: Sorting the word counts to show most common words at the top
Step8: Most common words include uninformative words like "the", "in", "and",...
Step9: Examine the TF-IDF for the Obama article
Step10: Words with highest TF-IDF are much more informative.
Step11: Is Obama closer to Clinton than to Beckham?
Step12: Build a nearest neighbor model for document retrieval
Step13: Applying the nearest-neighbors model for retrieval
Step14: As we can see, president Obama's article is closest to the one about his vice-president Biden, and those of other politicians.
|
6,589
|
<ASSISTANT_TASK:>
Python Code:
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%pylab inline
import matplotlib.pyplot as plt
plt.xkcd()
# if this is true, all images are saved to disk
global_print_flag = False
!mkdir tmp_figures
# Choose one of the two following data sets, the larger one gives better results, but might clutter the visualization depending on resolution
# !curl -O https://raw.githubusercontent.com/DJCordhose/ai/master/notebooks/scipy/data/insurance-customers-1500.csv
# !curl -O https://raw.githubusercontent.com/DJCordhose/ai/master/notebooks/scipy/data/insurance-customers-300.csv
import pandas as pd
# df = pd.read_csv('./insurance-customers-300.csv', sep=';')
df = pd.read_csv('./insurance-customers-1500.csv', sep=';')
# we deliberately decide this is going to be our label, it is often called lower case y
y=df['group']
# since 'group' is now the label we want to predict, we need to remove it from the training data
df.drop('group', axis='columns', inplace=True)
# input data often is named upper case X, the upper case indicates, that each row is a vector
X = df.as_matrix()
# ignore this, it is just technical code to plot decision boundaries
# Adapted from:
# http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html
# http://jponttuset.cat/xkcd-deep-learning/
from matplotlib.colors import ListedColormap
cmap_print = ListedColormap(['#AA8888', '#004000', '#FFFFDD'])
cmap_bold = ListedColormap(['#AA4444', '#006000', '#EEEE44'])
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#FFFFDD'])
font_size=25
title_font_size=40
def meshGrid(x_data, y_data):
h = 1 # step size in the mesh
x_min, x_max = x_data.min() - 1, x_data.max() + 1
y_min, y_max = y_data.min() - 1, y_data.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
return (xx,yy)
def plotPrediction(clf, x_data, y_data, x_label, y_label, ground_truth, title="",
mesh=True, fname=None, print=False):
xx,yy = meshGrid(x_data, y_data)
fig, ax = plt.subplots(figsize=(20,10))
if clf and mesh:
Z = clf.predict(np.c_[yy.ravel(), xx.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
ax.pcolormesh(xx, yy, Z, cmap=cmap_light)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
if print:
ax.scatter(x_data, y_data, c=ground_truth, cmap=cmap_print, s=200, marker='o', edgecolors='k')
else:
ax.scatter(x_data, y_data, c=ground_truth, cmap=cmap_bold, s=100, marker='o', edgecolors='k')
ax.set_xlabel(x_label, fontsize=font_size)
ax.set_ylabel(y_label, fontsize=font_size)
ax.set_title(title, fontsize=title_font_size)
if fname and global_print_flag:
fig.savefig('tmp_figures/'+fname)
def plot_keras_prediction(clf, x_data, y_data, x_label, y_label, ground_truth, title="",
mesh=True, fixed=None, fname=None, print=False):
xx,yy = meshGrid(x_data, y_data)
fig, ax = plt.subplots(figsize=(20,10))
if clf and mesh:
grid_X = np.array(np.c_[yy.ravel(), xx.ravel()])
if fixed:
fill_values = np.full((len(grid_X), 1), fixed)
grid_X = np.append(grid_X, fill_values, axis=1)
Z = clf.predict(grid_X)
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
ax.pcolormesh(xx, yy, Z, cmap=cmap_light)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
if print:
ax.scatter(x_data, y_data, c=ground_truth, cmap=cmap_print, s=200, marker='o', edgecolors='k')
else:
ax.scatter(x_data, y_data, c=ground_truth, cmap=cmap_bold, s=100, marker='o', edgecolors='k')
ax.set_xlabel(x_label, fontsize=font_size)
ax.set_ylabel(y_label, fontsize=font_size)
ax.set_title(title, fontsize=title_font_size)
if fname and global_print_flag:
fig.savefig('tmp_figures/'+fname)
from sklearn.model_selection import train_test_split
# using stratefy we get a balanced number of samples per category (important!)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42, stratify=y)
X_train.shape, y_train.shape, X_test.shape, y_test.shape
X_train_2_dim = X_train[:, :2]
X_test_2_dim = X_test[:, :2]
from sklearn.linear_model import LogisticRegression
lg_clf = LogisticRegression()
%time lg_clf.fit(X_train_2_dim, y_train)
plotPrediction(lg_clf, X_train_2_dim[:, 1], X_train_2_dim[:, 0],
'Age', 'Max Speed', y_train,
title="Train Data, Logistic Regression",
fname='logistic-regression-train.png')
lg_clf.score(X_train_2_dim, y_train)
plotPrediction(lg_clf, X_test_2_dim[:, 1], X_test_2_dim[:, 0],
'Age', 'Max Speed', y_test,
title="Test Data, Logistic Regression",
fname='logistic-regression-test.png')
lg_clf.score(X_test_2_dim, y_test)
# http://scikit-learn.org/stable/modules/cross_validation.html
from sklearn.model_selection import cross_val_score
# cross_val_score?
scores = cross_val_score(lg_clf, X_train_2_dim, y_train, n_jobs=-1)
scores
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading and exploring our data set
Step2: Logistic Regression using the one-vs-rest (OvR) scheme
Step3: Cross Validation splits the train data in different ways and performs a number of training runs (3 in this case)
|
6,590
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# spatial grid
nx = 41 # try changing from 41 to 81
dx = 2./(nx-1) #dx = delta x
nt = 20
dt = nt/1000. #dt = delta t
c = 1. # wavespeed
u = np.ones(nx)
u[.5/dx : 1./dx+1] = 2
print u
#visually
plt.plot(np.linspace(0, 2, nx), u);
un = np.ones(nx) #temporary n-th velocity array
for n in range(nt):
un = u.copy() # store the previous time-step's values
for i in range(1,nx):
#for i in range(nx): # next breakout question
u[i] = un[i] - c*(dt/dx)*(un[i] - un[i-1])
plt.plot(np.linspace(0, 2, nx), u);
def linearconv(nx):
dx = 2./(nx-1)
nt = 20
dt = 0.025
c = 1.
u = np.ones(nx)
u[.5/dx : 1/dx+1] = 2
un = np.ones(nx)
for n in range(nt):
un = u.copy()
for i in range(1,nx):
u[i] = un[i] - c*(dt/dx)*(un[i] - un[i-1])
plt.plot(np.linspace(0,2,nx), u);
linearconv(41) # 41 grid points, same as Step 1
linearconv(71)
linearconv(85)
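# Why does nx=85 blow up? With dt fixed at 0.025 the Courant number grows as
# dx shrinks -- a quick check (not in the original notebook):
for n in [41, 71, 85]:
    print n, 0.025 * (n - 1) / 2.   # C = c*dt/dx; stable only while C <= 1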
def linearconv(nx):
dx = 2./(nx-1)
nt = 20
c = 1.
sigma = .5 # for nx=41, we get sigma=0.5 from dt/dx = 0.025/(2./(nx-1))
dt = sigma*dx
u = np.ones(nx)
u[.5/dx : 1/dx+1] = 2
un = np.ones(nx)
for n in range(nt):
un = u.copy()
for i in range(1,nx):
u[i] = un[i] - c*(dt/dx)*(un[i] - un[i-1])
plt.plot(np.linspace(0,2,nx), u);
linearconv(85)
linearconv(201) # as we increase nx, our time window shortens due to a smaller dt
nx = 41
dx = 2./(nx-1)
nt = 20
nu = 0.3 #the value of viscosity
sigma = .2 # notice the different sigma value
dt = sigma*dx**2/nu
u = np.ones(nx)
u[.5/dx : 1/dx+1] = 2
un = np.ones(nx)
for n in range(nt):
un = u.copy()
for i in range(1,nx-1):
u[i] = un[i] + nu*dt/dx**2*(un[i+1]-2*un[i]+un[i-1])
plt.plot(np.linspace(0,2,nx), u);
import numpy as np
import sympy
from sympy import init_printing
init_printing(use_latex=True) # output to be rendered as LaTeX
x,nu,t = sympy.symbols('x nu t')
phi = sympy.exp(-(x-4*t)**2/(4*nu*(t+1))) + sympy.exp(-(x-4*t-2*np.pi)**2/(4*nu*(t+1)))
phi
phiprime = phi.diff(x)
phiprime
print phiprime # shows Pythonic version
from sympy.utilities.lambdify import lambdify
u = -2*nu*(phiprime/phi)+4
# we are sending variables to a function
ufunc = lambdify((t, x, nu), u)
print ufunc(1,4,3)
###variable declarations
nx = 101
nt = 100
dx = 2*np.pi/(nx-1)
nu = .07
dt = dx*nu
x = np.linspace(0, 2*np.pi, nx)
un = np.empty(nx)
t = 0
u = np.asarray([ufunc(t, x0, nu) for x0 in x])
u
plt.figure(figsize=(11,7), dpi=100)
plt.plot(x, u, marker='o', lw=2)
plt.xlim([0,2*np.pi])
plt.ylim([0,10]);
for n in range(nt):
un = u.copy()
for i in range(nx-1):
u[i] = un[i] - un[i] * dt/dx *(un[i] - un[i-1]) + nu*dt/dx**2*\
(un[i+1]-2*un[i]+un[i-1])
u[-1] = un[-1] - un[-1] * dt/dx * (un[-1] - un[-2]) + nu*dt/dx**2*\
(un[0]-2*un[-1]+un[-2])
u_analytical = np.asarray([ufunc(nt*dt, xi, nu) for xi in x])
plt.figure(figsize=(11,7), dpi=100)
plt.plot(x,u, marker='o', lw=2, label='Computational')
plt.plot(x, u_analytical, label='Analytical')
plt.xlim([0,2*np.pi])
plt.ylim([0,10])
plt.legend();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We need to give an initial wave which is a function of $x$ (remember, $u(x,0)=u_0(x)$). We can easily choose a step-function for the velocity
Step2: Breakout
Step3: Note
Step4: Breakout
Step5: Breakout
Step6: Shown above, this no longer looks like our original step-function wave. What happened? We broke stability: the wave travels farther in one time step, $\Delta t$, than one spatial step, $dx$. To avoid this, we need to enforce the stability (CFL) condition
Step7: Step 4
Step8: Step 5
Step9: Continued
Step10: This is a "saw-tooth function" to which we have applied our periodic boundary conditions. Steps 1 & 2 continually move the plot off the screen, but with periodic boundary conditions, it will wrap around to the front again.
|
6,591
|
<ASSISTANT_TASK:>
Python Code:
import rebound
import numpy as np
def setupSimulation():
sim = rebound.Simulation()
sim.add(m=1., hash="Sun")
sim.add(x=0.4,vx=5., hash="Mercury")
sim.add(a=0.7, hash="Venus")
sim.add(a=1., hash="Earth")
sim.move_to_com()
return sim
sim = setupSimulation()
sim.status()
sim = setupSimulation() # Resets everything
sim.exit_max_distance = 50.
Noutputs = 1000
times = np.linspace(0,20.*2.*np.pi,Noutputs)
xvenus, yvenus = np.zeros(Noutputs), np.zeros(Noutputs)
for i,time in enumerate(times):
try:
sim.integrate(time)
except rebound.Escape as error:
print(error)
for j in range(sim.N):
p = sim.particles[j]
d2 = p.x*p.x + p.y*p.y + p.z*p.z
if d2>sim.exit_max_distance**2:
index=j # cache index rather than remove here since our loop would go beyond end of particles array
sim.remove(index=index)
xvenus[i] = sim.particles[2].x
yvenus[i] = sim.particles[2].y
print("Went down to {0} particles".format(sim.N))
%matplotlib inline
import matplotlib.pyplot as plt
fig,ax = plt.subplots(figsize=(15,5))
ax.plot(xvenus, yvenus)
ax.set_aspect('equal')
ax.set_xlim([-2,10]);
sim = setupSimulation() # Resets everything
sim.exit_max_distance = 50.
Noutputs = 1000
times = np.linspace(0,20.*2.*np.pi,Noutputs)
xvenus, yvenus = np.zeros(Noutputs), np.zeros(Noutputs)
for i,time in enumerate(times):
try:
sim.integrate(time)
except rebound.Escape as error:
print(error)
for j in range(sim.N):
p = sim.particles[j]
d2 = p.x*p.x + p.y*p.y + p.z*p.z
if d2>sim.exit_max_distance**2:
index=j # cache index rather than remove here since our loop would go beyond end of particles array
sim.remove(index=index)
xvenus[i] = sim.get_particle_by_hash("Venus").x
yvenus[i] = sim.get_particle_by_hash("Venus").y
fig,ax = plt.subplots(figsize=(15,5))
ax.plot(xvenus, yvenus)
ax.set_aspect('equal')
ax.set_xlim([-2,10]);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's run a simulation for 20 years (in default units where $G=1$, and thus AU, yr/2$\pi$, and $M_\odot$, see Units.ipynb for how to change units), and set up a 50 AU sphere beyond which we remove particles from the simulation. We can do this by setting the exit_max_distance flag of the simulation object. If a particle's distance (from the origin of whatever inertial reference frame chosen) exceeds sim.exit_max_distance, an exception is thrown.
Step2: So this worked as expected. Now let's plot what we got
Step3: This doesn't look right. The problem here is that when we removed particles[1] from the simulation, all the particles got shifted down in the particles array. So following the removal, xvenus all of a sudden started getting populated by the values for Earth (the new sim.particles[2]). A more robust way to access particles is using hashes (see UniquelyIdentifyingParticles.ipynb)
|
6,592
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
from IPython.display import Image
from mpl_toolkits.mplot3d import Axes3D
import track_simulator
from astropy.io import fits
import aplpy
#Disable astropy/aplpy logging
import logging
logger0 = logging.getLogger('astropy')
logger0.setLevel(logging.CRITICAL)
logger1 = logging.getLogger('aplpy')
logger1.setLevel(logging.CRITICAL)
HTML('../style/code_toggle.html')
NO_ANTENNA = 4
NO_BASELINES = NO_ANTENNA * (NO_ANTENNA - 1) / 2 + NO_ANTENNA
CENTRE_CHANNEL = 1e9 / 299792458 #Wavelength of 1 GHz
#Create a perfectly planar array with both a perfectly East-West baseline and 2 2D baselines
ENU_2d = np.array([[5,0,0],
[-5,0,0],
[10,0,0],
[0,23,0]]);
ENU_ew = np.array([[5,0,0],
[-5,0,0],
[10,0,0],
[0,0,0]]);
ARRAY_LATITUDE = 30 #Equator->North
ARRAY_LONGITUDE = 0 #Greenwitch->East, prime -> local meridian
fig = plt.figure(figsize=(10, 5))
ax=fig.add_subplot(121)
ax.set_title("2D Array")
ax.plot(ENU_2d[:,0],ENU_2d[:,1],"ko")
ax.set_xlabel("East")
ax.set_ylabel("North")
ax.set_xlim(-30,30)
ax.set_ylim(-30,30)
ax=fig.add_subplot(122)
ax.set_title("East-west array")
ax.plot(ENU_ew[:,0],ENU_ew[:,1],"ko")
ax.set_xlabel("East")
ax.set_ylabel("North")
ax.set_xlim(-30,30)
ax.set_ylim(-30,30)
plt.show()
DECLINATION = 0
T_OBS = 12
T_INT = 1/60.0
uv_2hr_2d = track_simulator.sim_uv(0.0,DECLINATION,T_OBS,T_INT,ENU_2d,ARRAY_LATITUDE,False)/CENTRE_CHANNEL
uv_2hr_ew = track_simulator.sim_uv(0.0,DECLINATION,T_OBS,T_INT,ENU_ew,ARRAY_LATITUDE,False)/CENTRE_CHANNEL
fig = plt.figure(figsize=(10, 5))
ax=fig.add_subplot(121)
ax.set_title("2D Array")
ax.plot(uv_2hr_2d[:,0],uv_2hr_2d[:,1],'k.')
ax.set_xlabel("u")
ax.set_ylabel("v")
ax.set_xlim(-10,10)
ax.set_ylim(-10,10)
ax=fig.add_subplot(122)
ax.set_title("East-west Array")
ax.plot(uv_2hr_ew[:,0],uv_2hr_ew[:,1],'k.')
ax.set_xlabel("u")
ax.set_ylabel("v")
ax.set_xlim(-10,10)
ax.set_ylim(-10,10)
plt.show()
DECLINATION = 45
T_INT = 1/60.0
T_OBS = 12
uvw_2d = track_simulator.sim_uv(0.0,DECLINATION,T_OBS,T_INT,ENU_2d,ARRAY_LATITUDE,False)/CENTRE_CHANNEL
uvw_ew = track_simulator.sim_uv(0.0,DECLINATION,T_OBS,T_INT,ENU_ew,ARRAY_LATITUDE,False)/CENTRE_CHANNEL
fig=plt.figure(figsize=(10, 5))
ax=fig.add_subplot(121,projection='3d')
ax.set_title("2D Array")
ax.view_init(elev=10, azim=160)
ax.plot(uvw_2d[:,0],uvw_2d[:,1],uvw_2d[:,2],'k.')
ax.set_xlabel("u")
ax.set_ylabel("v")
ax.set_zlabel("w")
ax=fig.add_subplot(122,projection='3d')
ax.set_title("East-west array")
ax.view_init(elev=10, azim=160)
ax.plot(uvw_ew[:,0],uvw_ew[:,1],uvw_ew[:,2],'k.')
ax.set_xlabel("u")
ax.set_ylabel("v")
ax.set_zlabel("w")
plt.show()
fig = plt.figure(figsize=(10, 5))
ax=fig.add_subplot(121)
ax.set_title("2D Array")
ax.plot(uvw_2d[:,0],uvw_2d[:,1],'k.')
ax.set_xlabel("u")
ax.set_ylabel("v")
ax=fig.add_subplot(122)
ax.set_title("East-west array")
ax.plot(uvw_ew[:,0],uvw_ew[:,1],'k.')
ax.set_xlabel("u")
ax.set_ylabel("v")
plt.show()
Image(filename="figures/tilted_interferometer.png")
Image(filename="figures/vla_uncorrected.png")
Image(filename="figures/vla_wproj.png")
gc1 = aplpy.FITSFigure('../data/fits/wterm/MeerKAT_6h60s_dec-30_10MHz_10chans_uniform_n3000_w0-image.fits')
cpx = gc1.pixel2world(256, 256)
gc1.recenter(cpx[0], cpx[1], radius=0.2)
gc1.show_colorscale(vmin=-0.2, vmax=1., cmap='viridis')
gc1.hide_axis_labels()
gc1.hide_tick_labels()
plt.title('MeerKAT Observation (Not Corrected)')
gc1.add_colorbar()
gc1 = aplpy.FITSFigure('../data/fits/wterm/MeerKAT_6h60s_dec-30_10MHz_10chans_uniform_n3000-image.fits')
cpx = gc1.pixel2world(256, 256)
gc1.recenter(cpx[0], cpx[1], radius=0.2)
gc1.show_colorscale(vmin=-0.2, vmax=1., cmap='viridis')
gc1.hide_axis_labels()
gc1.hide_tick_labels()
plt.title('MeerKAT Observation (W-Corrected)')
gc1.add_colorbar()
Image(filename="figures/coplanar-faceting.png")
Image(filename="figures/non-coplanar-faceting.png")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import section-specific modules
Step2: 5.5 The Break Down of the Small Angle Approximation and the W-Term
Step3: Figure
Step4: Figure
Step5: Figure
Step6: Figure
Step7: Figure
Step8: Figure
Step9: Figure
Step10: Figure
Step11: Figure
|
6,593
|
<ASSISTANT_TASK:>
Python Code:
x = [1,2,3]
y = [4,5,6]
# Zip the lists together
zip(x,y)
x = [1,2,3]
y = [4,5,6,7,8]
# Zip the lists together
zip(x,y)
d1 = {'a':1,'b':2}
d2 = {'c':4,'d':5}
zip(d1,d2)
zip(d2,d1.itervalues())
def switcharoo(d1,d2):
dout = {}
for d1key,d2val in zip(d1,d2.itervalues()):
dout[d1key] = d2val
return dout
switcharoo(d1,d2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note how tuples are returned. What if one iterable is longer than the other?
Step2: Note how the zip is defined by the shortest iterable's length. It's generally advisable not to zip unequal-length iterables unless you're very sure you only need partial tuple pairings.
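If you actually need pairings padded out to the longer iterable, the standard library provides one; a minimal sketch (assumes Python 3, where it is named zip_longest; on Python 2 use itertools.izip_longest):
from itertools import zip_longest
x = [1,2,3]
y = [4,5,6,7,8]
list(zip_longest(x, y))  # pads the shorter list with None
# [(1, 4), (2, 5), (3, 6), (None, 7), (None, 8)]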
Step3: This makes sense, because simply iterating through the dictionaries yields just the keys. We would have to call methods to mix keys and values.
Step4: Great! Finally, let's use zip to switch the keys and values of the two dictionaries.
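The same swap can also be written as a one-line dict comprehension; a sketch (kept in Python 2 style to match the itervalues() call above):
dout = {d1key: d2val for d1key, d2val in zip(d1, d2.itervalues())}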
|
6,594
|
<ASSISTANT_TASK:>
Python Code:
%%R
m <- prophet(df, interval.width = 0.95)
forecast <- predict(m, future)
forecast = Prophet(interval_width=0.95).fit(df).predict(future)
%%R
m <- prophet(df, mcmc.samples = 300)
forecast <- predict(m, future)
m = Prophet(mcmc_samples=300)
forecast = m.fit(df).predict(future)
%%R -w 9 -h 6 -u in
prophet_plot_components(m, forecast)
fig = m.plot_components(forecast)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Again, these intervals assume that the future will see the same frequency and magnitude of rate changes as the past. This assumption is probably not true, so you should not expect to get accurate coverage on these uncertainty intervals.
Step2: This replaces the typical MAP estimation with MCMC sampling, and it can take much longer depending on how many observations there are; expect several minutes instead of several seconds. If you do full sampling, then you will see the uncertainty in the seasonal components when you plot them.
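With full sampling enabled you can also pull the raw posterior predictive draws instead of just the summarized intervals; a sketch (assumes the Python API's predictive_samples method, which returns per-row arrays of draws):
samples = m.predictive_samples(future)
samples['yhat'].shape  # (number of future rows, number of draws)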
|
6,595
|
<ASSISTANT_TASK:>
Python Code:
%%bigquery df
WITH rawnumbers AS (
SELECT
departure_delay,
COUNT(1) AS num_flights,
COUNTIF(arrival_delay < 15) AS num_ontime
FROM
`bigquery-samples.airline_ontime_data.flights`
GROUP BY
departure_delay
HAVING
num_flights > 100
),
totals AS (
SELECT
SUM(num_flights) AS tot_flights,
SUM(num_ontime) AS tot_ontime
FROM rawnumbers
),
bayes AS (
SELECT
departure_delay,
num_flights / tot_flights AS prob_D,
num_ontime / tot_ontime AS prob_D_theta,
tot_ontime / tot_flights AS prob_theta
FROM
rawnumbers, totals
WHERE
num_ontime > 0
)
SELECT
*, (prob_theta * prob_D_theta / prob_D) AS prob_ontime
FROM
bayes
ORDER BY
departure_delay ASC
df.plot(x='departure_delay', y='prob_ontime');
%%bigquery df
SELECT
departure_delay,
COUNT(1) AS num_flights,
APPROX_QUANTILES(arrival_delay, 10) AS arrival_delay_deciles
FROM
`bigquery-samples.airline_ontime_data.flights`
GROUP BY
departure_delay
HAVING
num_flights > 100
ORDER BY
departure_delay ASC
import pandas as pd

# Expand the list-valued decile column into one column per decile
# (APPROX_QUANTILES with 10 buckets returns 11 values: 0%..100%),
# then rename the columns for readability.
percentiles = df['arrival_delay_deciles'].apply(pd.Series)
percentiles = percentiles.rename(columns = lambda x : str(x*10) + "%")
df = pd.concat([df['departure_delay'], percentiles], axis=1)
df.head()
without_extremes = df.drop(['0%', '100%'], axis=1)
without_extremes.plot(x='departure_delay', xlim=(-30,50), ylim=(-50,50));
%%bigquery
CREATE OR REPLACE MODEL ch09eu.bicycle_model_dnn
OPTIONS(input_label_cols=['duration'],
model_type='dnn_regressor', hidden_units=[32, 4])
TRANSFORM(
duration
, start_station_name
, CAST(EXTRACT(dayofweek from start_date) AS STRING)
as dayofweek
, CAST(EXTRACT(hour from start_date) AS STRING)
as hourofday
)
AS
SELECT
duration, start_station_name, start_date
FROM
`bigquery-public-data`.london_bicycles.cycle_hire
%%bigquery
SELECT * FROM ML.EVALUATE(MODEL ch09eu.bicycle_model_dnn)
%%bigquery
SELECT * FROM ML.PREDICT(MODEL ch09eu.bicycle_model_dnn,(
SELECT
'Park Street, Bankside' AS start_station_name
,CURRENT_TIMESTAMP() AS start_date
))
%%bigquery
CREATE OR REPLACE MODEL advdata.txtclass_tf
OPTIONS (model_type='tensorflow',
model_path='gs://cloud-training-demos/txtclass/export/exporter/1549825580/*')
%%bigquery
SELECT
input,
(SELECT AS STRUCT(p, ['github', 'nytimes', 'techcrunch'][ORDINAL(s)]) prediction FROM
(SELECT p, ROW_NUMBER() OVER() AS s FROM
(SELECT * FROM UNNEST(dense_1) AS p))
ORDER BY p DESC LIMIT 1).*
FROM ML.PREDICT(MODEL advdata.txtclass_tf,
(
SELECT 'Unlikely Partnership in House Gives Lawmakers Hope for Border Deal' AS input
UNION ALL SELECT "Fitbit\'s newest fitness tracker is just for employees and health insurance members"
UNION ALL SELECT "Show HN: Hello, a CLI tool for managing social media"
))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: But is it right, though? What's with the weird hump for early departures (departure_delay less than zero)?
Step2: Note the crazy non-linearity for the top half of the flights that leave more than 20 minutes early. Most likely, these are planes that try to beat some weather situation. About half of such flights succeed (the linear bottom) and the other half don't (the non-linear top). The average is what we saw as the weird hump in the probability plot. So yes, the hump is real. The rest of the distribution is clear-cut, and the Bayes probabilities are quite valid.
Step3: BigQuery and TensorFlow
|
6,596
|
<ASSISTANT_TASK:>
Python Code:
import gambit
g = gambit.Game.new_tree()
g.title = "A simple poker example"
alice = g.players.add("Alice")
bob = g.players.add("Bob")
move = g.root.append_move(g.players.chance, 2)
move.actions[0].label = "A"
move.actions[0].prob = gambit.Rational(1, 2)
move.actions[1].label = "K"
move.actions[1].prob = gambit.Rational(1, 2)
move = g.root.children[0].append_move(alice, 2)
move.label = 'a'
move.actions[0].label = "R"
move.actions[1].label = "F"
move = g.root.children[1].append_move(alice, 2)
move.label = 'k'
move.actions[0].label = "R"
move.actions[1].label = "F"
move = g.root.children[0].children[0].append_move(bob, 2)
move.label = 'b'
move.actions[0].label = "M"
move.actions[1].label = "P"
g.root.children[1].children[0].append_move(move)
g
alice_big = g.outcomes.add("Alice wins big")
alice_big[0] = 2
alice_big[1] = -2
alice_small = g.outcomes.add("Alice wins")
alice_small[0] = 1
alice_small[1] = -1
bob_small = g.outcomes.add("Bob wins")
bob_small[0] = -1
bob_small[1] = 1
bob_big = g.outcomes.add("Bob wins big")
bob_big[0] = -2
bob_big[1] = 2
g.root.children[0].children[0].children[0].outcome = alice_big
g.root.children[0].children[0].children[1].outcome = alice_small
g.root.children[0].children[1].outcome = bob_small
g.root.children[1].children[0].children[0].outcome = bob_big
g.root.children[1].children[0].children[1].outcome = alice_small
g.root.children[1].children[1].outcome = bob_small
g
gambit.Game.read_game("poker.efg")
g.write() == gambit.Game.read_game("poker.efg").write()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: One can build up extensive games from scratch and manipulate them. This example shows one way to do that for the simple one-card poker game.
Step2: Define the players; save them as variables for convenience later.
Step3: The root node is a chance move, between Ace and King. Chance moves default to uniform randomisation, but I set the probabilities here anyway just for explicitness.
Step4: After an Ace, Alice can Raise or Fold.
Step5: She can also Raise or Fold after the King. Calling append_move with the player, rather than the previously created move, creates a new information set for Alice.
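A quick sanity check that Alice now has two distinct information sets; a sketch (assumes the infosets property of the Python API's Player object):
len(alice.infosets)   # expect 2 at this point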
Step6: After Alice raises with the Ace, Bob can Meet or Pass.
Step7: Likewise after Alice raises with the King. Using the same move here adds the move to the same information set for Bob.
Step8: The game so far, as an .efg file. We see the tree structure is in place; now to deal with payoffs.
Step9: Attach the outcomes to the corresponding terminal nodes. Notice how we can re-use the outcomes to denote that different terminal nodes may correspond to the same logical outcome.
Step10: Here's the game we've just built
Step11: For comparison, here's the game we started with for the Poker example.
Step12: We can confirm they're the same
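As a follow-up, the rebuilt game can be saved to its own .efg file using the same write() serialization shown above; the filename here is arbitrary:
with open("poker_rebuilt.efg", "w") as f:
    f.write(g.write())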
|
6,597
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd  # needed below for pd.DataFrame(edges)
import graphistry
# To specify Graphistry account & server, use:
# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
# For more options, see https://github.com/graphistry/pygraphistry#configure
edges = [{'src': 0, 'dst': 1}, {'src': 1, 'dst': 0}]
g = graphistry.edges(pd.DataFrame(edges)).bind(source='src', destination='dst').settings(url_params={'play': 1000})
url = g.plot(render=False)
url
json_data = {
"name": "myUniqueGraphName",
"type": "edgelist",
"bindings": {
"sourceField": "src",
"destinationField": "dst",
"idField": "node"
},
"graph": [
{"src": "myNode1", "dst": "myNode2",
"myEdgeField1": "I'm an edge!", "myCount": 7},
{"src": "myNode2", "dst": "myNode3",
"myEdgeField1": "I'm also an edge!", "myCount": 200}
],
"labels": [
{"node": "myNode1",
"myNodeField1": "I'm a node!",
"pointColor": 5},
{"node": "myNode2",
"myNodeField1": "I'm a node too!",
"pointColor": 4},
{"node": "myNode3",
"myNodeField1": "I'm a node three!",
"pointColor": 4}
]
}
import json
with open('./data/samplegraph.json', 'w') as outfile:
json.dump(json_data, outfile)
! curl -H "Content-type: application/json" -X POST -d @./data/samplegraph.json https://labs.graphistry.com/etl?key=YOUR_API_KEY_HERE
from IPython.display import HTML, display
#skip splash screen
url = url.replace('splashAfter', 'zzz')
display(HTML('<iframe src="' + url + '" style="width: 100%; height: 400px"></iframe>'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Backend APIs
Step2: REST
Step3: 2. Frontend APIs
|
6,598
|
<ASSISTANT_TASK:>
Python Code:
import os, sys
currentdir = os.getcwd(); os.getcwd();
currentdir # I'm on a different computer now
sys.path.append('/home/topolo/PropD/Propulsion/')
import Physique
from Physique import FundConst
print Physique.FundConst.columns
Physique.FundConst
g_0pd = FundConst[ FundConst["Quantity"].str.contains("gravity") ]
# standard acceleration of gravity as a panda DataFrame
g_0pd
# access the values you're interested in
print g_0pd.Quantity
print g_0pd.Value.get_values()[0]
print g_0pd.Unit.get_values()[0]
# you can also grab just the 1 entry from this DataFrame using the .loc module
FundConst[FundConst["Quantity"].str.contains("Boltzmann")].loc[49,:]
g_0pd.loc[303,:]
convDF = Physique.conv
convDF.columns
convDF[convDF['Toconvertfrom'].str.contains("pound-force ")]
convDF[convDF['to'].str.contains("newton ")]
lbf2N = convDF.loc[340,:]; lbf2N
print lbf2N.Toconvertfrom, lbf2N.to, lbf2N.Multiplyby
print 470000*lbf2N.Multiplyby, lbf2N.to
convDF[convDF['Toconvertfrom'].str.match("psi")]
psi2Pa = convDF.loc[372,:]
print 3028*psi2Pa.Multiplyby, psi2Pa.to
convDF[convDF['Toconvertfrom'].str.match("atm")]
atm2Pa = convDF.loc[15,:]
print 3028*psi2Pa.Multiplyby/atm2Pa.Multiplyby, atm2Pa.Toconvertfrom
JPL_SSD_URL = "http://ssd.jpl.nasa.gov/" # JPL NASA Solar System Dynamics webpage
jpl_ssd_BS = Physique.scrape_BS.scraped_BS(JPL_SSD_URL)
# for table in jpl_ssd_BS.soup.find_all("table"):
# for subtable in table.find_all("table"):
# print subtable.find("table") # uncomment this out and run it to see the whole darn thing
jpl_ssd_BS.soup.find('img',{"alt":"PHYSICAL DATA"}).parent['href']
JPL_SSD_PHYS_DATA_URL = JPL_SSD_URL + jpl_ssd_BS.soup.find('img',{"alt":"PHYSICAL DATA"}).parent['href'][1:]
JPL_SSD_PHYS_DATA_URL
jpl_ssd_phys_data_BS = Physique.scrape_BS.scraped_BS(JPL_SSD_PHYS_DATA_URL)
jpl_ssd_phys_data_BS.soup.find('h2',text="Planets").find_next('a')
JPL_SSD_PLANET_PHYS_PAR_URL = JPL_SSD_URL + jpl_ssd_phys_data_BS.soup.find('h2',text="Planets").find_next('a')['href']
jpl_ssd_planet_phys_par_BS = Physique.scrape_BS.scraped_BS(JPL_SSD_PLANET_PHYS_PAR_URL)
jpl_ssd_planet_phys_parTBL = jpl_ssd_planet_phys_par_BS.soup.find("div", {"class":"page_title"}).find_next("table")
data = []
for row in jpl_ssd_planet_phys_parTBL.find_all('tr', recursive=False):
cols = row.find_all('td', recursive=False)
cols = [ele.text if ele.text != u'\xa0' else u'' for ele in cols]
data.append(cols)
hdrs = data[:2] # get the headers first
jpl_ssd_planet_phys_parTBL.find_all('tr')[2].find_all('td')[18].text
data = [[row[0].replace(u'\xa0',''),]+row[1:] for row in data[2:]] # remove the space, \xa0 from each of the planet's names
data = [[row[0],]+[ele.replace('\n','') for ele in row[1:]] for row in data] # remove the '\n' strings
data = [[row[0],]+[ele.split(u'\xb1')[0] for ele in row[1:]] for row in data] # let's just get the values
data = [[row[0],]+[ele.split(u'\xa0')[0] for ele in row[1:]] for row in data] # let's just get the values
data = [hdrs[1],] + data
import pandas as pd
data = pd.DataFrame( data )
data.columns = hdrs[0]
data
data.to_pickle('./rawdata/JPL_NASA_SSD_Planet_Phys_Par_values.pkl') # values only
PlanetParDF = pd.read_pickle('./rawdata/JPL_NASA_SSD_Planet_Phys_Par_values.pkl')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get the current directory
Step2: Then append the directory containing the Physique package/library (it's just a folder) with sys.path.append; the absolute path for where I placed it just happened to be "/Users/ernestyeung/Documents/TeslaModelSP85D"
Step3: Programming note
Step4: Find a Fundamental Constant you are interested in, using the usual pandas modules
Step5: NIST Official Conversions (to metric)
Step6: From the list of columns, search for the quantity you desired by trying out different search terms
Step7: Or we can look up the SI unit we want to convert to.
Step8: Look at what you want and see the index; it happens to be 340 in this example.
Step9: Then the attributes can be accessed by the column names.
Step10: So for example, the reusable SSME delivers a vacuum thrust of 470000 lb or
Step11: To obtain the conversion for pressure in psia, which we search for with "psi"
Step12: So for a chamber pressure of 3028 psia for the SSME,
Step13: Also, get the conversion for atmospheres (atm)
Step14: Webscraping example
Step15: Take a look at it with the usual BeautifulSoup modules (i.e. functions).
Step16: Let's just focus on the Physical Data subpage for today. This is the way to find a specific tag (in this case img) with a specific attribute (in this case alt="PHYSICAL DATA"), and then the parent module gets its parent. Then the href index in the square brackets [] gets the web address we desire.
Step17: At this point, I wish there were a rational and civilized manner to scrape all the relevant quantitative data from here for all the links (using Scrapy?), but I need help for that endeavor. Otherwise, I manually look at the webpage itself, use Inspect Element to find what I want, and then use BeautifulSoup accordingly.
Step18: Time to scrape the actual html code for the table we desire
Step19: I'll add the units back as part of the data (I don't know a sane and civilized way of attaching the units to each of the column names of a pandas DataFrame as extra metadata)
Step20: Time to save our work as a "pickle'd" pandas DataFrame.
Step21: To load this back into Python, do the following, using pandas' .read_pickle
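A quick sanity check on the reloaded table might look like this; a sketch (Python 2 print syntax to match the rest of this notebook, with column labels coming from the scraped header row above):
print PlanetParDF.columns
PlanetParDF.head()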
|
6,599
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import numpy as np
np.random.seed(10)
A = tf.constant(np.random.randint(100,size=(5, 3)))
def g(A):
return tf.reduce_prod(A, 1)
result = g(A.__copy__())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
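Step1: Seed NumPy for reproducibility, build a 5x3 tensor of random integers, and compute the product of the elements along each row of A with tf.reduce_prod.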
|