| text_prompt | code_prompt |
|---|---|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load features for training
Step2: Load targets for training
Step3: Load features for testing
Step4: Load targets for testing
Step5: Unpack the Pandas DataFrames to NumPy arrays
Step7: Task I
Step9: Selecting Hyperparameters
Step10: Analysis
Step12: Task III
Step13: Now, with this model, let's try to optimize the regularization factor $\lambda$. This adjusts the strength of the regularizer.
|
<ASSISTANT_TASK:>
Python Code:
# Plots will be shown inside the notebook
%matplotlib notebook
import matplotlib.pyplot as plt
# High-level package for creating and training artificial neural networks
import keras
# NumPy is a package for manipulating N-dimensional array objects
import numpy as np
# Pandas is a data analysis package
import pandas as pd
import admin.tools as tools
import problem_unittests as tests
train_features = tools.load_csv_with_dates('resources/bike_training_features.csv', 'dteday')
train_targets = tools.load_csv_with_dates('resources/bike_training_targets.csv', 'dteday')
test_features = tools.load_csv_with_dates('resources/bike_test_features.csv', 'dteday')
test_targets = tools.load_csv_with_dates('resources/bike_test_targets.csv', 'dteday')
test_dates = test_targets.index.strftime('%b %d')
print('\n', test_targets.head(n=5))
# Unpack features
X_train = train_features.values
X_test = test_features.values
# Unpack targets
y_train = train_targets['cnt'].values
y_test = test_targets['cnt'].values
# Record number of inputs and outputs
nb_features = X_train.shape[1]
nb_outputs = 1
# Import what we need
from keras.layers import (Input, Dense)
from keras.models import Model
def simple_model(nb_inputs, nb_outputs):
"""Return a Keras Model."""
model = None
return model
### Do *not* modify the following line ###
# Test and see that the model has been created correctly
tests.test_simple_model(simple_model)
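# For orientation, a minimal model of the kind this task asks for might look
# like the sketch below, built with the functional API imported above. The
# hidden width of 8 and the sigmoid activation are illustrative assumptions,
# not the graded solution.
def simple_model_sketch(nb_inputs, nb_outputs):
    inputs = Input(shape=(nb_inputs,))
    hidden = Dense(8, activation='sigmoid')(inputs)
    outputs = Dense(nb_outputs)(hidden)
    return Model(inputs=inputs, outputs=outputs)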
# Do not modify the following code. It is to be used as a reference for future tasks.
# Create a simple model
model = simple_model(nb_features, nb_outputs)
#
# Define hyperparameters
#
lr = 0.2
nb_epochs = 10
batch_size = 10
# Fraction of the training data held as a validation set
validation_split = 0.1
# Define optimiser
optimizer = keras.optimizers.SGD(lr=lr)
# Compile model, use mean squared error
model.compile(loss='mean_squared_error', optimizer=optimizer)
# Print model
model.summary()
# Train and record history
logs = model.fit(X_train, y_train,
batch_size=batch_size,
epochs=nb_epochs,
validation_split=validation_split,
verbose=2)
# Plot the error
fig, ax = plt.subplots(1,1)
pd.DataFrame(logs.history).plot(ax=ax)
ax.grid(linestyle='dotted')
ax.legend()
plt.show()
# Estimation on unseen data can be done using the `predict()` function, e.g.:
_y = model.predict(X_test)
# Create a simple model
model = None
#
# Define hyperparameters
#
lr = 0.2
nb_epochs = 10
batch_size = 10
# Fraction of the training data held as a validation set
validation_split = 0.1
# Define optimiser
# Compile model, use mean squared error
### Do *not* modify the following lines ###
# Print model
model.summary()
# Train our network and do live plots of loss
tools.assess_multivariate_model(model, X_train, y_train, X_test, y_test,
test_dates, nb_epochs, batch_size,
validation_split
)
# Import what we need
from keras import regularizers
def simple_model_l2(nb_inputs, nb_outputs, reg_factor):
"""Return an L2 regularized Keras Model."""
model = None
return model
### Do *not* modify the following line ###
# Test and see that the model has been created correctly
tests.test_simple_model_regularized(simple_model_l2)
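# A sketch of how the L2 penalty might be attached, assuming it is applied to
# the hidden layer's weights through Keras' kernel_regularizer argument (the
# layer width of 8 is again an illustrative assumption, not the graded
# solution):
def simple_model_l2_sketch(nb_inputs, nb_outputs, reg_factor):
    inputs = Input(shape=(nb_inputs,))
    hidden = Dense(8, activation='sigmoid',
                   kernel_regularizer=regularizers.l2(reg_factor))(inputs)
    outputs = Dense(nb_outputs)(hidden)
    return Model(inputs=inputs, outputs=outputs)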
# Regularization factor (lambda)
reg_factor = 0.005
# Create a simple model
model = None
#
# Define hyperparameters
#
lr = 0.0005
nb_epochs = 100
batch_size = 128
reg_factor = 0.0005
# Fraction of the training data held as a validation set
validation_split = 0.1
# Define optimiser
# Compile model, use mean squared error
### Do *not* modify the following lines ###
# Print model
model.summary()
# Train our network and do live plots of loss
tools.assess_multivariate_model(model, X_train, y_train, X_test, y_test,
test_dates, nb_epochs, batch_size,
validation_split
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 1 - Gradient Descent
Step4: Expected Output
Step6: Expected Output
Step8: Expected Output
Step10: Expected Output
Step12: Expected Output
Step13: Expected Output
Step15: We have already implemented a 3-layer neural network. You will train it with
Step16: You will now run this 3 layer neural network with each of the 3 optimization methods.
Step17: 5.2 - Mini-batch gradient descent with momentum
Step18: 5.3 - Mini-batch with Adam mode
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
import math
import sklearn
import sklearn.datasets
from opt_utils import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation
from opt_utils import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# GRADED FUNCTION: update_parameters_with_gd
def update_parameters_with_gd(parameters, grads, learning_rate):
"""Update parameters using one step of gradient descent
Arguments:
parameters -- python dictionary containing your parameters to be updated:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients to update each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
learning_rate -- the learning rate, scalar.
Returns:
parameters -- python dictionary containing your updated parameters
"""
L = len(parameters) // 2 # number of layers in the neural networks
# Update rule for each parameter
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l+1)] = parameters["W" + str(l+1)]-learning_rate*grads['dW' + str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)]-learning_rate*grads['db' + str(l+1)]
### END CODE HERE ###
return parameters
parameters, grads, learning_rate = update_parameters_with_gd_test_case()
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
# GRADED FUNCTION: random_mini_batches
def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
"""Creates a list of random minibatches from (X, Y)
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
mini_batch_size -- size of the mini-batches, integer
Returns:
mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
"""
np.random.seed(seed) # To make your "random" minibatches the same as ours
m = X.shape[1] # number of training examples
mini_batches = []
# Step 1: Shuffle (X, Y)
permutation = list(np.random.permutation(m))
shuffled_X = X[:, permutation]
shuffled_Y = Y[:, permutation].reshape((1,m))
# Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitionning
for k in range(0, num_complete_minibatches):
### START CODE HERE ### (approx. 2 lines)
mini_batch_X = shuffled_X[:, k * mini_batch_size:(k+1) * mini_batch_size]
mini_batch_Y = shuffled_Y[:, k * mini_batch_size:(k+1) * mini_batch_size]
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
# Handling the end case (last mini-batch < mini_batch_size)
if m % mini_batch_size != 0:
### START CODE HERE ### (approx. 2 lines)
mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size:]
mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size:]
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
return mini_batches
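# For example, with m = 148 examples and mini_batch_size = 64, the code above
# yields two complete mini-batches of 64 examples each plus a final
# mini-batch of 148 - 2 * 64 = 20 examples.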
X_assess, Y_assess, mini_batch_size = random_mini_batches_test_case()
mini_batches = random_mini_batches(X_assess, Y_assess, mini_batch_size)
print ("shape of the 1st mini_batch_X: " + str(mini_batches[0][0].shape))
print ("shape of the 2nd mini_batch_X: " + str(mini_batches[1][0].shape))
print ("shape of the 3rd mini_batch_X: " + str(mini_batches[2][0].shape))
print ("shape of the 1st mini_batch_Y: " + str(mini_batches[0][1].shape))
print ("shape of the 2nd mini_batch_Y: " + str(mini_batches[1][1].shape))
print ("shape of the 3rd mini_batch_Y: " + str(mini_batches[2][1].shape))
print ("mini batch sanity check: " + str(mini_batches[0][0][0][0:3]))
# GRADED FUNCTION: initialize_velocity
def initialize_velocity(parameters):
"""Initializes the velocity as a python dictionary with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
Returns:
v -- python dictionary containing the current velocity.
v['dW' + str(l)] = velocity of dWl
v['db' + str(l)] = velocity of dbl
"""
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
# Initialize velocity
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = np.zeros_like(parameters["W"+str(l+1)])
v["db" + str(l+1)] = np.zeros_like(parameters["W"+str(l+1)])
### END CODE HERE ###
return v
parameters = initialize_velocity_test_case()
v = initialize_velocity(parameters)
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
# GRADED FUNCTION: update_parameters_with_momentum
def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
"""Update parameters using Momentum
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- python dictionary containing the current velocity:
v['dW' + str(l)] = ...
v['db' + str(l)] = ...
beta -- the momentum hyperparameter, scalar
learning_rate -- the learning rate, scalar
Returns:
parameters -- python dictionary containing your updated parameters
v -- python dictionary containing your updated velocities
"""
L = len(parameters) // 2 # number of layers in the neural networks
# Momentum update for each parameter
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
# compute velocities
v["dW" + str(l+1)] = beta* v["dW" + str(l+1)] + (1-beta)*grads["dW"+str(l+1)]
v["db" + str(l+1)] = beta * v["db" + str(l + 1)] + (1 - beta) * grads['db' + str(l + 1)]
# update parameters
parameters["W" + str(l+1)] = parameters["W" + str(l + 1)] - learning_rate * v["dW" + str(l + 1)]
parameters["b" + str(l+1)] = parameters["b" + str(l + 1)] - learning_rate * v["db" + str(l + 1)]
### END CODE HERE ###
return parameters, v
parameters, grads, v = update_parameters_with_momentum_test_case()
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta = 0.9, learning_rate = 0.01)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
# GRADED FUNCTION: initialize_adam
def initialize_adam(parameters):
"""Initializes v and s as two python dictionaries with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters["W" + str(l)] = Wl
parameters["b" + str(l)] = bl
Returns:
v -- python dictionary that will contain the exponentially weighted average of the gradient.
v["dW" + str(l)] = ...
v["db" + str(l)] = ...
s -- python dictionary that will contain the exponentially weighted average of the squared gradient.
s["dW" + str(l)] = ...
s["db" + str(l)] = ...
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
s = {}
# Initialize v, s. Input: "parameters". Outputs: "v, s".
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
v["dW" + str(l+1)] = np.zeros_like(parameters["W" + str(l + 1)])
v["db" + str(l+1)] = np.zeros_like(parameters["b" + str(l + 1)])
s["dW" + str(l+1)] = np.zeros_like(parameters["W" + str(l + 1)])
s["db" + str(l+1)] = np.zeros_like(parameters["b" + str(l + 1)])
### END CODE HERE ###
return v, s
parameters = initialize_adam_test_case()
v, s = initialize_adam(parameters)
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
print("s[\"dW1\"] = " + str(s["dW1"]))
print("s[\"db1\"] = " + str(s["db1"]))
print("s[\"dW2\"] = " + str(s["dW2"]))
print("s[\"db2\"] = " + str(s["db2"]))
# GRADED FUNCTION: update_parameters_with_adam
def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8):
"""Update parameters using Adam
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
learning_rate -- the learning rate, scalar.
beta1 -- Exponential decay hyperparameter for the first moment estimates
beta2 -- Exponential decay hyperparameter for the second moment estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
Returns:
parameters -- python dictionary containing your updated parameters
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
"""
L = len(parameters) // 2 # number of layers in the neural networks
v_corrected = {} # Initializing first moment estimate, python dictionary
s_corrected = {} # Initializing second moment estimate, python dictionary
# Perform Adam update on all parameters
for l in range(L):
# Moving average of the gradients. Inputs: "v, grads, beta1". Output: "v".
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = beta1 * v["dW" + str(l + 1)] + (1 - beta1) * grads['dW' + str(l + 1)]
v["db" + str(l+1)] = beta1 * v["db" + str(l + 1)] + (1 - beta1) * grads['db' + str(l + 1)]
### END CODE HERE ###
# Compute bias-corrected first moment estimate. Inputs: "v, beta1, t". Output: "v_corrected".
### START CODE HERE ### (approx. 2 lines)
v_corrected["dW" + str(l+1)] = v["dW" + str(l + 1)] / (1 - np.power(beta1, t))
v_corrected["db" + str(l+1)] = v["db" + str(l + 1)] / (1 - np.power(beta1, t))
### END CODE HERE ###
# Moving average of the squared gradients. Inputs: "s, grads, beta2". Output: "s".
### START CODE HERE ### (approx. 2 lines)
s["dW" + str(l+1)] = beta2 * s["dW" + str(l + 1)] + (1 - beta2) * np.power(grads['dW' + str(l + 1)], 2)
s["db" + str(l+1)] = beta2 * s["db" + str(l + 1)] + (1 - beta2) * np.power(grads['db' + str(l + 1)], 2)
### END CODE HERE ###
# Compute bias-corrected second raw moment estimate. Inputs: "s, beta2, t". Output: "s_corrected".
### START CODE HERE ### (approx. 2 lines)
s_corrected["dW" + str(l+1)] = s["dW" + str(l + 1)] / (1 - np.power(beta2, t))
s_corrected["db" + str(l+1)] = s["db" + str(l + 1)] / (1 - np.power(beta2, t))
### END CODE HERE ###
# Update parameters. Inputs: "parameters, learning_rate, v_corrected, s_corrected, epsilon". Output: "parameters".
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l+1)] = parameters["W" + str(l + 1)] - learning_rate * v_corrected["dW" + str(l + 1)] / np.sqrt(s["dW" + str(l + 1)] + epsilon)
parameters["b" + str(l+1)] = parameters["b" + str(l + 1)] - learning_rate * v_corrected["db" + str(l + 1)] / np.sqrt(s["db" + str(l + 1)] + epsilon)
### END CODE HERE ###
return parameters, v, s
parameters, grads, v, s = update_parameters_with_adam_test_case()
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t = 2)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
print("s[\"dW1\"] = " + str(s["dW1"]))
print("s[\"db1\"] = " + str(s["db1"]))
print("s[\"dW2\"] = " + str(s["dW2"]))
print("s[\"db2\"] = " + str(s["db2"]))
train_X, train_Y = load_dataset()
def model(X, Y, layers_dims, optimizer, learning_rate = 0.0007, mini_batch_size = 64, beta = 0.9,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8, num_epochs = 10000, print_cost = True):
"""3-layer neural network model which can be run in different optimizer modes.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
layers_dims -- python list, containing the size of each layer
learning_rate -- the learning rate, scalar.
mini_batch_size -- the size of a mini batch
beta -- Momentum hyperparameter
beta1 -- Exponential decay hyperparameter for the past gradients estimates
beta2 -- Exponential decay hyperparameter for the past squared gradients estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
num_epochs -- number of epochs
print_cost -- True to print the cost every 1000 epochs
Returns:
parameters -- python dictionary containing your updated parameters
"""
L = len(layers_dims) # number of layers in the neural networks
costs = [] # to keep track of the cost
t = 0 # initializing the counter required for Adam update
seed = 10 # For grading purposes, so that your "random" minibatches are the same as ours
# Initialize parameters
parameters = initialize_parameters(layers_dims)
# Initialize the optimizer
if optimizer == "gd":
pass # no initialization required for gradient descent
elif optimizer == "momentum":
v = initialize_velocity(parameters)
elif optimizer == "adam":
v, s = initialize_adam(parameters)
# Optimization loop
for i in range(num_epochs):
# Define the random minibatches. We increment the seed to reshuffle differently the dataset after each epoch
seed = seed + 1
minibatches = random_mini_batches(X, Y, mini_batch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# Forward propagation
a3, caches = forward_propagation(minibatch_X, parameters)
# Compute cost
cost = compute_cost(a3, minibatch_Y)
# Backward propagation
grads = backward_propagation(minibatch_X, minibatch_Y, caches)
# Update parameters
if optimizer == "gd":
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
elif optimizer == "momentum":
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)
elif optimizer == "adam":
t = t + 1 # Adam counter
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s,
t, learning_rate, beta1, beta2, epsilon)
# Print the cost every 1000 epoch
if print_cost and i % 1000 == 0:
print ("Cost after epoch %i: %f" %(i, cost))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('epochs (per 100)')
plt.title("Learning rate = " + str(learning_rate))
plt.show()
return parameters
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "gd")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Gradient Descent optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, beta = 0.9, optimizer = "momentum")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Momentum optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "adam")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Adam optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a dictionary of all possible 1 to 6-tile leaves. Also, add functionality for sorting by an arbitrary key - allowing us to put rarest letters first
Step2: The bottom creates the full set of leaves for all lengths from 1-5 (6 breaks on my local machine)
Step3: Order the rack (originally alphabetical, but now using a custom key that puts the rarest letters first for maximum efficiency). Note that this sorting is slower than alphabetical ordering because it has to use the index function, but it should pay off through subsequent performance gains.
Step4: Set up dataframe for storing EV of all leaves.
Step5: To find all of the racks corresponding to a particular leave, we have added columns to the dataframe of plays df marking each letter (A, B, C...) and also for duplicates (AA, BB, CC...) and triplicates where possible (AAA, DDD, EEE...).
Step6: Benchmark figures
Step7: Calculate leave "synergy", in other words the difference between the EV of the rack and what we'd expect just from adding the individual values of the tiles
|
<ASSISTANT_TASK:>
Python Code:
from itertools import combinations
import numpy as np
import pandas as pd
import seaborn as sns
from string import ascii_uppercase
import time as time
%matplotlib inline
maximum_superleave_length = 5
ev_calculator_max_length = 5
log_file = 'log_games.csv'
tilebag = ['A']*9+['B']*2+['C']*2+['D']*4+['E']*12+\
['F']*2+['G']*3+['H']*2+['I']*9+['J']*1+\
['K']*1+['L']*4+['M']*2+['N']*6+['O']*8+\
['P']*2+['Q']*1+['R']*6+['S']*4+['T']*6+\
['U']*4+['V']*2+['W']*2+['X']*1+['Y']*2+\
['Z']*1+['?']*2
tiles = [x for x in ascii_uppercase] + ['?']
# potential future improvement: calculate optimal order of letters on the fly
rarity_key = 'ZXKJQ?HYMFPWBCVSGDLURTNAOIE'
# alphabetical_key = '?ABCDEFGHIJKLMNOPQRSTUVWXYZ'
sort_func = lambda x: rarity_key.index(x)
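# A quick illustration of the custom ordering: sorting a rack with this key
# puts the rarest tiles first.
print(''.join(sorted('QUIZZED?', key=sort_func)))  # -> 'ZZQ?DUIE'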
t0 = time.time()
leaves = {i:sorted(list(set(list(combinations(tilebag,i))))) for i in
range(1,maximum_superleave_length+1)}
# turn leaves from lists of letters into strings
# algorithm runs faster if leaves non-alphabetical!
for i in range(1,maximum_superleave_length+1):
leaves[i] = [''.join(sorted(leave, key=sort_func))
for leave in leaves[i]]
t1 = time.time()
print('Calculated superleaves up to length {} in {} seconds'.format(
maximum_superleave_length,t1-t0))
for i in range(1,maximum_superleave_length+1):
print(i,len(leaves[i]))
column_dict = {
0:'rack',
1:'score',
2:'tiles_remaining'
}
df = pd.read_csv(log_file, header=None, keep_default_na=False)
df.rename(columns=column_dict,inplace=True)
tile_limit = 1
df = df.loc[df['tiles_remaining']>=tile_limit]
df = df.iloc[:2000000]
t0 = time.time()
df['rack'] = df['rack'].apply(lambda x: ''.join(sorted(x, key=sort_func)))
t1 = time.time()
print(t1-t0)
tb = time.time()
df_dict = {'': df}
for multiple in range(1,maximum_superleave_length+1):
t0 = time.time()
# iterate through all 27 tiles
for c in leaves[1]:
if multiple*c in leaves[multiple]:
condition = df_dict[(multiple-1)*c]['rack'].apply(lambda x: multiple*c in x)
df_dict[multiple*c] = df_dict[(multiple-1)*c].loc[condition]
df[multiple*c] = condition
df[multiple*c].fillna(False, inplace=True)
t1 = time.time()
print('Added columns for all duplicates up to length {} in {} seconds'.format(multiple,t1-t0))
te = time.time()
print('Added all necessary columns in {} seconds'.format(te-tb))
all_leaves = []
for i in range(1,ev_calculator_max_length+1):
all_leaves += leaves[i]
df_dict = {leave: pd.DataFrame() for leave in all_leaves}
df_dict[''] = df
ev_df = pd.DataFrame(columns=['mean','std','count','ev','synergy'],
index=all_leaves)
def get_columns(leave):
letters=list(set(leave))
tags = []
for l in letters:
tags += [sum([l==letter for letter in leave])*l]
return tags
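# For example, get_columns('AAB') returns ['AA', 'B'] (possibly in the
# opposite order, since it iterates over a set): 'AA' marks racks holding at
# least two A's and 'B' marks racks holding a B.
print(get_columns('AAB'))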
for leave_length in range(3,5):
print(leave_length)
t0 = time.time()
for leave in leaves[leave_length]:
print(leave)
print(len(df_dict[leave[:-1]]))
t2 = time.time()
condition = df_dict[leave[:-1]][get_columns(leave)].all(axis=1)
t3 = time.time()
df_dict[leave] = df_dict[leave[:-1]].loc[condition]
t4 = time.time()
ev_df.loc[leave]['mean'] = df_dict[leave]['score'].mean()
t5 = time.time()
ev_df.loc[leave]['std'] = df_dict[leave]['score'].std()
t6 = time.time()
ev_df.loc[leave]['count'] = len(df_dict[leave])
t7 = time.time()
print('condition calc time (ms): {:.5f} ({})'.format(1000*(t3-t2),100*(t3-t2)/(t7-t2)))
print('filtering time (ms): {:.5f} ({})'.format(1000*(t4-t3),100*(t4-t3)/(t7-t2)))
print('mean calc time (ms): {:.5f} ({})'.format(1000*(t5-t4),100*(t5-t4)/(t7-t2)))
print('std calc time (ms): {:.5f} ({})'.format(1000*(t6-t5),100*(t6-t5)/(t7-t2)))
print('count calc time (ms): {:.5f} ({})'.format(1000*(t7-t6),100*(t7-t6)/(t7-t2)))
t1 = time.time()
print('Calculated mean, std and count in {} seconds'.format(t1-t0))
for leave_length in range(1,ev_calculator_max_length+1):
print(leave_length)
t0 = time.time()
for leave in leaves[leave_length]:
condition = df_dict[leave[:-1]][get_columns(leave)].all(axis=1)
df_dict[leave] = df_dict[leave[:-1]].loc[condition]
ev_df.loc[leave]['mean'] = df_dict[leave]['score'].mean()
ev_df.loc[leave]['std'] = df_dict[leave]['score'].std()
ev_df.loc[leave]['count'] = len(df_dict[leave])
t1 = time.time()
print('Calculated mean, std and count in {} seconds'.format(t1-t0))
ev_df['pct'] = 100*ev_df['count']/len(df)
ev_df['ev'] = ev_df['mean']-df['score'].mean()
for leave_length in range(2,ev_calculator_max_length+1):
for leave in leaves[leave_length]:
ev_df.loc[leave]['synergy'] = ev_df.loc[leave]['ev']-\
sum([ev_df.loc[c]['ev'] for c in leave])
ev_df
ev_df.to_csv('leave_values_011219_v7.csv')
ev_df.sort_values('synergy')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
|
<ASSISTANT_TASK:>
Python Code:
# DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (0, 10)
# DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab,0)}
int_to_vocab = {ii: word for ii, word in enumerate(vocab,0)}
return vocab_to_int, int_to_vocab
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
"""Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
d={'.':'||Period||',
',':'||Comma||',
'"':'||QuotationMark||',
';':'||Semicolon||',
'!':'||Exclamationmark||',
'?':'||Questionmark||',
'(':'||LeftParentheses||',
')':'||RightParentheses||',
'--':'||Dash||',
'\n':'||Return||',
}
return d
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
# DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
# DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
# DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
"""Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
input_ = tf.placeholder(tf.int32, [None, None], name='input')
targets_ = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return (input_, targets_, learning_rate)
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
def get_init_cell(batch_size, rnn_size):
"""Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
rnn_layers=1
keep_prob=0.9
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * rnn_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state=tf.identity(initial_state,"initial_state")
return cell, initial_state
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
"""Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
embedding = tf.Variable(tf.truncated_normal((vocab_size, embed_dim),stddev=0.08))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
"""Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs,dtype=tf.float32)
final_state=tf.identity(final_state,"final_state")
return outputs, final_state
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
embed=get_embed(input_data, vocab_size, embed_dim)
outputs,final_state=build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None,
weights_initializer = tf.truncated_normal_initializer(stddev=0.1),
biases_initializer=tf.zeros_initializer())
return logits, final_state
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
"""Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
# TODO: Implement Function
N=len(int_text)
batch_data_size=batch_size*seq_length
nBatches=int(N/batch_data_size)
last=nBatches*batch_data_size
x=int_text[:last]
if last==N:
y=int_text[1:last]
y.append(0)
else:
y=int_text[1:last+1]
batches=[]
for b in range(nBatches):
xb=x[b*batch_data_size:(b+1)*batch_data_size]
yb=y[b*batch_data_size:(b+1)*batch_data_size]
batch_x=[]
batch_y=[]
for s in range(batch_size):
xs=xb[s*seq_length:(s+1)*seq_length]
ys=yb[s*seq_length:(s+1)*seq_length]
batch_x.append(xs)
batch_y.append(ys)
batches.append([batch_x,batch_y])
#print(batches[1][0][:5])
#print(batches[1][1][:5])
return np.array(batches)
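# An informal check of the batching above (separate from the unit test
# below): 13 token ids with batch_size=2 and seq_length=3 give 2 batches,
# each holding an input block and a target block shifted one step ahead.
demo = get_batches(list(range(13)), 2, 3)
print(demo.shape)   # (2, 2, 2, 3)
print(demo[0][0])   # inputs:  [[0 1 2] [3 4 5]]
print(demo[0][1])   # targets: [[1 2 3] [4 5 6]]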
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 64
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 15
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 100
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
# DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
# DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
# DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
# DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
"""Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
InputTensor = loaded_graph.get_tensor_by_name("input:0")
InitialStateTensor = loaded_graph.get_tensor_by_name("initial_state:0")
FinalStateTensor = loaded_graph.get_tensor_by_name("final_state:0")
ProbsTensor = loaded_graph.get_tensor_by_name("probs:0")
return InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
"""Pick the next word in the generated text
:param probabilities: Probabilities of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
idx=np.random.choice(probabilities.shape[0], 1, p=probabilities)
word=int_to_vocab[idx[0]]
return word
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set Up Verta
Step2: Imports
Step3: Create a helper function to plot graphs
Step4: Setup input pipeline
Step5: As this is a subwords tokenizer, it can be passed any string and the tokenizer will tokenize it.
Step6: The tokenizer encodes the string by breaking it into subwords if the word is not in its dictionary.
Step7: Create the model
Step8: Compile the Keras model to configure the training process
Step9: Train the model
Step10: Testing and Prediction
Step11: The above model does not mask the padding applied to the sequences. This can lead to skewness if we train on padded sequences and test on un-padded sequences. Ideally the model would learn to ignore the padding, but as you can see below it does have a small effect on the output.
Step12: Saving Models
|
<ASSISTANT_TASK:>
Python Code:
# Python 3.6
!pip install verta
!pip install matplotlib==3.1.1
!pip install tensorflow==2.0.0-beta1
!pip install tensorflow-hub==0.5.0
!pip install tensorflow-datasets==1.0.2
HOST = 'app.verta.ai'
PROJECT_NAME = 'Text-Classification'
EXPERIMENT_NAME = 'RNN'
# import os
# os.environ['VERTA_EMAIL'] =
# os.environ['VERTA_DEV_KEY'] =
from verta import Client
from verta.utils import ModelAPI
client = Client(HOST, use_git=False)
proj = client.set_project(PROJECT_NAME)
expt = client.set_experiment(EXPERIMENT_NAME)
run = client.set_experiment_run()
from __future__ import absolute_import, division, print_function, unicode_literals
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
import json
import matplotlib.pyplot as plt
import tensorflow_datasets as tfds
import tensorflow as tf
def plot_graphs(history, string, run, plot_title):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel('Epochs')
plt.ylabel(string)
plt.legend([string, 'val_'+string])
run.log_image(plot_title, plt)
plt.show()
# loading the dataset
dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True,
as_supervised=True)
train_dataset, test_dataset = dataset['train'], dataset['test']
tokenizer = info.features['text'].encoder
print ('Vocabulary size: {}'.format(tokenizer.vocab_size))
sample_string = 'The latest Marvel movie - Endgame was amazing!'
tokenized_string = tokenizer.encode(sample_string)
print ('Tokenized string is {}'.format(tokenized_string))
original_string = tokenizer.decode(tokenized_string)
print ('The original string: {}'.format(original_string))
assert original_string == sample_string
for ts in tokenized_string:
print ('{} ----> {}'.format(ts, tokenizer.decode([ts])))
BUFFER_SIZE = 10000
BATCH_SIZE = 64
train_dataset = train_dataset.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.padded_batch(BATCH_SIZE, train_dataset.output_shapes)
test_dataset = test_dataset.padded_batch(BATCH_SIZE, test_dataset.output_shapes)
hyperparams = {
'num_epochs': 10,
'optimizer': 'adam',
'loss': 'binary_crossentropy',
'vocab_size': tokenizer.vocab_size,
'metrics': 'accuracy'
}
# logging hyperparameters
run.log_hyperparameters(hyperparams)
model = tf.keras.Sequential([
tf.keras.layers.Embedding(tokenizer.vocab_size, 64),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss=hyperparams['loss'],
optimizer=hyperparams['optimizer'],
metrics=[hyperparams['metrics']])
# called at the end of each epoch - logging loss, accuracy as observations for the run
class LossAndErrorLoggingCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
print('The average loss for epoch {} is {:7.2f}, accuracy is {:7.2f}.'.format(epoch, logs['loss'], logs['accuracy']))
run.log_observation("train_loss", float(logs['loss']))
run.log_observation("train_acc", float(logs['accuracy']))
run.log_observation("val_loss", float(logs['val_loss']))
run.log_observation("val_acc", float(logs['val_accuracy']))
history = model.fit(train_dataset,
epochs=hyperparams['num_epochs'],
validation_data=test_dataset,
callbacks=[LossAndErrorLoggingCallback()])
test_loss, test_acc = model.evaluate(test_dataset)
# logging metrics
run.log_metric('test_loss', float(test_loss))
run.log_metric('test_accuracy', float(test_acc))
print('Test Loss: {}'.format(test_loss))
print('Test Accuracy: {}'.format(test_acc))
def pad_to_size(vec, size):
zeros = [0] * (size - len(vec))
vec.extend(zeros)
return vec
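# For example, padding a 3-token sequence out to length 6:
print(pad_to_size([1, 2, 3], 6))  # [1, 2, 3, 0, 0, 0]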
def sample_predict(sentence, pad):
tokenized_sample_pred_text = tokenizer.encode(sentence)
if pad:
tokenized_sample_pred_text = pad_to_size(tokenized_sample_pred_text, 64)
predictions = model.predict(tf.expand_dims(tokenized_sample_pred_text, 0))
return (predictions)
# predict on a sample text without padding.
sample_pred_text = ('Spiderman: Far From Home did not disappoint! I loved it!')
predictions = sample_predict(sample_pred_text, pad=False)
print (predictions)
predictions = sample_predict(sample_pred_text, pad=True)
print (predictions)
# plotting graphs to see variation in accuracy and loss
plot_graphs(history, 'accuracy', run, 'epochs_vs_acc')
plot_graphs(history, 'loss', run, 'epochs_vs_loss')
# serialize model to JSON
model_json = model.to_json()
with open("model.json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("model.h5")
print("Saved model to disk")
run.log_artifact('model_summary_json', 'model.json')
run.log_artifact('model_weights', 'model.h5')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Get Cloud Project ID
Step2: 3. Get Client Credentials
Step3: 4. Enter CM360 Conversion Upload From Sheets Parameters
Step4: 5. Execute CM360 Conversion Upload From Sheets
|
<ASSISTANT_TASK:>
Python Code:
!pip install git+https://github.com/google/starthinker
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
FIELDS = {
'dcm_account': '',
'auth_read': 'user', # Credentials used for reading data.
'floodlight_activity_id': '',
'floodlight_conversion_type': 'encryptedUserId',
'encryption_entity_id': '',
'encryption_entity_type': 'DCM_ACCOUNT',
'encryption_entity_source': 'DATA_TRANSFER',
'sheet_url': '',
'sheet_tab': '',
'sheet_range': '',
}
print("Parameters Set To: %s" % FIELDS)
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'conversion_upload': {
'auth': 'user',
'account_id': {'field': {'name': 'dcm_account','kind': 'string','order': 0,'default': ''}},
'activity_id': {'field': {'name': 'floodlight_activity_id','kind': 'integer','order': 1,'default': ''}},
'conversion_type': {'field': {'name': 'floodlight_conversion_type','kind': 'choice','order': 2,'choices': ['encryptedUserId','encryptedUserIdCandidates','gclid','mobileDeviceId'],'default': 'encryptedUserId'}},
'encryptionInfo': {
'encryptionEntityId': {'field': {'name': 'encryption_entity_id','kind': 'integer','order': 3,'default': ''}},
'encryptionEntityType': {'field': {'name': 'encryption_entity_type','kind': 'choice','order': 4,'choices': ['ADWORDS_CUSTOMER','DBM_ADVERTISER','DBM_PARTNER','DCM_ACCOUNT','DCM_ADVERTISER','ENCRYPTION_ENTITY_TYPE_UNKNOWN'],'default': 'DCM_ACCOUNT'}},
'encryptionSource': {'field': {'name': 'encryption_entity_source','kind': 'choice','order': 5,'choices': ['AD_SERVING','DATA_TRANSFER','ENCRYPTION_SCOPE_UNKNOWN'],'default': 'DATA_TRANSFER'}}
},
'sheets': {
'url': {'field': {'name': 'sheet_url','kind': 'string','order': 9,'default': ''}},
'tab': {'field': {'name': 'sheet_tab','kind': 'string','order': 10,'default': ''}},
'range': {'field': {'name': 'sheet_range','kind': 'string','order': 11,'default': ''}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Feature engineering
Step2: Eventually, we will tokenize the bid information on spaces, so we remove any additional spaces from the data.
Step3: A key part of the approach is dealing with the bids in chronological order. Hence, we sort the bids in ascending order of bidder_ids and time.
Step4: Then we initialize a bids-dataframe where the aggregated bid information for each bidder_id will be gathered.
Step5: As a first entry into the bids-dataframe, we count the number of auctions for each bidder_id.
Step6: Then, we compute the time differences between bids and store them in the df_bids_sorted-dataframe. These time differences are included in both numeric and string form. I noticed that some time differences occur quite frequently, and text processing of the time differences should be able to identify these kinds of patterns in the data.
Step7: In the following, the main aggregation step for the bid data is performed. For each column, the data is first converted into slightly more readable form. For example for urls the data is transformed from the form 0esea7scvgr82he to the form url_0esea7scvgr82he. This was done to make the different entries more identifiable in the case of evaluating feature importances, but in the end this was not utilized to any significant extent. The entries for each bidder were concatenated with a space-delimiter to generate the aggregated text data. The result is a long string of space-delimited entries for each column. E.g. for the device-column, we can have a string of the type device_phone167 device_phone172 device_phone167 etc.
Step8: One idea I had was that the distribution of time differences between bids could be a significant predictor of bot activity. Hence, I computed a number of different descriptive statistics related to the times and time differences. In the following, the min and max times and time differences are computed along with the time difference range, mean, median and the first to ninth deciles.
Step9: This completes the bid aggregation step. After this, we read the train- and test-data and merge it with the bid data.
Step10: We delete the redundant dataframes and run garbage collection.
Step11: Using regular expressions, the text and numeric columns are identified. For missing values in numeric columns, we fill in the column mean; for missing values in text columns, we fill in an empty string.
Step12: Now, we split the df_combo-dataframe into train and test data. First, we grab the columns for the test data and check that the order matches that in the sample submission.
Step13: We put the numeric columns into matrices xtrain and xtest. The processed sparse text frequency matrices will then be concatenated to them. We will also keep track of the feature names for model interpretation.
Step14: The data is scaled using scikit-learn's MinMaxScaler. We use the MinMaxScaler as it leads to non-negative values, which is useful later on as scikit-learn's chi2-feature selection function only works on non-negative data.
Step15: As a final step before text processing, the numeric xtrain- and xtest-matrices are converted into sparse matrices to prepare for the concatenation with tfidf-matrices later on.
Step16: Text processing
Step17: We use slightly different tfidf-parameters for the different columns. These are stored in a text_params-dictionary.
Step18: The parameter choices were based on some brief univariate tests of auc-performance. The address and account-fields were not found to be very useful and were dropped from the data.
Step19: Finally, we pickle and save the xtrain-matrix and the y-vector. These objects are required for running the grid search script hyperopt_xgb.py.
Step20: Model
Step21: This submission should score around 0.93698 on the private leaderboard for a rank of 18th. The hyperparameters were determined through a grid search. In the Github-repository, there is a script called hyperopt_xgb.py that shows how this type of search could be performed.
Step22: Then we fit an xgboost-model using the standard python interface
Step23: Then we grab the feature importances from the booster and place them in a dataframe
Step24: We do some further processing of the df_imp-dataframe to get the dataframe ready for plotting. We will plot only the 20 most significant features.
Step25: Finally, we plot the feature importances
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import re
import gc
import numpy as np
from scipy import sparse
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectPercentile, chi2
from sklearn.externals import joblib
import xgboost as xgb
df_bids = pd.read_csv('data/bids.csv')
df_bids = df_bids.replace({' ': ''}, regex = True) #remove spaces
df_bids_sorted = df_bids.sort(['bidder_id', 'time'], ascending = [True, True])
#dataframe for aggregated bid data
bids = pd.DataFrame(data = df_bids_sorted['bidder_id'].unique(), columns = ['bidder_id'],
index = df_bids_sorted['bidder_id'].unique())
#auction counts
counts = df_bids_sorted.groupby('bidder_id')['bidder_id'].agg('count')
bids['auction_count_num'] = counts
timediff = df_bids_sorted.groupby('bidder_id')['time'].diff()
timediff_str = timediff.astype(str).fillna('')
df_bids_sorted['timediff_num'] = timediff
df_bids_sorted['timediff'] = timediff_str
#turn feature sequences into text
text_cols = ['auction', 'merchandise', 'device', 'timediff', 'country', 'ip', 'url']
for var in text_cols:
df_bids_sorted[var] = var + "_" + df_bids_sorted[var].fillna("")
text_str = var + '_text'
count_str = var + '_nunique_num'
bids[text_str] = df_bids_sorted.groupby('bidder_id')[var].apply(lambda x: "%s" % ' '.join(x))
bids[count_str] = df_bids_sorted.groupby('bidder_id')[var].nunique()
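# A toy version of the aggregation above: per-bidder entries in a column are
# joined into a single space-delimited string.
toy = pd.DataFrame({'bidder_id': ['b1', 'b1', 'b2'],
                    'device': ['device_phone167', 'device_phone172', 'device_phone167']})
print(toy.groupby('bidder_id')['device'].apply(lambda x: ' '.join(x)))
# b1 -> 'device_phone167 device_phone172', b2 -> 'device_phone167'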
max_time = df_bids_sorted.groupby('bidder_id')['time'].max()
bids['maxtime_num'] = max_time
min_time = df_bids_sorted.groupby('bidder_id')['time'].min()
bids['mintime_num'] = min_time
max_diff = df_bids_sorted.groupby('bidder_id')['timediff_num'].max()
max_diff = max_diff.fillna(max_diff.mean())
bids['maxdiff_num'] = max_diff
min_diff = df_bids_sorted.groupby('bidder_id')['timediff_num'].min()
min_diff = min_diff.fillna(min_diff.mean())
bids['mindiff_num'] = min_diff
range_diff = max_diff - min_diff
bids['rangediff_num'] = range_diff
mean_diff = df_bids_sorted.groupby('bidder_id')['timediff_num'].mean()
mean_diff = mean_diff.fillna(mean_diff.mean())
bids['meandiff_num'] = mean_diff
median_diff = df_bids_sorted.groupby('bidder_id')['timediff_num'].median()
median_diff = median_diff.fillna(median_diff.mean())
bids['mediandiff_num'] = median_diff
for q in [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]:
q_string = 'diff_quantile_num_' + str(q).replace('.', '_')
q_temp = df_bids_sorted.groupby('bidder_id')['timediff_num'].quantile(q)
q_temp = q_temp.fillna(q_temp.mean())
bids[q_string] = q_temp
df_train = pd.read_csv('data/train.csv')
df_test = pd.read_csv('data/test.csv')
df_combo = df_train.append(df_test)
df_combo['address_text'] = 'address_' + df_combo['address'].fillna('')
df_combo['account_text'] = 'account_' + df_combo['payment_account'].fillna('')
df_combo = df_combo.merge(bids, how = 'left', left_on = ['bidder_id'], right_on = ['bidder_id'])
del df_train
del df_test
del df_bids
del df_bids_sorted
del bids
gc.collect();
num_cols = filter(re.compile('num').search, df_combo.columns)
text_cols = filter(re.compile('text').search, df_combo.columns)
for col in num_cols:
df_combo[col] = df_combo[col].fillna(df_combo[col].mean())
for col in text_cols:
df_combo[col] = df_combo[col].fillna('')
sample = pd.read_csv('submissions/sampleSubmission.csv')
test_dat = df_combo[df_combo.bidder_id.isin(sample.bidder_id)]
#test
print((sample.bidder_id.values == test_dat['bidder_id'].values).all())
train_dat = df_combo[~pd.isnull(df_combo.outcome)]
y = train_dat.outcome.values
xtrain = train_dat[num_cols].values
xtest = test_dat[num_cols].values
col_names = num_cols
sc = MinMaxScaler().fit(np.vstack((xtrain, xtest)))
xtrain = sc.transform(xtrain)
xtest = sc.transform(xtest)
xtrain = sparse.csr_matrix(xtrain)
xtest = sparse.csr_matrix(xtest)
def tokens(x):
return x.split(' ')
text_params = {}
text_params['address_text'] = {'include':False}
text_params['account_text'] = {'include':False}
text_params['auction_text'] = {'include':True, 'mindf':5, 'ngram':(1,3), 'token':'tokens'}
text_params['merchandise_text'] = {'include':True, 'mindf':5, 'ngram':(1,3), 'token':'tokens'}
text_params['device_text'] = {'include':True, 'mindf':5, 'ngram':(1,3), 'token':'tokens'}
text_params['timediff_text'] = {'include':True, 'mindf':1, 'ngram':(1,1), 'token':'tokens'}
text_params['country_text'] = {'include':True, 'mindf':5, 'ngram':(1,3), 'token':'tokens'}
text_params['ip_text'] = {'include':True, 'mindf':1, 'ngram':(1,1), 'token':'nottokens'}
text_params['url_text'] = {'include':True, 'mindf':5, 'ngram':(1,3), 'token':'tokens'}
for col in text_cols:
if not text_params[col]['include']:
continue
else:
if text_params[col]['token'] == 'tokens':
vect = TfidfVectorizer(tokenizer = tokens, min_df = text_params[col]['mindf'],
ngram_range = text_params[col]['ngram'])
else:
vect = TfidfVectorizer(min_df = text_params[col]['mindf'],
ngram_range = text_params[col]['ngram'])
documents = df_combo[col].values
vect.fit(documents)
col_names = col_names + list(vect.get_feature_names_out()) # get_feature_names() was removed in scikit-learn 1.2
xtr_tmp = vect.transform(train_dat[col].values)
xte_tmp = vect.transform(test_dat[col].values)
xtrain = sparse.hstack((xtrain, xtr_tmp))
xtest = sparse.hstack((xtest, xte_tmp))
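# sparse.hstack returns a COO matrix; converting back to CSR is a small optional
# step (not in the original) that keeps row slicing and the fit/transform calls
# below efficient.
xtrain = sparse.csr_matrix(xtrain)
xtest = sparse.csr_matrix(xtest)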
joblib.dump(xtrain, 'data/xtrain.pkl', compress = 3);
joblib.dump(y, 'data/y.pkl', compress = 3);
#feature selection
feats_25 = SelectPercentile(chi2, percentile=25).fit(xtrain, y)
xtrain = feats_25.transform(xtrain)
xtest = feats_25.transform(xtest)
clf = xgb.XGBClassifier(objective = 'binary:logistic',
learning_rate = 0.05,
max_depth = 5,
nthread = 8,
seed = 42,
subsample = 0.4,
colsample_bytree = 0.7,
min_child_weight = 1,
n_estimators = 100,
gamma = 0.15, silent = True)
#bag of 15 models
rounds = 15
preds_mat = np.zeros((len(sample.index), rounds))
for i in range(rounds):
clf.set_params(seed = i + 1)
clf.fit(xtrain, y)
preds_tmp = clf.predict_proba(xtest)[:, 1]
preds_mat[:, i] = preds_tmp
bagged_preds = preds_mat.mean(axis = 1)
sample.prediction = bagged_preds
sample.to_csv('submissions/facebook_submission.csv', index = False)
col_names = np.array(col_names)[feats_25.get_support()]
xgb_params = {'objective': 'binary:logistic',
'eta': 0.05,
'max_depth': 5,
'seed': 42,
'subsample': 0.4,
'colsample_bytree': 0.7,
'min_child_weight': 1,
'gamma': 0.15}
num_round = 100
dtrain = xgb.DMatrix(xtrain, label = y)
booster = xgb.train(xgb_params, dtrain, num_round)
importance = booster.get_fscore()
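# get_fscore() maps booster feature names of the form 'f0', 'f1', ... to split
# counts, so int(imp[1:]) below recovers the column index into col_names.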
df_imp = pd.DataFrame(columns = ('feature', 'importance'))
sum_imp = 0
for imp in importance:
row = col_names[int(imp[1:])], importance[imp]
df_imp.loc[len(df_imp.index)] = row
sum_imp += importance[imp]
df_imp['relative_importance'] = df_imp.importance/float(sum_imp)
df_imp.drop(['importance'], axis = 1, inplace = True)
df_imp.sort_values(by=['relative_importance'], ascending = False, inplace = True) # DataFrame.sort() was removed in pandas 0.20
df_imp.index = df_imp.feature
pd.options.mode.chained_assignment=None
df_plot = df_imp[:20]
names = {'country':'Country', 'num': 'Numeric', 'timediff': 'Time difference',
'device':'Device', 'url':'URL', 'ip':'IP address'}
for pattern in sorted(names.keys()):
df_plot[names[pattern]] = 0
if pattern == 'ip':
rows = df_plot.feature.str.isnumeric()
elif pattern == 'timediff':
rows = df_plot.feature.str.contains(pattern) & ~df_plot.feature.str.contains('num')
else:
rows = df_plot.feature.str.contains(pattern)
df_plot.loc[rows, names[pattern]] = df_plot.loc[rows, 'relative_importance']
df_plot = df_plot.drop(['feature', 'relative_importance'], axis = 1)
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
matplotlib.style.use('ggplot')
matplotlib.rcParams.update({'font.size': 13})
matplotlib.rcParams['figure.figsize'] = 9, 6
ax = df_plot.plot(kind='barh', stacked = True)
ax.invert_yaxis()
plt.xlabel('Relative importance')
plt.ylabel('Feature');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Hello World! in other languages
Step3: The magic command (used in Jupyter notebooks)
Step4: Importing packages and libraries
Step6: Comments Are Marked by #
Step7: A Statement
Step8: It is also possible to continue expressions on the next line within parentheses, without using the \ marker
Step9: Sometimes it can be useful to put multiple statements on a single line.
Step10: A Note on the print() Function
|
<ASSISTANT_TASK:>
Python Code:
# make a list
students = ['boy', 'boy', 'girl', 'boy', 'girl', 'girl', 'boy', 'boy', 'girl', 'girl', 'boy', 'boy']
boys = 0; girls = 0
for s in students:
if s == 'boy':
boys = boys +1
else:
girls+=1
print("boys:", boys)
print("girls:", girls)
print("Hello", "World!")
print("Tips 3: Use \ to escape an characters like \"")
print("Tips 4: Use \\n \n to make a newline character")
print("""Tips 5: Use three \" to
make
multiple
line""")
%timeit sum(range(100000))
import numpy
numpy
import numpy as np
np
# Declare x=0
x=0
x += 2 # shorthand for x = x + 2
"""Multi-line strings can also be used as comments (a docstring)"""
x += 2
x = 1 + 2 + 3 + 4 + 5 \
+ 6 + 7 + 8
x = (1 + 2 + 3 + 4 +
5 + 6 + 7 + 8)
boys = 0; girls = 0
for s in students:
if s == 'boy':
boys = boys +1
else:
girls+=1
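# A note on the print() function: sep and end control the separator between
# arguments and the line terminator (illustrative values):
print("boys:", boys, sep=" = ")
print("girls:", girls, end="!\n")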
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Init
Step2: Using nestly
Step3: Plotting results
Step4: SANDBOX
|
<ASSISTANT_TASK:>
Python Code:
workDir = '/home/nick/notebook/SIPSim/dev/bac_genome1210/'
genomeDir = '/home/nick/notebook/SIPSim/dev/bac_genome1210/genomes/'
R_dir = '/home/nick/notebook/SIPSim/lib/R/'
import os # os.path.join is used below but was not imported
import glob
from os.path import abspath
import numpy as np # np.arange/np.linspace are used below but numpy was not imported
import nestly
import itertools
%load_ext rpy2.ipython
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
# building tree structure
nest = nestly.Nest()
## params that vary
BD_min = np.arange(1.67, 1.77, 0.02).tolist()
BD_max = [x + 0.04 for x in BD_min]
f = lambda x: {'BD_range': str(x[0]) + '-' + str(x[1]),
'BD_min':x[0],
'BD_max':x[1]}
BD_range = [f(x) for x in itertools.product(BD_min, BD_max)
if x[0] < x[1]]
nest.add('BD_range', BD_range, update=True)
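# Note: update=True merges the keys of each BD_range dict (BD_min, BD_max) into
# the nest's control dictionary, so the bash template below can reference them.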
## set params
nest.add('padj', [0.1], create_dir=False)
nest.add('log2', [0.25], create_dir=False)
nest.add('topTaxaToPlot', [100], create_dir=False)
## input/output files
### phyloseq & BD-shift files
nest.add('inFileDir', [os.path.join(workDir, 'validation')], create_dir=False)
nest.add('phyloseq', ['OTU_n2_abs1e10_sub20000'], create_dir=False)
nest.add('BD_shift', ['ampFrags_kde_dif_incorp_BD-shift.txt'], create_dir=False)
nest.add('R_dir', [R_dir], create_dir=False)
# building directory tree
buildDir = os.path.join(workDir, 'DESeq_BD-range')
nest.build(buildDir)
bashFile = os.path.join(workDir, 'SIPSimRun.sh')
%%writefile $bashFile
#!/bin/bash
# copy input files to prevent parallel file reading errors
cp {inFileDir}/{phyloseq}.physeq {phyloseq}.physeq
cp {inFileDir}/{BD_shift} {BD_shift}
#-- R analysis --#
export PATH={R_dir}:$PATH
## filtering phyloseq object to just taxa/samples of interest
phyloseq_edit.r \
{phyloseq}.physeq \
--BD_min {BD_min} \
--BD_max {BD_max} \
> {phyloseq}_filt.physeq
## making ordination
phyloseq_ordination.r \
{phyloseq}_filt.physeq \
{phyloseq}_bray-NMDS.pdf
## DESeq2
phyloseq_DESeq2.r \
{phyloseq}_filt.physeq \
--log2 {log2} \
--hypo greater \
> {phyloseq}_DESeq2
## Confusion matrix
DESeq2_confuseMtx.r \
{BD_shift} \
{phyloseq}_DESeq2 \
--padj {padj}
!chmod 775 $bashFile
!cd $workDir; \
nestrun -j 30 --template-file $bashFile -d DESeq_BD-range --log-file log.txt
# aggregating confusion matrix data
## table
!cd $workDir; \
nestagg delim \
-d DESeq_BD-range \
-k BD_min,BD_max \
-o ./DESeq_BD-range/DESeq2-cMtx_table.csv \
DESeq2-cMtx_table.csv
## overall
!cd $workDir; \
nestagg delim \
-d DESeq_BD-range \
-k BD_min,BD_max\
-o ./DESeq_BD-range/DESeq2-cMtx_overall.csv \
DESeq2-cMtx_overall.csv
## byClass
!cd $workDir; \
nestagg delim \
-d DESeq_BD-range \
-k BD_min,BD_max \
-o ./DESeq_BD-range/DESeq2-cMtx_byClass.csv \
DESeq2-cMtx_byClass.csv
%%R -i workDir -w 600 -h 600
setwd(workDir)
byClass = read.csv('./DESeq_BD-range/DESeq2-cMtx_byClass.csv')
byClass$byClass[is.na(byClass$byClass)] = 0
cat.str = function(x,y, col=':'){
x = as.character(x)
y = as.character(y)
z = paste(c(x,y),collapse=col)
return(z)
}
byClass = byClass %>%
mutate(BD_range = mapply(cat.str, BD_min, BD_max)) %>%
mutate(BD_min = as.character(BD_min),
BD_max = as.character(BD_max))
p = ggplot(byClass, aes(X, byClass, color=BD_max)) +
geom_point(alpha=0.5, size=3) +
labs(y='value') +
facet_grid(BD_min ~ .) +
theme_bw() +
theme(
text=element_text(size=16),
axis.text.x=element_text(angle=90, hjust=1, vjust=0.5),
axis.title.x=element_blank()
)
p
%%R -w 850 -h 300
col2keep = c('Balanced Accuracy', 'Sensitivity','Specificity')
byClass.f = byClass %>%
filter(X %in% col2keep) %>%
mutate(BD_min = as.numeric(BD_min),
BD_max = as.numeric(BD_max),
byClass = as.numeric(byClass),
byClass.inv = 1 - byClass)
byClass.fs = byClass.f %>%
group_by(X) %>%
summarize(byClass.max = max(byClass))
just.true = function(x){
if(x == TRUE){
return(1)
} else{
return(NA)
}
}
byClass.j = inner_join(byClass.f, byClass.fs, c('X' = 'X')) %>%
mutate(max_val = as.numeric(byClass == byClass.max),
byClass.txt = round(byClass, 2))
byClass.jf = byClass.j %>%
filter(max_val == 1)
x.breaks = unique(byClass.j$BD_min)
p = ggplot(byClass.j, aes(BD_min, BD_max, fill=byClass.inv)) +
geom_tile() +
geom_text(aes(label=byClass.txt), color=c('white'), size=4) +
geom_text(data=byClass.jf, aes(label=byClass.txt), color=c('red'), size=4) +
scale_x_continuous(breaks=x.breaks) +
labs(x='Minimum Buoyant Density', y='Maximum Buoyant Density') +
facet_grid(. ~ X) +
theme_bw() +
theme(
text=element_text(size=16),
axis.text.x=element_text(angle=90, hjust=1, vjust=0.5)
)
p
%%R -i workDir -w 600 -h 700
setwd(workDir)
byClass = read.csv('./DESeq_BD-range/DESeq2-cMtx_byClass.csv')
byClass$byClass[is.na(byClass$byClass)] = 0
#byClass$percIncorp = factor(byClass$percIncorp, levels=as.character(unique(sort(byClass$percIncorp))))
cat.str = function(x,y, col=':'){
x = as.character(x)
y = as.character(y)
z = paste(c(x,y),collapse=col)
return(z)
}
byClass = byClass %>%
mutate(BD_range = mapply(cat.str, BD_min, BD_max))
p = ggplot(byClass, aes(byClass, ymin=BD_min, ymax=BD_max, color=BD_range)) +
geom_linerange() +
labs(y='value') +
facet_grid(X ~ .) +
#coord_flip() +
theme_bw() +
theme(
text=element_text(size=16),
axis.text.x=element_text(angle=90, hjust=1, vjust=0.5),
axis.title.x=element_blank()
)
p
%%bash -s $workDir
cd $1
export PATH=/home/nick/notebook/SIPSim/lib/R/:$PATH
## symlinking OTU subsample phyloseq file
# done by NESTLY?
# files: FILE.physeq, ampFrags_kde_dif_incorp_BD-shift.txt
## filtering phyloseq object to just taxa/samples of interest
phyloseq_edit.r \
{otu_table}.physeq \
--BD_min {BD_min} --BD_max {BD_max} \
> {otu_table}_filt.physeq
# Chuck's method
## DESeq2
phyloseq_DESeq2.r \
{otu_table}_filt.physeq \
--log2 0.25 \
> {otu_table}_filt_DESeq2
## Confusion matrix
DESeq2_confuseMtx.r \
ampFrags_kde_dif_incorp_BD-shift.txt \
{otu_table}_filt_DESeq2 \
--padjBH 0.1
# altHypothesis = 'greater'
## DESeq2
phyloseq_DESeq2.r \
OTU_n2_abs1e10_sub20000_filt.physeq \
--log2 0.25 \
--hypo greater \
> OTU_n2_abs1e10_sub20000_DESeq2
## Confusion matrix
DESeq2_confuseMtx.r \
ampFrags_kde_dif_incorp_BD-shift.txt \
OTU_n2_abs1e10_sub20000_DESeq2 \
--padj 0.1
%%writefile $bashFile
#!/bin/bash
#-- R analysis --#
export PATH={R_dir}:$PATH
# plotting taxon abundances
OTU_taxonAbund.r \
OTU_n2_abs{abs}_sub{subsample}.txt \
-r {topTaxaToPlot} \
-o OTU_n2_abs{abs}_sub{subsample}
# running DeSeq2 and making confusion matrix on predicting incorporators
## making phyloseq object from OTU table
phyloseq_make.r \
OTU_n2_abs{abs}_sub{subsample}_w.txt \
-s OTU_n2_abs{abs}_sub{subsample}_meta.txt \
> OTU_n2_abs{abs}_sub{subsample}.physeq
## filtering phyloseq object to just taxa/samples of interest
phyloseq_edit.r \
OTU_n2_abs{abs}_sub{subsample}.physeq \
--BD_min {BD_min} --BD_max {BD_max} \
> OTU_n2_abs{abs}_sub{subsample}_filt.physeq
## making ordination
phyloseq_ordination.r \
OTU_n2_abs{abs}_sub{subsample}_filt.physeq \
OTU_n2_abs{abs}_sub{subsample}_bray-NMDS.pdf
## DESeq2
phyloseq_DESeq2.r \
OTU_n2_abs{abs}_sub{subsample}_filt.physeq \
> OTU_n2_abs{abs}_sub{subsample}_DESeq2
## Confusion matrix
DESeq2_confuseMtx.r \
{fileName}_kde_dif_incorp_BD-shift.txt \
OTU_n2_abs{abs}_sub{subsample}_DESeq2 \
--padj {padj} --log2 {log2}
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Closures
Step2: Decorators
Step3: Some practical applications of decorators
Step4: Chaining of Decorators
Step5: Let's see if switching the order of decorators helps
Step6: Decorators with Arguments
|
<ASSISTANT_TASK:>
Python Code:
def square(x):
return x*x
def cube(x):
return x*x*x
# This is custom-built map function which is going to behave like in-bulit map function.
def my_map(func, arg_list):
result = []
for i in arg_list:
result.append(func(i))
return result
squares = my_map(square, [1,2,3,4])
print(squares)
cubes = my_map(cube, [1,2,3,4])
print(cubes)
def html_tag(tag):
def wrap_text(msg):
print('<{0}>{1}</{0}>'.format(tag, msg))
return wrap_text
print_h1 = html_tag('h1')
print_h1('Test Headline')
print_h1('Another Headline')
print_p = html_tag('p')
print_p('Test Paragraph!')
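# wrap_text is a closure: it keeps a reference to `tag` in its __closure__
# cells even after html_tag() has returned (a quick illustration):
print(print_h1.__closure__[0].cell_contents) # -> h1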
def decorator_function(original_function):
def wrapper_function():
print("wrapper executed this before {}".format(original_function.__name__))
return original_function()
return wrapper_function
def display():
print("display function ran!")
decorated_display = decorator_function(display)
decorated_display()
# The above code is functionally the same as below:
def decorator_function(original_function):
def wrapper_function():
print("wrapper executed this before {}".format(original_function.__name__))
return original_function()
return wrapper_function
@decorator_function
def display():
print("display function ran!")
display()
# Lets make our decorator function to work with functions with different number of arguments
# For this we use, *args (arguments) and **kwargs (keyword arguments).
# args and kwargs are convention, you can use any other name you want like *myargs, **yourkeywordargs
def decorator_function(original_function):
def wrapper_function(*args, **kwargs):
print("wrapper executed this before {}".format(original_function.__name__))
return original_function(*args, **kwargs)
return wrapper_function
@decorator_function
def display():
print("display function ran!")
@decorator_function
def display_info(name, age):
print('display_info ran with arguments ({}, {})'.format(name, age))
display()
display_info('John', 25)
# Now let's use a class as a decorator instead of a function
class decorator_class(object):
def __init__(self, original_function):
self.original_function = original_function
def __call__(self, *args, **kwargs): # This method is going to behave just like our wrapper function behaved
print('call method executed this before {}'.format(self.original_function.__name__))
return self.original_function(*args, **kwargs)
@decorator_class
def display():
print("display function ran!")
@decorator_class
def display_info(name, age):
print('display_info ran with arguments ({}, {})'.format(name, age))
display()
display_info('John', 25)
#Let's say we want to keep track of how many times a specific function was run and what arguments were passed to that function
def my_logger(orig_func):
import logging
logging.basicConfig(filename='{}.log'.format(orig_func.__name__), level=logging.INFO) # Generates a log file in the current directory named after the original function
def wrapper(*args, **kwargs):
logging.info(
'Ran with args: {}, and kwargs: {}'.format(args, kwargs))
return orig_func(*args, **kwargs)
return wrapper
@my_logger
def display_info(name, age):
print('display_info ran with arguments ({}, {})'.format(name, age))
display_info('John', 25)
# Note:
# Since display_info is decorated with my_logger, the above call is equivalent to:
# decorated_display = my_logger(display_info)
# decorated_display('John', 25)
def my_timer(orig_func):
import time
def wrapper(*args, **kwargs):
t1 = time.time()
result = orig_func(*args, **kwargs)
t2 = time.time() - t1
print('{} ran in: {} sec'.format(orig_func.__name__, t2))
return result
return wrapper
@my_timer
def display_info(name, age):
print('display_info ran with arguments ({}, {})'.format(name, age))
display_info('John', 25)
# Note:
# Since display_info is decorated with my_timer, the above call is equivalent to:
# decorated_display = my_timer(display_info)
# decorated_display('John', 25)
# (or, more compactly)
# my_timer(display_info)('John', 25)
@my_timer
@my_logger
def display_info(name, age):
print('display_info ran with arguments ({}, {})'.format(name, age))
display_info('John', 25) # This is equivalent to my_timer(my_logger(display_info))('John', 25)
# The above code will give us some unexpected results.
# Instead of printing "display_info ran in: ---- sec" it prints "wrapper ran in: ---- sec"
@my_logger
@my_timer
def display_info(name, age):
print('display_info ran with arguments ({}, {})'.format(name, age))
display_info('John', 25) # This is equivalent to my_logger(my_timer(display_info))('John', 25)
# Now this creates wrapper.log instead of the display_info.log we expected.
# To understand why wrapper.log is generated instead of display_info.log let's look at the following code.
def display_info(name, age):
print('display_info ran with arguments ({}, {})'.format(name, age))
display_info = my_timer(display_info)
print(display_info.__name__)
# So how do we solve this problem
# The answer is by using the wraps decorator
# For this we need to import wraps from functools module
from functools import wraps
def my_logger(orig_func):
import logging
logging.basicConfig(filename='{}.log'.format(orig_func.__name__), level=logging.INFO) # Generates a log file in the current directory named after the original function
@wraps(orig_func)
def wrapper(*args, **kwargs):
logging.info(
'Ran with args: {}, and kwargs: {}'.format(args, kwargs))
return orig_func(*args, **kwargs)
return wrapper
def my_timer(orig_func):
import time
@wraps(orig_func)
def wrapper(*args, **kwargs):
t1 = time.time()
result = orig_func(*args, **kwargs)
t2 = time.time() - t1
print('{} ran in: {} sec'.format(orig_func.__name__, t2))
return result
return wrapper
@my_logger
@my_timer
def display_info(name, age):
print('display_info ran with arguments ({}, {})'.format(name, age))
display_info('Hank', 22)
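# With functools.wraps applied, the original function's metadata survives both
# decorators (a quick check):
print(display_info.__name__) # -> display_info, not wrapper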
def prefix_decorator(prefix):
def decorator_function(original_function):
def wrapper_function(*args, **kwargs):
print(prefix, "Executed before {}".format(original_function.__name__))
result = original_function(*args, **kwargs)
print(prefix, "Executed after {}".format(original_function.__name__), '\n')
return result
return wrapper_function
return decorator_function
@prefix_decorator('LOG:')
def display_info(name, age):
print('display_info ran with arguments ({}, {})'.format(name, age))
display_info('John', 25)
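# prefix_decorator is a decorator *factory*: calling it returns the actual
# decorator, so @prefix_decorator('LOG:') above is equivalent to
# display_info = prefix_decorator('LOG:')(display_info)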
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Inputs
Step2: Computation
Step3: Plot of the reflectance spectrum at $\lambda$ = 633 nm
Step4: Plot of the local fields at $\lambda=633$ nm at the SPP coupling angle
|
<ASSISTANT_TASK:>
Python Code:
# libraries
import numpy as np # numpy
import sys # sys to add py_matrix to the path
# matplotlib inline plots
import matplotlib.pylab as plt
%matplotlib inline
# adding py_matrix parent folder to python path
sys.path.append('../../')
import py_matrix as pm # importing py_matrix
# useful parameters
f_size=20;
# building the optical constant database, point the folder below to the "materials" py_matrix folder
eps_db_out=pm.mat.generate_eps_db('../materials/',ext='*.edb')
eps_files,eps_names,eps_db=eps_db_out['eps_files'],eps_db_out['eps_names'],eps_db_out['eps_db']
# multilayer and computation inputs
stack=['e_bk7','e_au','e_vacuum']; # materials composing the stack, as taken from eps_db
d_list=[0.0,55.0,0.0]; # multilayer thicknesses: incident medium and substrate have zero thickness
wl_0=633; # incident wavelenght in nm
# polar angle in radians
theta_min=40*np.pi/1.8e2;
theta_max=50*np.pi/1.8e2;
theta_step=500;
v_theta=np.linspace(theta_min,theta_max,theta_step)
# azimuthal angle radians
phi_0=0.0
# optical constant tensor
m_eps=np.zeros((len(stack),3,3),dtype=np.complex128);
e_list=pm.mat.db_to_eps(wl_0,eps_db,stack) # retrieving optical constants at wl_0 from the database
m_eps[:,0,0]=e_list # filling dielectric tensor diagonal
m_eps[:,1,1]=e_list
m_eps[:,2,2]=e_list
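# All three media are isotropic, so the permittivity tensor is diagonal with
# equal entries; only the (0,0), (1,1) and (2,2) components need filling.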
# initializing reflectance output vector
v_r_p=np.zeros_like(v_theta)
# angle loop
for i_t,t in enumerate(v_theta):
#------Computing------
m_r_ps=pm.core.rt(wl_0,t,phi_0,m_eps,d_list)['m_r_ps'] # reflection matrix
v_r_p[i_t]=pm.utils.R_ps_rl(m_r_ps)['R_p'] # getting p-polarized reflectance
# reflectivity plots
plt.figure(1,figsize=(15,10))
plt.plot(v_theta*1.8e2/np.pi,v_r_p,'k',linewidth=2.0)
# labels
plt.xlabel(r'$\Theta^{\circ}$',fontsize=f_size+10)
plt.ylabel('R',fontsize=f_size+10)
# ticks
plt.xticks(fontsize=f_size)
plt.yticks(fontsize=f_size)
# grids
plt.grid()
#legends
plt.legend(['55 nm Au film reflectance at 633 nm'],loc='upper right',fontsize=f_size,frameon=False);
# fields components and wavevectors
theta_0 = v_theta[v_r_p.argmin()] # getting the plasmon coupling angle
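# Physical check (not computed by py_matrix here): at the coupling angle the
# in-plane wavevector matches the SPP dispersion of the Au/vacuum interface,
# k0*n_prism*sin(theta_spp) ~ k0*sqrt(eps_m*eps_d/(eps_m + eps_d))
# (Kretschmann configuration).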
out = pm.core.rt(wl_0,theta_0,phi_0,m_eps,d_list) # reflection matrix
m_Kn = out['m_Kn']
m_Hn = out['m_Hn']
m_En = out['m_En']
# computing the field, absorbed power and Poynting vector
v_z = np.linspace(-100,500,1000) # z probing
v_field = np.array([np.abs(pm.utils.field(m_Kn,m_En,m_Hn,e_list,d_list,0.0,0.0,z,'TM')['H'][1]) for z in v_z])
v_abs = np.array([np.abs(pm.utils.field(m_Kn,m_En,m_Hn,e_list,d_list,0.0,0.0,z,'TM')['abs']) for z in v_z])
v_S = np.array([np.abs(pm.utils.field(m_Kn,m_En,m_Hn,e_list,d_list,0.0,0.0,z,'TM')['S']) for z in v_z])
# field plots
plt.figure(figsize=(9,7.5))
# transverse magnetic field modulues
plt.subplot(2,2,1)
# plot
plt.plot(v_z,v_field,'k',linewidth=3.0)
plt.axvline(d_list[0],color='gray',linestyle='dashed',linewidth=2.0)
plt.axvline(d_list[1],color='gray',linestyle='dashed',linewidth=2.0)
# labels
plt.ylabel(r'$|H_{\mathrm{y}}|$',fontsize=f_size+10)
# ticks
plt.xticks([0,200,400],fontsize=f_size)
plt.yticks([0,2,4,6,8],fontsize=f_size)
# grids
plt.grid(color='gray')
# local absorbed power
plt.subplot(2,2,2)
# plot
plt.plot(v_z,v_abs,'k',linewidth=3.0)
plt.axvline(d_list[0],color='gray',linestyle='dashed',linewidth=2.0)
plt.axvline(d_list[1],color='gray',linestyle='dashed',linewidth=2.0)
# labels
plt.ylabel(r'Abs power (a.u.)',fontsize=f_size+5)
# ticks
plt.xticks([0,200,400],fontsize=f_size)
plt.yticks(fontsize=f_size)
# grids
plt.grid(color='gray')
# Sx component of the Poynting vector
plt.subplot(2,2,3)
# plot
plt.plot(v_z,v_S[:,0],'k',linewidth=3.0)
plt.axvline(d_list[0],color='gray',linestyle='dashed',linewidth=2.0)
plt.axvline(d_list[1],color='gray',linestyle='dashed',linewidth=2.0)
# labels
plt.xlabel(r'z (nm)',fontsize=f_size+10)
plt.ylabel(r'$S_{\mathrm{x}}$',fontsize=f_size+5)
# ticks
plt.xticks([0,200,400],fontsize=f_size)
plt.yticks(fontsize=f_size)
# grids
plt.grid(color='gray')
# Sz component of the Poynting vector
plt.subplot(2,2,4)
# plot
plt.plot(v_z,v_S[:,2],'k',linewidth=3.0)
plt.axvline(d_list[0],color='gray',linestyle='dashed',linewidth=2.0)
plt.axvline(d_list[1],color='gray',linestyle='dashed',linewidth=2.0)
# labels
plt.xlabel(r'z (nm)',fontsize=f_size+10)
plt.ylabel(r'$S_{\mathrm{z}}$',fontsize=f_size+5)
# ticks
plt.xticks([0,200,400],fontsize=f_size)
plt.yticks(fontsize=f_size)
# grids
plt.grid(color='gray')
plt.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncar', 'sandbox-2', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
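# e.g. DOC.set_author("Jane Doe", "jane.doe@example.org") # illustrative placeholder only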
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Parametrized Model
Step2: Search for the Critical Boron Concentration
Step3: Finally, the openmc.search_for_keff function also provided us with lists of the guesses and corresponding keff values generated during the search process with OpenMC. Let's use that information to make a quick plot of the value of keff versus the boron concentration.
|
<ASSISTANT_TASK:>
Python Code:
# Initialize third-party libraries and the OpenMC Python API
import matplotlib.pyplot as plt
import numpy as np
import openmc
import openmc.model
%matplotlib inline
# Create the model. `ppm_Boron` will be the parametric variable.
def build_model(ppm_Boron):
# Create the pin materials
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_element('U', 1., enrichment=1.6)
fuel.add_element('O', 2.)
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_element('Zr', 1.)
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.741)
water.add_element('H', 2.)
water.add_element('O', 1.)
# Include the amount of boron in the water based on the ppm,
# neglecting the other constituents of boric acid
water.add_element('B', ppm_Boron * 1e-6)
# Instantiate a Materials object
materials = openmc.Materials([fuel, zircaloy, water])
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(r=0.39218)
clad_outer_radius = openmc.ZCylinder(r=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')
max_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')
min_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')
max_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius & (+min_x & -max_x & +min_y & -max_y)
# Create root Universe
root_universe = openmc.Universe(name='root universe')
root_universe.add_cells([fuel_cell, clad_cell, moderator_cell])
# Create Geometry and set root universe
geometry = openmc.Geometry(root_universe)
# Instantiate a Settings object
settings = openmc.Settings()
# Set simulation parameters
settings.batches = 300
settings.inactive = 20
settings.particles = 1000
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-0.63, -0.63, -10, 0.63, 0.63, 10.]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
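# only_fissionable=True tells OpenMC to reject sampled source sites that fall outside fissionable material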
settings.source = openmc.source.Source(space=uniform_dist)
# We don't need a tallies file, so don't waste the disk input/output time
settings.output = {'tallies': False}
model = openmc.model.Model(geometry, materials, settings)
return model
# Perform the search
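# search_for_keff repeatedly builds and runs the model, adjusting the boron
# concentration within the bracket until keff converges to 1.0; tol is the
# tolerance passed to the underlying search routine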
crit_ppm, guesses, keffs = openmc.search_for_keff(build_model, bracket=[1000., 2500.],
tol=1e-2, print_iterations=True)
print('Critical Boron Concentration: {:4.0f} ppm'.format(crit_ppm))
plt.figure(figsize=(8, 4.5))
plt.title('Eigenvalue versus Boron Concentration')
# Create a scatter plot using the mean value of keff
plt.scatter(guesses, [keffs[i].nominal_value for i in range(len(keffs))])
plt.xlabel('Boron Concentration [ppm]')
plt.ylabel('Eigenvalue')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Load into individual parcel tables in chunks
Step3: Validate loading
Step15: Consolidate into single table
|
<ASSISTANT_TASK:>
Python Code:
import psycopg2 as pg
import pandas as pd
import os
conn = pg.connect('service=parcels')
conn_str = os.environ.get('PARCELS_CONNECTION')
def chunks(l, n):
"""Yield successive n-sized chunks from l."""
for i in range(0, len(l), n):
yield l[i:i + n]
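# e.g. chunks([1, 2, 3, 4, 5], 2) yields [1, 2], [3, 4], [5]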
with conn.cursor() as cur:
cur.execute("select table_name from information_schema.tables where table_schema = 'core_logic_2018'")
res = cur.fetchall()
tables = [x[0] for x in res]
res = []
with conn.cursor() as cur:
for t in tables:
cur.execute("select count(1) from core_logic_2018.{}".format(t))
res.append({'table': t, 'count': cur.fetchone()[0]})
to_do = filter(lambda x: x['count'] == 0, res)
for table_list in chunks(map(lambda x: x['table'], to_do), 50):
tables = ' '.join(table_list)
print "Loading {}".format(tables)
!GDAL_MAX_DATASET_POOL_SIZE=100 ogr2ogr -f "PostgreSQL" PG:"$conn_str" ut_parcel_premium.gdb/ $tables -progress -lco SCHEMA=core_logic_2018 -lco OVERWRITE=yes --config PG_USE_COPY YES
df = pd.read_csv("./parcel-meta.csv")
for r in res:
row = df[df.Filename == r['table']]
if int(row['Records']) != r['count']:
display('Mismatch on parcel: {} csv count {} != db count of {}'.format(row['Filename'], row['Records'], r['count']))
with conn.cursor() as c:
c.execute('CREATE TABLE public.parcels_2018 AS TABLE public.parcels WITH NO DATA;')
conn.commit()
sql = """
insert into public.parcels_2018 (ogc_fid, wkb_geometry, parcel_id, state_code, cnty_code, apn,
apn2, addr, city, state, zip, plus, std_addr, std_city, std_state,
std_zip, std_plus, fips_code, unfrm_apn, apn_seq_no, frm_apn,
orig_apn, acct_no, census_tr, block_nbr, lot_nbr, land_use, m_home_ind,
prop_ind, own_cp_ind, tot_val, lan_val, imp_val, tot_val_cd,
lan_val_cd, assd_val, assd_lan, assd_imp, mkt_val, mkt_lan, mkt_imp,
appr_val, appr_lan, appr_imp, tax_amt, tax_yr, assd_yr, ubld_sq_ft,
bld_sq_ft, liv_sq_ft, gr_sq_ft, yr_blt, eff_yr_blt, bedrooms,
rooms, bld_code, bld_imp_cd, condition, constr_typ, ext_walls,
quality, story_nbr, bld_units, units_nbr)
select {}, shape, parcel_id, state_code, cnty_code, apn,
apn2, addr, city, state, zip, plus, std_addr, std_city, std_state,
std_zip, std_plus, fips_code, unfrm_apn, apn_seq_no, frm_apn,
orig_apn, acct_no, census_tr, block_nbr, lot_nbr, land_use, m_home_ind,
prop_ind, own_cp_ind, tot_val, lan_val, imp_val, tot_val_cd,
lan_val_cd, assd_val, assd_lan, assd_imp, mkt_val, mkt_lan, mkt_imp,
appr_val, appr_lan, appr_imp, tax_amt, tax_yr, assd_yr, ubld_sq_ft,
bld_sq_ft, liv_sq_ft, gr_sq_ft, yr_blt, eff_yr_blt, bedrooms,
rooms, bld_code, bld_imp_cd, condition, constr_typ, ext_walls,
quality, story_nbr, bld_units, units_nbr
from core_logic_2018.{}
"""
try:
for r in res[50:]:
with conn.cursor() as c:
c.execute("select count(1) from information_schema.columns where table_name = %(table)s and table_schema = 'core_logic_2018' and column_name = 'objectid'",
r)
id_col = 'objectid' if c.fetchone()[0] == 1 else 'ogc_fid'
c.execute(sql.format(id_col, r['table']))
conn.commit()
except Exception as e:
print(e)
conn.rollback()
with conn.cursor() as c:
c.execute('CREATE INDEX ON public.parcels_2018 USING btree (census_tr COLLATE pg_catalog."default");')
c.execute('CREATE INDEX ON public.parcels_2018 USING btree ("substring"(census_tr::text, 0, 7) COLLATE pg_catalog."default");')
c.execute('CREATE INDEX ON public.parcels_2018 USING btree (city COLLATE pg_catalog."default", state COLLATE pg_catalog."default");')
c.execute('CREATE INDEX ON public.parcels_2018 USING btree (fips_code COLLATE pg_catalog."default");')
c.execute('CREATE INDEX ON public.parcels_2018 USING btree (land_use COLLATE pg_catalog."default");')
c.execute('CREATE UNIQUE INDEX ON public.parcels_2018 USING btree (parcel_id);')
c.execute('CREATE INDEX ON public.parcels_2018 USING gist (wkb_geometry);')
c.execute('CREATE INDEX ON public.parcels_2018 USING btree (state_code COLLATE pg_catalog."default", "substring"(census_tr::text, 0, 7) COLLATE pg_catalog."default");')
c.execute('CREATE INDEX ON public.parcels_2018 USING btree (state COLLATE pg_catalog."default");')
c.execute('CREATE INDEX ON public.parcels_2018 USING btree (story_nbr);')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: However, although this sampling step exists because the original full-size osm file is big, I didn't use the sample file in the later parts. For the whole wrangling, auditing, and cleaning process, I used the full original osm file
Step2: Find the different "k" attributes of tags and count them
Step3: Audit the street types and find the incorrect ones
Step4: Mapping dictionaries used to update and clean the amendable data
Step5: Find all types of postal codes
Step6: Find all types of city names
Step9: Audit, correct, and write data from XML into CSV files
Step10: Verify the results of the audit and update
Step11: Data overview part
|
<ASSISTANT_TASK:>
Python Code:
import xml.etree.ElementTree as ET # Use cElementTree or lxml if too slow
OSM_FILE = "/Users/yangrenqin/udacity/P3/san-francisco.osm" # Replace this with your osm file
SAMPLE_FILE = "/Users/yangrenqin/udacity/P3/sample1.osm"
k = 30 # Parameter: take every k-th top level element
def get_element(osm_file, tags=('node', 'way', 'relation')):
context = iter(ET.iterparse(osm_file, events=('start', 'end')))
_, root = next(context)
for event, elem in context:
if event == 'end' and elem.tag in tags:
yield elem
root.clear()
with open(SAMPLE_FILE, 'w') as output:
output.write('<?xml version="1.0" encoding="UTF-8"?>\n')
output.write('<osm>\n ')
# Write every kth top level element
for i, element in enumerate(get_element(OSM_FILE)):
if i % k == 0:
output.write(ET.tostring(element, encoding='unicode'))
output.write('</osm>')
from collections import defaultdict
filename='/Users/yangrenqin/udacity/P3/san-francisco.osm'
def count_tags(filename):
tags=defaultdict(int)
for _,elem in (ET.iterparse(filename)):
tags[elem.tag] += 1
return tags
count_tags(filename)
import re
filename='/Users/yangrenqin/udacity/P3/san-francisco.osm'
lower = re.compile(r'^([a-z]|_)*$')
lower_colon = re.compile(r'^([a-z]|_)*:([a-z]|_)*$')
lower_colons=re.compile(r'^([a-z]|_)*(:([a-z]|_)*)+$')
problemchars = re.compile(r'[=\+/&<>;\'"\?%#$@\,\. \t\r\n]')
def key_type(element, keys,other):
if element.tag == "tag":
a=lower.search(element.attrib['k'])
b=lower_colon.search(element.attrib['k'])
c=problemchars.search(element.attrib['k'])
d=lower_colons.search(element.attrib['k'])
if a:
keys['lower'] += 1
elif b:
keys['lower_colon'] += 1
elif c:
keys['problemchars'] += 1
elif d:
keys['lower_colons'] += 1
else:
keys['other'] += 1
other.append(element.attrib['k'])
return keys,other
def process_map(filename):
keys = {"lower": 0, "lower_colon": 0, "lower_colons":0, "problemchars": 0, "other": 0}
other=[]
for _, element in ET.iterparse(filename):
keys,other = key_type(element, keys,other)
return keys,other
keys,others=process_map(filename)
print(keys)
import xml.etree.ElementTree as ET
from collections import defaultdict
import re
filename='/Users/yangrenqin/udacity/P3/san-francisco.osm'
expected = ["Street", "Avenue", "Boulevard", "Drive", "Court", "Place", "Square", "Lane", "Road",
"Trail", "Parkway", "Commons", "Way", "Highway", "Path", "Terrace", "Alley", "Center",
"Circle", "Plaza", "Real"]
street_type_re = re.compile(r'\b\S+\.?$', re.IGNORECASE)
street_types=defaultdict(set)
def audit_street_type(street_types, street_name):
m = street_type_re.search(street_name)
if m:
street_type = m.group()
if street_type not in expected:
street_types[street_type].add(street_name)
def is_street_name(elem):
return (elem.attrib['k'] == "addr:street")
def audit(osmfile):
osm_file = open(osmfile, "r")
for event, elem in ET.iterparse(osm_file, events=("start",)):
if elem.tag == "node" or elem.tag == "way":
for tag in elem.iter("tag"):
if is_street_name(tag):
audit_street_type(street_types, tag.attrib['v'])
osm_file.close()
return street_types
error_street_type=audit(filename)
error_street_type
street_mapping = { "St": "Street",
"St.": "Street",
"Steet": "Street",
"st": "Street",
"street": "Street",
"Ave": "Avenue",
"Ave.": "Avenue",
"ave": "Avenue",
"avenue": "Avenue",
"Rd.": "Road",
"Rd": "Road",
"Blvd": "Boulevard",
"Blvd,": "Boulevard",
"Blvd.": "Boulevard",
"Boulavard": "Boulevard",
"Boulvard": "Boulevard",
"Dr": "Drive",
"Dr.": "Drive",
"Pl": "Plaza",
"Plz": "Plaza",
"square": "Square"
}
postcode_mapping={"CA 94030": "94030",
"CA 94133": "94133",
"CA 94544": "94544",
"CA 94103": "94103",
"CA:94103": "94103"
}
error_postcode={'1087', '515', 'CA'}
cityname_mapping={"Berkeley, CA": "Berkeley",
"Fremont ": "Fremont",
"Oakland, CA": "Oakland",
"Oakland, Ca": "Oakland",
"San Francisco, CA": "San Francisco",
"San Francisco, CA 94102": "San Francisco",
"San Francicsco": "San Francisco",
"San Fransisco": "San Francisco",
"San Francsico": "San Francisco",
"Artherton": "Atherton"
}
error_cityname={'155', '157'}
filename='/Users/yangrenqin/udacity/P3/san-francisco.osm'
def is_street_postcode(elem):
return (elem.attrib['k'] == "addr:postcode")
def audit(osmfile):
osm_file = open(osmfile, "r")
postcode_types = set()
for event, elem in ET.iterparse(osm_file, events=("start",)):
if elem.tag == "node" or elem.tag == "way":
for tag in elem.iter("tag"):
if is_street_postcode(tag):
postcode_types.add(tag.attrib['v'])
osm_file.close()
return postcode_types
postcode=audit(filename)
postcode
filename='/Users/yangrenqin/udacity/P3/san-francisco.osm'
def is_city(elem):
return (elem.attrib['k'] == "addr:city")
def audit(osmfile):
osm_file = open(osmfile, "r")
city_types = set()
for event, elem in ET.iterparse(osm_file, events=("start",)):
if elem.tag == "node" or elem.tag == "way":
for tag in elem.iter("tag"):
if is_city(tag):
city_types.add(tag.attrib['v'])
osm_file.close()
return city_types
cityname=audit(filename)
cityname
import csv
import codecs
import re
import xml.etree.cElementTree as ET
OSM_PATH = "/Users/yangrenqin/udacity/P3/san-francisco.osm"
NODES_PATH = "/Users/yangrenqin/udacity/P3/nodes.csv"
NODE_TAGS_PATH = "/Users/yangrenqin/udacity/P3/nodes_tags.csv"
WAYS_PATH = "/Users/yangrenqin/udacity/P3/ways.csv"
WAY_NODES_PATH = "/Users/yangrenqin/udacity/P3/ways_nodes.csv"
WAY_TAGS_PATH = "/Users/yangrenqin/udacity/P3/ways_tags.csv"
LOWER_COLON = re.compile(r'^([a-z]|_)+:([a-z]|_)+')
PROBLEMCHARS = re.compile(r'[=\+/&<>;\'"\?%#$@\,\. \t\r\n]')
street_type_re = re.compile(r'\b\S+\.?$', re.IGNORECASE)
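# LOWER_COLON matches keys with a single colon such as "addr:street";
# PROBLEMCHARS flags keys containing problematic characters;
# street_type_re captures the last word of a street name (e.g. "St" in "Main St")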
# Make sure the fields order in the csvs matches the column order in the sql table schema
NODE_FIELDS = ['id', 'lat', 'lon', 'user', 'uid', 'version', 'changeset', 'timestamp']
NODE_TAGS_FIELDS = ['id', 'key', 'value', 'type']
WAY_FIELDS = ['id', 'user', 'uid', 'version', 'changeset', 'timestamp']
WAY_TAGS_FIELDS = ['id', 'key', 'value', 'type']
WAY_NODES_FIELDS = ['id', 'node_id', 'position']
def capitalize(a):
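# Title-case each word of a name, e.g. "san francisco" -> "San Francisco"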
b=''
for i in a.split(' '):
i=i.capitalize()+' '
b=b+i
a=b.strip()
return a
def shape_element(element, node_attr_fields=NODE_FIELDS, way_attr_fields=WAY_FIELDS,
problem_chars=PROBLEMCHARS, default_tag_type='regular'):
node_attribs = {}
way_attribs = {}
way_nodes = []
tags = [] # Handle secondary tags the same way for both node and way elements
if element.tag == 'node':
for i in element.attrib:
if i in node_attr_fields:
node_attribs[i]=element.attrib[i]
if element.getchildren() == []:
pass
else:
for tag in element.iter('tag'):
node_tags={}
k=tag.attrib['k']
if PROBLEMCHARS.search(k):
continue
elif LOWER_COLON.search(k):
if LOWER_COLON.search(k).group() == k:
if k == "addr:street":
m = street_type_re.search(tag.attrib['v'])
if m:
street_type = m.group()
if street_type in error_street_type:
if street_type in street_mapping:
tag.attrib['v']=tag.attrib['v'].replace(street_type,street_mapping[street_type])
else:
continue
node_tags['key']=k.split(':')[1]
node_tags['type']=k.split(':')[0]
node_tags['id']=element.attrib['id']
node_tags['value']=tag.attrib['v']
tags.append(node_tags)
else:
continue
if k == "addr:postcode":
if tag.attrib['v'] in error_postcode:
continue
else:
if tag.attrib['v'] in postcode_mapping:
tag.attrib['v']=postcode_mapping[tag.attrib['v']]
node_tags['key']=k.split(':')[1]
node_tags['type']=k.split(':')[0]
node_tags['id']=element.attrib['id']
node_tags['value']=tag.attrib['v']
tags.append(node_tags)
if k == "addr:city":
if tag.attrib['v'] in error_cityname:
continue
else:
if tag.attrib['v'] in cityname_mapping:
tag.attrib['v']=cityname_mapping[tag.attrib['v']]
node_tags['key']=k.split(':')[1]
node_tags['type']=k.split(':')[0]
node_tags['id']=element.attrib['id']
node_tags['value']=capitalize(tag.attrib['v'])
tags.append(node_tags)
else:
node_tags['key']=k.partition(':')[-1]
node_tags['type']=k.partition(':')[0]
node_tags['id']=element.attrib['id']
node_tags['value']=tag.attrib['v']
tags.append(node_tags)
else:
node_tags['id']=element.attrib['id']
node_tags['value']=tag.attrib['v']
node_tags['key']=k
node_tags['type']=default_tag_type
tags.append(node_tags)
if element.tag == 'way':
for i in element.attrib:
if i in way_attr_fields:
way_attribs[i]=element.attrib[i]
if element.getchildren() == []:
pass
else:
for tag in element.iter('tag'):
way_tags={}
k=tag.attrib['k']
if PROBLEMCHARS.search(k):
continue
elif LOWER_COLON.search(k):
if LOWER_COLON.search(k).group() == k:
if k == "addr:street":
m = street_type_re.search(tag.attrib['v'])
if m:
street_type = m.group()
if street_type in error_street_type:
if street_type in street_mapping:
tag.attrib['v']=tag.attrib['v'].replace(street_type,street_mapping[street_type])
else:
continue
way_tags['key']=k.split(':')[1]
way_tags['type']=k.split(':')[0]
way_tags['id']=element.attrib['id']
way_tags['value']=tag.attrib['v']
tags.append(way_tags)
else:
continue
if k == "addr:postcode":
if tag.attrib['v'] in error_postcode:
continue
else:
if tag.attrib['v'] in postcode_mapping:
tag.attrib['v']=postcode_mapping[tag.attrib['v']]
way_tags['key']=k.split(':')[1]
way_tags['type']=k.split(':')[0]
way_tags['id']=element.attrib['id']
way_tags['value']=tag.attrib['v']
tags.append(way_tags)
if k == "addr:city":
if tag.attrib['v'] in error_cityname:
continue
else:
if tag.attrib['v'] in cityname_mapping:
tag.attrib['v']=cityname_mapping[tag.attrib['v']]
way_tags['key']=k.split(':')[1]
way_tags['type']=k.split(':')[0]
way_tags['id']=element.attrib['id']
way_tags['value']=capitalize(tag.attrib['v'])
tags.append(way_tags)
else:
way_tags['key']=k.partition(':')[-1]
way_tags['type']=k.partition(':')[0]
way_tags['id']=element.attrib['id']
way_tags['value']=tag.attrib['v']
tags.append(way_tags)
else:
way_tags['id']=element.attrib['id']
way_tags['value']=tag.attrib['v']
way_tags['key']=k
way_tags['type']=default_tag_type
tags.append(way_tags)
for i,nd in enumerate(element.iter('nd')):
way_nd={}
way_nd['id']=element.attrib['id']
way_nd['node_id']=nd.attrib['ref']
way_nd['position']=i
way_nodes.append(way_nd)
if element.tag == 'node':
if element.getchildren() == []:
return {'node': node_attribs}
else:
return {'node': node_attribs, 'node_tags': tags}
elif element.tag == 'way':
if element.getchildren() == []:
return {'way': way_attribs}
else:
return {'way': way_attribs, 'way_nodes': way_nodes, 'way_tags': tags}
def get_element(osm_file, tags=('node', 'way')):
"""Yield element if it is the right type of tag"""
context = ET.iterparse(osm_file, events=('start', 'end'))
_, root = next(context)
for event, elem in context:
if event == 'end' and elem.tag in tags:
yield elem
root.clear()
def is_numeric(s):
try:
float(s)
return True
except ValueError:
return False
def keep_numeric(original):
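# Convert numeric strings in a row dict to int/float so the CSVs store real numbers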
for i,v in original.items():
if is_numeric(v):
if float(v).is_integer():
original[i]=int(float(v))
else:
original[i]=float(v)
return original
def process_map(file_in):
"""Iteratively process each XML element and write to csv(s)"""
with codecs.open(NODES_PATH, 'w') as nodes_file, codecs.open(NODE_TAGS_PATH, 'w') as nodes_tags_file, \
codecs.open(WAYS_PATH, 'w') as ways_file,codecs.open(WAY_NODES_PATH, 'w') as way_nodes_file, \
codecs.open(WAY_TAGS_PATH, 'w') as way_tags_file:
nodes_writer = csv.DictWriter(nodes_file, NODE_FIELDS)
node_tags_writer = csv.DictWriter(nodes_tags_file, NODE_TAGS_FIELDS)
ways_writer = csv.DictWriter(ways_file, WAY_FIELDS)
way_nodes_writer = csv.DictWriter(way_nodes_file, WAY_NODES_FIELDS)
way_tags_writer = csv.DictWriter(way_tags_file, WAY_TAGS_FIELDS)
nodes_writer.writeheader()
node_tags_writer.writeheader()
ways_writer.writeheader()
way_nodes_writer.writeheader()
way_tags_writer.writeheader()
for element in get_element(file_in, tags=('node', 'way')):
el = shape_element(element)
if el:
if element.tag == 'node':
if element.getchildren() == []:
nodes_writer.writerow(keep_numeric(el['node']))
else:
nodes_writer.writerow(keep_numeric(el['node']))
node_tags_writer.writerows([keep_numeric(i) for i in el['node_tags']])
elif element.tag == 'way':
if element.getchildren() == []:
ways_writer.writerow(keep_numeric(el['way']))
else:
ways_writer.writerow(keep_numeric(el['way']))
way_nodes_writer.writerows([keep_numeric(i) for i in el['way_nodes']])
way_tags_writer.writerows([keep_numeric(i) for i in el['way_tags']])
if __name__ == '__main__':
process_map(OSM_PATH)
import sqlite3
import pandas as pd
db = sqlite3.connect('sanfrancisco.db')
c=db.cursor()
query="SELECT tags.value, COUNT(*) as count\
FROM (SELECT * FROM nodes_tags\
UNION ALL\
SELECT * FROM ways_tags) tags\
WHERE tags.key='street'\
GROUP BY tags.value\
ORDER BY count DESC;"
c.execute(query)
rows=pd.DataFrame(c.fetchall(),columns=['Street','count'])
db.close()
rows.head(20)
rows.tail(20)
db = sqlite3.connect('sanfrancisco.db')
c=db.cursor()
query="SELECT tags.value, COUNT(*) as count\
FROM (SELECT * FROM nodes_tags\
UNION ALL\
SELECT * FROM ways_tags) tags\
WHERE tags.key='postcode'\
GROUP BY tags.value\
ORDER BY count DESC;"
c.execute(query)
rows=pd.DataFrame(c.fetchall(),columns=['Postcode','count'])
db.close()
rows.head(10)
rows.tail(10)
db = sqlite3.connect('sanfrancisco.db')
c=db.cursor()
query="SELECT tags.value, COUNT(*) as count\
FROM (SELECT * FROM nodes_tags\
UNION ALL\
SELECT * FROM ways_tags) tags\
WHERE tags.key='city'\
GROUP BY tags.value\
ORDER BY count DESC;"
c.execute(query)
rows=pd.DataFrame(c.fetchall(),columns=['City','count'])
db.close()
rows
db = sqlite3.connect('sanfrancisco.db')
c=db.cursor()
query="SELECT nodes_tags.value,COUNT(*) as num\
FROM nodes_tags, (SELECT DISTINCT(id) FROM nodes_tags WHERE value='cafe') AS i\
WHERE nodes_tags.id=i.id AND nodes_tags.key='name'\
GROUP BY nodes_tags.value\
ORDER BY num desc\
LIMIT 1;"
c.execute(query)
result=c.fetchall()
db.close()
result
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Throwing axe
Step3: Let's make a System
Step5: As a simple starting place, I ignore drag, so vx and omega are constant, and ay is just -g.
Step6: As always, let's test the slope function with the initial conditions.
Step7: And then run the simulation.
Step8: Visualizing the results
Step9: We can plot the velocities the same way.
Step10: Another way to visualize the results is to plot y versus x. The result is the trajectory through the plane of motion.
Step11: Animation
Step12: During the animation, the parts of the axe seem to slide around relative to each other. I think that's because the lines and circles get rounded off to the nearest pixel.
|
<ASSISTANT_TASK:>
Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
radian = UNITS.radian
def make_system():
"""Makes a System object for the given conditions.

returns: System with init, ...
"""
P = Vector(0, 2) * m
V = Vector(8, 4) * m/s
theta = 2 * radian
omega = -7 * radian/s
init = State(P=P, V=V, theta=theta, omega=omega)
t_end = 1.0 * s
return System(init=init, t_end=t_end,
g = 9.8 * m/s**2,
mass = 1.5 * kg,
length = 0.7 * m)
system = make_system()
system.init
def slope_func(state, t, system):
"""Computes derivatives of the state variables.

state: State (P, V, theta, omega)
t: time
system: System object with g, mass, length

returns: sequence (V, A, omega, alpha)
"""
P, V, theta, omega = state
A = Vector(0, -system.g)
alpha = 0 * radian / s**2
return V, A, omega, alpha
slope_func(system.init, 0, system)
results, details = run_ode_solver(system, slope_func)
details
results.tail()
def plot_position(P):
x = P.extract('x')
y = P.extract('y')
plot(x, label='x')
plot(y, label='y')
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results.P)
def plot_velocity(V):
vx = V.extract('x')
vy = V.extract('y')
plot(vx, label='vx')
plot(vy, label='vy')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
plot_velocity(results.V)
plot(results.theta, label='theta', color='C2')
decorate(xlabel='Time (s)',
ylabel='Angle (radian)')
plot(results.omega, label='omega', color='C2')
decorate(xlabel='Time (s)',
ylabel='Angular velocity (rad/s)')
def plot_trajectory(P, **options):
x = P.extract('x')
y = P.extract('y')
plot(x, y, **options)
decorate(xlabel='x position (m)',
ylabel='y position (m)')
plot_trajectory(results.P, label='trajectory')
def make_frame(theta):
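# rhat is a unit vector along the axe handle at angle theta; that is a unit vector perpendicular to it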
rhat = Vector(pol2cart(theta, 1))
that = rhat.perp()
return rhat, that
P, V, theta, omega = results.first_row()
rhat, that = make_frame(theta)
rhat
that
np.dot(rhat, that)
O = Vector(0, 0)
plot_segment(O, rhat)
plot_segment(O, that)
plt.axis('equal')
xs = results.P.extract('x')
ys = results.P.extract('y')
l1 = 0.6 * m
l2 = 0.1 * m
def draw_func(state, t):
plt.axis('equal')
set_xlim([0,8])
set_ylim([0,6])
P, V, theta, omega = state
rhat, that = make_frame(theta)
# plot the handle
A = P - l1 * rhat
B = P + l2 * rhat
plot_segment(A, B, color='red')
# plot the axe head
C = B + l2 * that
D = B - l2 * that
plot_segment(C, D, color='black', linewidth=10)
# plot the COG
x, y = P
plot(x, y, 'bo')
decorate(xlabel='x position (m)',
ylabel='y position (m)')
state = results.first_row()
draw_func(state, 0)
animate(results, draw_func)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We start by defining the source and target landmarks. Notice that, in this first example, source = target!!!
Step2: The warp can be effectively computed, although the rendering will not appear to be correct...
Step3: The next step is to define the set of points at which the derivative of the previous TPS warp must be evaluated. In this case, we use the function meshgrid to generate points inside the convex hull defined by the source landmarks.
Step4: We evaluate the derivative, reshape the output, and visualize the result.
Step5: If everything goes as expected, the upper corner of the images defining the derivative of the warp wrt the x and y coordinates of the first of the source landmarks should both contain values close to 1.
Step6: The sum of all the derivatives wrt the x coordinates should produce an all 1 image
Step7: and so should the sum of all derivatives wrt the y coordinates.
Step8: Finally, the derivatives with respect to the x and y coordinates should be in this case exactly the same!!!
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import scipy.io as sio
import matplotlib.pyplot as plt
from menpo.shape import PointCloud
import menpo.io as mio
from menpofit.transform import DifferentiableThinPlateSplines
src_landmarks = PointCloud(np.array([[-1, -1],
[-1, 1],
[ 1, -1],
[ 1, 1]]))
tgt_landmarks = PointCloud(np.array([[-1, -1],
[-1, 1],
[ 1, -1],
[ 1, 1]]))
tps = DifferentiableThinPlateSplines(src_landmarks, tgt_landmarks)
np.allclose(tps.apply(src_landmarks).points, tgt_landmarks.points)
x = np.arange(-1, 1, 0.01)
y = np.arange(-1, 1, 0.01)
xx, yy = np.meshgrid(x, y)
points = np.array([xx.flatten('F'), yy.flatten('F')]).T
%matplotlib inline
dW_dxy = tps.d_dl(points)
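# d_dl evaluates the derivative of the warp at each point with respect to each
# of the 4 source landmarks, so the flattened result reshapes to (n_points, 4, 2)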
reshaped = dW_dxy.reshape(xx.shape + (4,2))
#dW_dx
plt.subplot(241)
plt.imshow(reshaped[:,:,0,0])
plt.subplot(242)
plt.imshow(reshaped[:,:,1,0])
plt.subplot(243)
plt.imshow(reshaped[:,:,2,0])
plt.subplot(244)
plt.imshow(reshaped[:,:,3,0])
#dW_dy
plt.subplot(245)
plt.imshow(reshaped[:,:,0,1])
plt.subplot(246)
plt.imshow(reshaped[:,:,1,1])
plt.subplot(247)
plt.imshow(reshaped[:,:,2,1])
plt.subplot(248)
plt.imshow(reshaped[:,:,3,1])
print(reshaped[1:5,1:5,0,0])
print(reshaped[1:5,1:5,0,1])
summed_x = np.sum(reshaped[:,:,:,0], axis=-1)
np.allclose(np.ones(xx.shape), summed_x)
plt.imshow(summed_x)
summed_y = np.sum(reshaped[:,:,:,1], axis=-1)
np.allclose(np.ones(xx.shape), summed_y)
plt.imshow(summed_y)
np.allclose(reshaped[:,:,:,0], reshaped[:,:,:,1])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example
Step2: Example
Step3: And here's a script for CSVs (with error handling etc.)
|
<ASSISTANT_TASK:>
Python Code:
ckey = ''
csecret = ''
atoken = ''
asecret = ''
import tweepy
# create "keychain"
auth = tweepy.OAuthHandler(ckey, csecret)
auth.set_access_token(atoken, asecret)
# create the API "object"
api = tweepy.API(auth, wait_on_rate_limit = True, wait_on_rate_limit_notify = True)
# testing connection by printing own timeline
public_tweets = api.home_timeline(count = 5)
for tweet in public_tweets:
print(tweet.text)
# print Axel's last 5 tweets
axels_timeline = api.user_timeline(id = 'snurb_dot_info', count = 5)
for x in axels_timeline:
print(x.text)
import tweepy
#create second 'app'
ckey2 = ''
csecret2 = ''
atoken2 = ''
asecret2 = ''
# create "keychain"
auth2 = tweepy.OAuthHandler(ckey2, csecret2)
auth2.set_access_token(atoken2, asecret2)
# create the API "object"
api2 = tweepy.API(auth2, wait_on_rate_limit = True, wait_on_rate_limit_notify = True)
class MyStreamListener(tweepy.StreamListener):
# define how this listener reacts to incoming statuses and errors
def on_status(self, status):
print('@' + status.user.screen_name + ': ' + status.text)
def on_error(self, status_code):
print('error code = %s' % status_code)
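# returning False from on_error disconnects the stream (e.g. after a 420 rate-limit error)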
return False
myListener = MyStreamListener()
# create a listener of that kind
myStream = tweepy.Stream(auth = api2.auth, listener=myListener)
# use the listener for creating a stream
try:
myStream.filter(track=['auspol'])
# access the stream
except KeyboardInterrupt:
myStream.disconnect()
print('\nGoodbye!')
import time
user = 'flxvctr' # the screenname of the user you want the friends of
def get_friend_ids(username):
uids = tweepy.Cursor(api.friends_ids, screen_name=username).items()
idlist = []
for uid in uids:
idlist.append(uid)
return idlist
def get_screen_name_and_created_at(uids):
friend_count = len(uids)
i = 0
results = []
while i < friend_count:
j = i + 100
users = api.lookup_users(user_ids=uids[i:j])
for user in users:
results.append([uids[i],user.screen_name,int(time.mktime(user.created_at.timetuple()))])
i += 1
return results
friendlist = get_screen_name_and_created_at(get_friend_ids(user))
print(len(friendlist),friendlist)
# Usage:
# insert consumer and app keys below
# run (eg in Terminal): python3 scriptname.py inputfile.csv outputfile.csv
# inputfile is a list of userids
import time
import csv
import io
import tweepy
from os import stat
from sys import argv
#ckey = ''
#csecret = ''
#atoken = ''
#asecret = ''
#uncomment if necessary
auth = tweepy.OAuthHandler(ckey, csecret)
auth.set_access_token(atoken, asecret)
api = tweepy.API(auth, wait_on_rate_limit = True, wait_on_rate_limit_notify = True)
# userlistpath = argv[1]
# friendlistpath = argv[2]
# uncomment two lines above for script use
# change the following two lines
userlistpath = ''
friendlistpath = ''
def get_friend_ids(userid):
userid = int(userid)
uids = tweepy.Cursor(api.friends_ids, user_id = userid).items()
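# Cursor transparently pages through friends/ids results (up to 5000 ids per page)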
idlist = []
idcount = 0
try:
for uid in uids:
idlist.append(uid)
idcount += 1
except tweepy.TweepError as e:
idlist.append(-int(e.reason[-3:]))
print(e.reason + ' for %d' %(userid))
print('%d friend ids collected for %d.' %(idcount,userid))
return idlist
def get_screen_name_and_created_at(uids):
friend_count = len(uids)
i = 0
results = []
while i < friend_count:
if uids[0] < 0:
results.append([uids[i]]*3)
break
else:
j = i + 100
if j > friend_count:
j = friend_count
try:
users = api.lookup_users(user_ids=uids[i:j])
# lookup_users may omit suspended/deleted accounts, so key each row by user.id
for user in users:
results.append([user.id, user.screen_name, int(time.mktime(user.created_at.timetuple()))])
i = j  # advance past the whole batch (outside the for loop)
except tweepy.TweepError as e:
code = int(e.reason[-3:])
for uid in uids[i:j]:
results.append([uid, -code, -code])
i += 1
print('%d friend details collected.' %(i))
return results
# opens csv, calls the funtions above with the csv's contents, writes results in other csv
def write_edge_list(readpath, writepath):
with io.open(readpath,
encoding='utf_8', newline='') as userlist, io.open(writepath,
mode='a',encoding='utf_8', newline='') as friendlist:
users = csv.reader(userlist)
friends = csv.writer(friendlist, delimiter = '\t', lineterminator = '\n')
if stat(writepath).st_size == 0:
friends.writerow(['user','friend_id','friend_name','friend_created'])
n = 0
for user in users:
print('Friends of %d users gathered so far. Goin on.' %(n))
friendids = get_friend_ids(user[0])
frienddetails = get_screen_name_and_created_at(friendids)
for details in frienddetails:
friends.writerow([user[0],details[0],details[1],details[2]])
n += 1
print('Friends of %d users gathered. Pew. That was exciting.' %(n))
write_edge_list(userlistpath,friendlistpath)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Flux Correction
Step7: 3. Key Properties --> Genealogy
Step8: 3.2. CMIP3 Parent
Step9: 3.3. CMIP5 Parent
Step10: 3.4. Previous Name
Step11: 4. Key Properties --> Software Properties
Step12: 4.2. Code Version
Step13: 4.3. Code Languages
Step14: 4.4. Components Structure
Step15: 4.5. Coupler
Step16: 5. Key Properties --> Coupling
Step17: 5.2. Atmosphere Double Flux
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Step19: 5.4. Atmosphere Relative Winds
Step20: 6. Key Properties --> Tuning Applied
Step21: 6.2. Global Mean Metrics Used
Step22: 6.3. Regional Metrics Used
Step23: 6.4. Trend Metrics Used
Step24: 6.5. Energy Balance
Step25: 6.6. Fresh Water Balance
Step26: 7. Key Properties --> Conservation --> Heat
Step27: 7.2. Atmos Ocean Interface
Step28: 7.3. Atmos Land Interface
Step29: 7.4. Atmos Sea-ice Interface
Step30: 7.5. Ocean Seaice Interface
Step31: 7.6. Land Ocean Interface
Step32: 8. Key Properties --> Conservation --> Fresh Water
Step33: 8.2. Atmos Ocean Interface
Step34: 8.3. Atmos Land Interface
Step35: 8.4. Atmos Sea-ice Interface
Step36: 8.5. Ocean Seaice Interface
Step37: 8.6. Runoff
Step38: 8.7. Iceberg Calving
Step39: 8.8. Endoreic Basins
Step40: 8.9. Snow Accumulation
Step41: 9. Key Properties --> Conservation --> Salt
Step42: 10. Key Properties --> Conservation --> Momentum
Step43: 11. Radiative Forcings
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Step45: 12.2. Additional Information
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Step47: 13.2. Additional Information
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Step49: 14.2. Additional Information
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Step51: 15.2. Additional Information
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Step53: 16.2. Additional Information
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Step55: 17.2. Equivalence Concentration
Step56: 17.3. Additional Information
Step57: 18. Radiative Forcings --> Aerosols --> SO4
Step58: 18.2. Additional Information
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Step60: 19.2. Additional Information
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Step62: 20.2. Additional Information
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Step64: 21.2. Additional Information
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Step66: 22.2. Aerosol Effect On Ice Clouds
Step67: 22.3. Additional Information
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Step69: 23.2. Aerosol Effect On Ice Clouds
Step70: 23.3. RFaci From Sulfate Only
Step71: 23.4. Additional Information
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Step73: 24.2. Additional Information
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Step77: 25.4. Additional Information
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Step81: 26.4. Additional Information
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Step83: 27.2. Additional Information
Step84: 28. Radiative Forcings --> Other --> Land Use
Step85: 28.2. Crop Change Only
Step86: 28.3. Additional Information
Step87: 29. Radiative Forcings --> Other --> Solar
Step88: 29.2. Additional Information
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-hm', 'toplevel')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating data
Step2: DataFrame entries aren't limited to integers.
Step3: We are using the pd.DataFrame constructor to generate these DataFrame objects.
Step4: A Series is a sequence of values.
Step5: Think of a Series as a single column in a DataFrame.
Step6: Now that you can see the relationship between the Series and the DataFrame, it's time to move on.
Step7: We can use the .shape attribute to see how large the resulting DataFrame is
Step8: It appears that our wine_reviews DataFrame has ~130,000 records with 14 different columns, giving us just under 2 million entries to work with.
Step9: The pandas read_csv() function is quite versatile, with over 30 optional parameters you can specify.
Step10: Another file format that you will see regularly is the Excel spreadsheet (.xls or .xlsx).
Step11: As you can see in this example, Excel files are often not formatted as well as CSV files are.
Step12: Now we are ready to write a SQL statement.
Step13: Isn't that just magical?
Step14: To write an Excel file you can use to_excel and the sheet_name again
Step15: And for SQL database entries, supply the name of the table and specify a connector
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
pd.DataFrame({'Yes': [50, 21], 'No': [131, 2]})
pd.DataFrame({'Bob': ['I liked it.', 'It was awful.'], 'Sue': ['Pretty good.', 'Bland.']})
pd.DataFrame({'Bob': ['I loved it.', 'I hated it.'],
'Sue': ['That was okay.', 'That was not okay.']},
index=['Product A', 'Product B'])
pd.Series([1, 2, 3, 4, 5])
pd.Series([30, 35, 40], index=['2015 Sales', '2016 Sales', '2017 Sales'], name='Product A')
wine_reviews = pd.read_csv('inputs/wine-reviews/winemag-data-130k-v2.csv')
wine_reviews[:1]
wine_reviews.shape
wine_reviews.head()
print(wine_reviews.head())
wine_reviews = pd.read_csv('inputs/wine-reviews/winemag-data-130k-v2.csv', index_col=0)
wine_reviews.head()
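# Illustrative sketch (not from the original notebook): two of the many
# optional read_csv parameters mentioned above. The column names are taken
# from this dataset; adjust them for your own file.
preview = pd.read_csv('inputs/wine-reviews/winemag-data-130k-v2.csv',
                      nrows=100,                      # read only the first 100 rows
                      usecols=['country', 'points'])  # load only these columns
preview.head()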
wic = pd.read_excel('inputs/publicassistance/xls_files_all/WICAgencies2013ytd.xls', sheet_name='Total Women')
wic.head()
import sqlite3
conn = sqlite3.connect('inputs/FPA_FOD_20170508.sqlite')
fires = pd.read_sql_query("SELECT * FROM fires", conn)
fires.head()
wine_reviews.head().to_csv('inputs/wine-reviews/wine_reviews.csv')
wic.to_excel('inputs/publicassistance/wic.xlsx', sheet_name='Total Women')
conn = sqlite3.connect('inputs/fires.sqlite')
fires.head(10).to_sql('fires', conn)
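# Sanity check (illustrative, not in the original notebook): read the rows we
# just wrote back out of the SQLite table.
pd.read_sql_query("SELECT * FROM fires LIMIT 5", conn)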
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The file c-grammar.g contains a context-free grammar for the language C.
Step2: Our goal is to convert this grammar into an HTML page.
Step3: The function grammar_2_string takes a list of grammar rules as its input and renders these rules as an HTML page.
Step4: The function rule_2_string takes a grammar rule $r$ as its input and transforms this rule into HTML table rows.
Step5: The function body_2_string takes a list of grammar items as its input and turns them into an HTML string.
Step6: The function item_2_string takes a grammar item as its input and turns the item into an HTML string.
Step7: The command below cleans the directory. If you are running Windows, you have to replace rm with del.
|
<ASSISTANT_TASK:>
Python Code:
!cat Grammar.g4
!type Grammar.g4
!cat c-grammar.g
!type c-grammar.g
!antlr4 -Dlanguage=Python3 Grammar.g4
from GrammarLexer import GrammarLexer
from GrammarParser import GrammarParser
import antlr4
def grammar_2_string(grammar):
result = ''
result += '<html>\n'
result += '<head>\n'
result += '<title>Grammar</title>\n'
result += '</head>\n'
result += '<body>\n'
result += '<table>\n'
for rule in grammar:
result += rule_2_string(rule)
result += '</table>\n'
result += '</body>\n'
result += '</html>\n'
return result
def rule_2_string(rule):
head, body = rule
result = ''
result += '<tr>\n'
    result += '<td style="text-align:right"><a name="' + head + '"><em>' + head + '</em></a></td>\n'
result += '<td><code>:</code></td>\n'
result += '<td>' + body_2_string(body[0]) + '</td>'
result += '</tr>\n'
for i in range(1, len(body)):
result += '<tr><td></td><td><code>|</code></td><td>'
result += body_2_string(body[i])
result += '</td></tr>\n'
    result += '<tr><td></td><td><code>;</code></td></tr>\n\n'
return result
def body_2_string(body):
result = ''
if len(body) > 0:
for item in body:
result += item_2_string(item) + ' '
else:
result += '<code>/* empty */</code>'
return result
def item_2_string(item):
    kind, content = item
    if kind == 'var':
        return '<a href="#' + content + '"><em>' + content + '</em></a>'
    else:
        return '<code>' + content + '</code>'
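# Optional hardening (a sketch, not part of the original code): terminal
# symbols may contain characters such as '<', '>' or '&' that should be
# escaped before being embedded in HTML.
import html

def item_2_string_escaped(item):
    kind, content = item
    if kind == 'var':
        return '<a href="#' + content + '"><em>' + content + '</em></a>'
    return '<code>' + html.escape(content) + '</code>'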
def main():
input_stream = antlr4.FileStream('c-grammar.g')
lexer = GrammarLexer(input_stream)
token_stream = antlr4.CommonTokenStream(lexer)
parser = GrammarParser(token_stream)
grammar = parser.start()
result = grammar_2_string(grammar.result)
file = open('c-grammar.html', 'w')
file.write(result)
main()
!open c-grammar.html
!explorer c-grammar.html
!rm *.py *.tokens *.interp
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Step3: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
Step4: Subsampling
Step5: Making batches
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
Step7: Building the graph
Step8: The embedding matrix has a size of the number of words by the number of neurons in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using one-hot encoded vectors for our inputs. When you do the matrix multiplication of the one-hot vector with the embedding matrix, you end up selecting only one row out of the entire matrix
Step9: For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Step10: Validation
Step11: Training
Step12: Restore the trained network if you need to
Step13: Visualizing the word vectors
|
<ASSISTANT_TASK:>
Python Code:
import time
import numpy as np
import tensorflow as tf
import utils
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
print(len(int_words))
## Your code here
from collections import Counter
counter = Counter(int_words)
train_words = []
for word in int_words:
f = counter[word] / len(int_words)
p = 1 - (np.sqrt((1e-3) / f))
if p < 0.5:
train_words.append(word)
print(len(train_words))
print(int_words[-10:], len(int_words))
print(train_words[-10:], len(train_words))
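# Alternative sketch (an assumption, following the probabilistic subsampling
# scheme from the word2vec paper): each occurrence of word w is kept with
# probability sqrt(t / f(w)) instead of using a hard 0.5 threshold.
import random
random.seed(123)
threshold = 1e-3
p_keep = {word: np.sqrt(threshold * len(int_words) / count)
          for word, count in counter.items()}
train_words_random = [w for w in int_words if random.random() < p_keep[w]]
print(len(train_words_random))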
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
# Your code here
    r = np.random.randint(1, window_size + 1)  # upper bound is exclusive, so +1 allows the full window
words_from_history = words[max(idx - r, 0) : idx]
words_from_future = words[idx + 1 : min(idx + r + 1, len(words))]
#print(words[idx], " ", r, " ", words_from_history, " ", words_from_future)
return words_from_history + words_from_future
get_target(train_words, 10445974-2)
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
            x.extend([batch_x]*len(batch_y))  # batch_y is a list, so pair the same batch_x with each element of batch_y
yield x, y
x, y = [], []
batch_x = 10
batch_y = [1,2,3]
x.extend([batch_x]*len(batch_y))
print(x)
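# Quick illustrative check (not in the original notebook): run get_batches on
# a tiny toy sequence to see the input/target pairs it yields.
toy_batches = get_batches(list(range(12)), batch_size=6, window_size=3)
bx, by = next(toy_batches)
print(bx[:10])
print(by[:10])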
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, shape=[None])
labels = tf.placeholder(tf.int32, shape=[None, None])
#sess = tf.Session()
#print(sess.run(inputs))
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform([n_vocab, n_embedding], minval=-1, maxval=1)) # create embedding weight matrix here
#embed = tf.nn.embedding_lookup(embedding, np.arange(n_vocab))# use tf.nn.embedding_lookup to get the hidden layer output
embed = tf.nn.embedding_lookup(embedding, inputs)# use tf.nn.embedding_lookup to get the hidden layer output
#print(len(inputs))
#params = tf.constant([[0,1,2],[3,4,5],[6,7,8]])
#ids = tf.constant([0,2,1])
#x = tf.nn.embedding_lookup(params, ids)
#sess = tf.Session()
#sess.run(x)
#np.arange(10)
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal([n_vocab, n_embedding], stddev=0.1))
softmax_b = tf.Variable(tf.zeros([n_vocab]))
# Calculate the loss using negative sampling
#print(softmax_w)
#print(softmax_b)
#print(labels)
#print(inputs)
#print(embed)
#print(n_sampled, n_vocab)
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
import random
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
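# For intuition (a NumPy sketch outside the TF graph): the `similarity` tensor
# above is just cosine similarity between the validation embeddings and every
# row of the embedding matrix, since all rows are normalized to unit length.
demo = np.random.rand(5, 4)
demo = demo / np.linalg.norm(demo, axis=1, keepdims=True)
demo_sim = demo[:2] @ demo.T  # rows 0-1 vs. all 5 unit vectors
print(demo_sim.shape)         # (2, 5); every entry lies in [-1, 1]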
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
epochs = 1
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
                    # for testing: break out after the first validation word
break
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Almost all the machine learning toolboxes take their input in this form
Step2: The only major difference between the above two data structures is the data type of target i.e. categorical/discrete vs. continuous values. The ClassificationDataset accepts only categorical values for its target, often as strings ('healthy', 'disease', 'cat', 'chair' etc), whereas the RegressionDataset allows continuous floating point numbers (e.g. age, temperature, salary, weight etc). Depending on the target data type, the analyses (esp. in machine learning) changes dramatically in terms of which predictive model is employed, how they are optimized and what performance metrics are considered etc.
Step3: These datasets have customised and convenient display methods, summarizing the important info. For example, just printing an instance shows its content summary (typically the count of samplets per target and a description, if any).
Step4: You can see the dataset has some description attached to it; however, we can also see that it is empty. This can be verified in a boolean context as shown below
Step7: Let's add samples to this dataset, which is when this dataset implementation becomes really handy. Before we do that, we will define some convenience routines just to illustrate a simple yet common use of this dataset.
Step8: So now we have IO routines to read the data for us. Let's define where the data will come from
Step9: This would obviously change for your applications, but this has sufficient properties to illustrate the point.
Step11: That's a lot of methods and attributes for using, organizing and retrieving datasets.
Step12: The dataset can be populated by adding all subjects belonging to one class (referred to by class_id here), done by adding one samplet at a time, using the .add_samplet() method. Let's go ahead and add some samplets based on the id lists we just created.
Step13: Nice. Isn't it?
Step14: Even better, right? No more coding of several commands to get the complete and concise sense of the dataset.
Step15: If you'd like to take a look at the data inside for a few subjects - shall we call it a glance?
Step16: We can control the number of items to glance, by passing a number to dataset.glance() method
Step17: Or you may be wondering what are the subject IDs in the dataset.. here they are
Step18: These datasets offer all convenient methods and attributes you need. Besides it is quite easy to extend them to fit your needs and improve your workflow.
Step19: Like a Python dict, it raises an error if the key is not in the dataset
Step20: A more graceful handling would be to use dataset.get to control what value to be returned in case the requested id is not found in the dataset.
Step21: <a id='iteration'></a>
Step22: Did you see that? It's so intuitive and natural! Such a clean traversal of dataset.
Step23: we can easily traverse the dataset to check the result
Step24: As the transform accepts an arbitrary callable, we could do many more sophisticated things, such as access the subset of features e.g. cortical thickness for a particular region of interest (say posterior cingulate gyrus).
Step25: Using this "mask" function, we can easily obtain features for an ROI
Step26: We can verify that the new dataset does indeed have only 3 features, for the same subjects/classes
Step27: Let's make a bar plot with the just computed numbers
Step28: Remember as the original source of data was random, this has no units, property or meaning!
Step29: That's it, obtaining the data for a given class is a simple call away.
Step30: Even with the description automatically updated to indicate its history. Let's see some data from controls
Step31: We can also query a random subset of samples for manual inspection or cross-validation purposes. For example
Step32: You can see which samplets were selected
Step33: You can verify that it is indeed random by issuing another call
Step34: Let's see how we can retrieve specific samples by their IDs (for which there are many use cases)
Step35: So as simple as that.
Step36: This method returns two sets of sample ids: a training set (with 50% of the samples from all classes in the dataset) and a test_set with the rest. Let's see what they have
Step37: We can also get a train/test split by specifying an exact number of subjects we would like from each class (e.g. when you would like to avoid class imbalance in the training set)
Step38: Let's see what the training set contains - we expect 3*3 =9 subjects
Step39: We can indeed verify that is the case, by creating a new smaller dataset from that list of ids and getting a summary
Step40: Another programmatic way to look into different classes is this
Step41: which returns all the classes that you could iterate over.
Step42: How about selecting a subset of features from all samples?
Step43: Great, isn't it? You can also see the two-time-point history (initial subset in classes, followed by a subset in features).
Step44: That's it - it is saved.
Step45: We can check to see they are indeed one and the same
Step46: <a id='arithmetic'></a>
Step47: How can you check if they are "functionally identical"? As in same keys, same data and classes for each key... Easy
Step48: Now let's try the arithmetic
Step49: Great. The add method recognized the identical set of keys and performed a horizontal concatenation, as can be seen from the doubled number of features in the combined dataset
Step50: We can also do some removal in similar fashion
Step51: The data structure even produces a warning to let you know the resulting output would be empty! We can verify that
Step52: <a id='portability'></a>
|
<ASSISTANT_TASK:>
Python Code:
import sys, os
import numpy as np
import matplotlib
%matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
n = 10 # number of samples
p = 3 # number of features
X = np.random.random([n, p]) # random data for illustration
y = [1]*5 + [2]*5 # random labels ...
np.set_printoptions(precision=2) # save some screen space
print('X : \n{}'.format(X))
print('y : \n{}'.format(y))
from pyradigm import ClassificationDataset as ClfDataset
from pyradigm import RegressionDataset as RegrDataset
dataset = ClfDataset()
dataset.description = 'ADNI1: cortical thickness features from Freesurfer v6.0, QCed.'
dataset
bool(dataset)
def read_thickness(path):
    """Dummy function to mimic a data reader."""
    # in your actual routine, this might be:
    # pysurfer.read_thickness(path).values()
return np.random.random(2)
def get_features(work_dir, subj_id):
    """Returns the whole brain cortical thickness for a given subject ID."""
# extension to identify the data file; this could be .curv, anything else you choose
ext_thickness = '.thickness'
thickness = dict()
for hemi in ['lh', 'rh']:
path_thickness = os.path.join(work_dir, subj_id, hemi + ext_thickness)
thickness[hemi] = read_thickness(path_thickness)
# concatenating them to build a whole brain feature set
thickness_wb = np.concatenate([thickness['lh'], thickness['rh']])
return thickness_wb
work_dir = '/project/ADNI/FreesurferThickness_v4p3'
class_set = ['Control', 'Alzheimer', 'MCI']
class_sizes = [7, 8, 6]
[mm for mm in dir(dataset) if not mm.startswith('_') ]
import random
random.seed(42)
def get_id_list(class_name, size=10):
    """Generates a random ID list."""
return ['{}{:04d}'.format(class_name[0],np.random.randint(50*size)) for _ in range(size)]
for class_index, class_id in enumerate(class_set):
print('Adding class {:>5}'.format(class_id))
target_list = get_id_list(class_id,class_sizes[class_index])
for subj_id in target_list:
print('\t reading subject {:>7}'.format(subj_id))
thickness_wb = get_features(work_dir, subj_id)
# adding the sample to the dataset
dataset.add_samplet(subj_id, thickness_wb, class_id)
dataset
dataset.num_samplets
dataset.num_features
dataset.target_set
dataset.target_sizes
dataset.target_sizes['Control']
dataset.glance()
dataset.glance(2)
dataset.samplet_ids
dataset['M0022']
dataset['dlfjdjf']
dataset.get('dkfjd', np.nan)
for samplet, features in dataset:
    print("{} : {:>10} : {}".format(samplet, dataset.targets[samplet], features))
mean_data = dataset.transform(np.mean)
mean_data.description = 'mean values per subject'
mean_data
for samplet, val in mean_data:
print("{} : {:>10} : {:.3f}".format(samplet, mean_data.targets[samplet], val))
# let's make a toy function to return the indices for the ROI
def get_ROI_indices(x): return x[:3]
pcg = dataset.transform(get_ROI_indices)
pcg
pcg.num_features
data, lbl, keys = pcg.data_and_targets()
n, bins, patches = plt.hist(data)
ctrl = dataset.get_class('Control')
ctrl
ctrl.glance(2)
random_subset = dataset.random_subset(perc_in_class=0.3)
random_subset
random_subset.samplet_ids
# supplying a new seed everytime to ensure randomization
from datetime import datetime
dataset.random_subset(perc_in_class=0.3).samplet_ids
data = dataset.get_subset(dataset.samplet_ids[1:20])
data
train_set, test_set = dataset.train_test_split_ids( train_perc = 0.5)
train_set, test_set
train_set, test_set = dataset.train_test_split_ids( count_per_class = 3)
train_set
training_dataset = dataset.get_subset(train_set)
training_dataset
target_set, target_sizes = training_dataset.summarize()
target_set, target_sizes
dataset
binary_dataset = dataset.get_class(['Control','Alzheimer'])
binary_dataset
binary_dataset.get_feature_subset(range(2))
from pathlib import Path
out_file = Path('.') / 'Freesurfer_thickness_v4p3.PyradigmDataset.pkl'
binary_dataset.save(out_file)
reloaded = ClfDataset(out_file) # another form of the constructor!
reloaded
binary_dataset == reloaded
dataset_two = ClfDataset(in_dataset=dataset) # yet another constructor: in its copy form!
dataset_two == dataset
combined = dataset + dataset_two
combined
smaller = combined - dataset
bool(smaller)
from sklearn import svm
clf = svm.SVC(gamma=0.001, C=100.)
data_matrix, target, sample_ids = binary_dataset.data_and_targets()
clf.fit(data_matrix, target)
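# Illustrative follow-up (an assumption, not part of the original notebook;
# requires a reasonably recent scikit-learn): evaluate the same classifier
# with cross-validation on the matrix pyradigm handed us.
from sklearn.model_selection import cross_val_score

cv_scores = cross_val_score(svm.SVC(gamma=0.001, C=100.), data_matrix, target, cv=3)
print(cv_scores)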
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Relevant Parameters
Step3: Note that gravitational redshift effects for RVs (rv_grav) are disabled by default. We could call add_compute and then set them to be true, or just temporarily override them by passing rv_grav to the run_compute call.
Step4: Now let's run another model but with much smaller stars (but with the same masses).
Step5: Now let's run another model, but with gravitational redshift effects disabled
Step6: Influence on Radial Velocities
Step7: Besides the obvious change in the Rossiter-McLaughlin effect (not due to gravitational redshift), we can see that making the radii smaller shifts the entire RV curve up (the spectra are redshifted as they have to climb out of a steeper potential at the surface of the stars).
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('rv', times=np.linspace(0,1,101), dataset='rv01')
b.set_value_all('ld_mode', 'manual')
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0.0, 0.0])
b.set_value_all('atm', 'blackbody')
print(b['value@requiv@primary@component'], b['value@requiv@secondary@component'])
b.run_compute(rv_method='flux-weighted', rv_grav=True, irrad_method='none', model='defaultradii_true')
b['requiv@primary'] = 0.4
b['requiv@secondary'] = 0.4
b.run_compute(rv_method='flux-weighted', rv_grav=True, irrad_method='none', model='smallradii_true')
b.run_compute(rv_method='flux-weighted', rv_grav=False, irrad_method='none', model='smallradii_false')
afig, mplfig = b.filter(model=['defaultradii_true', 'smallradii_true']).plot(legend=True, show=True)
afig, mplfig = b.filter(model=['smallradii_true', 'smallradii_false']).plot(legend=True, show=True)
print(b['rvs@rv01@primary@defaultradii_true'].get_value().min())
print(b['rvs@rv01@primary@smallradii_true'].get_value().min())
print(b['rvs@rv01@primary@smallradii_false'].get_value().min())
print(b['rvs@rv01@primary@defaultradii_true'].get_value().max())
print(b['rvs@rv01@primary@smallradii_true'].get_value().max())
print(b['rvs@rv01@primary@smallradii_false'].get_value().max())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import a square DEM that includes the watershed
Step2: Run the FlowAccumulator and the DepressionFinderAndRouter components to find depressions, to route the flow across them and to calculate flow direction and drainage area
Step3: Set the id of the outlet. The value indicated here is the node id of the entire watershed's outlet
Step4: Run the watershed utility and show the watershed mask
Step5: Run the flow__distance utility
Step6: Mask the flow__distance to the watershed mask. This operation has to be done because the flow__distance utility is applied to the entire grid that contains other streams not connected with our stream network and, for this reason, not belonging to our watershed.
Step7: Add the flow__distance field to the grid and show the spatial distribution of the distances from each node to the watershed's outlet
|
<ASSISTANT_TASK:>
Python Code:
from landlab.io import read_esri_ascii
from landlab.components import FlowAccumulator
from landlab.plot import imshow_grid
from matplotlib.pyplot import figure
%matplotlib inline
from landlab.utils import watershed
import numpy as np
from landlab.utils.flow__distance import calculate_flow__distance
(mg, z) = read_esri_ascii("nocella_resampled.txt", name="topographic__elevation")
fr = FlowAccumulator(
mg, flow_director="D8", depression_finder="DepressionFinderAndRouter"
)
fr.run_one_step()
outlet_id = 15324
ws_mask = watershed.get_watershed_mask(mg, outlet_id)
figure()
imshow_grid(mg, ws_mask, allow_colorbar=False)
flow__distance = calculate_flow__distance(mg, add_to_grid=True, clobber=True)
flow_distance = np.zeros(mg.number_of_nodes)
flow_distance[ws_mask] = flow__distance[ws_mask] - flow__distance[outlet_id]
mg.add_field("flow_distance", flow_distance, at="node", clobber=True)
figure()
imshow_grid(mg, mg.at_node["flow_distance"], colorbar_label="flow distance (m)")
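# Optional sketch (an assumption: landlab's ESRI ASCII writer is available in
# this version): export the flow-distance field for inspection in a GIS tool.
from landlab.io import write_esri_ascii

written = write_esri_ascii("flow_distance.asc", mg, names="flow_distance", clobber=True)
print(written)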
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: After you have "un-zipped" the data file, you have a file named "kc_house_data.csv". We will load that into a pandas data frame and take a look at the first few lines of data. [ We will call the data-frame df for now, which is common practice.]
Step2: The data looks very clean. We won't have to do any type or format conversions. Let's make a quick check for missing values. [ looks OK ]
Step3: Here are some basic stats for some of the data.
Step4: You can see the home prices vary from \$75K to \$7.7 million, with living space from 290 sqft to 13,540 sqft. Lots of variety!
Step5: You can see that "lot" size is not well correlated to price but the data for living space is reasonable. Visually the best feature to use looks like sqft_living as we expected.
Step6: Now take a closer look at the data with a joint distribution plot.
Step7: The increase of price with sqft_living space is pretty clear and the "Pearson r value" is .7 indicating a reasonable correlation. However, the data distributions show a big concentration of values in the lower left of the plot. That makes sense, most houses are between 1200 and 3000 sqft and a few hundred thousand dollars. We can eliminate the very expensive and very large houses and take another look at the data.
Step8: Something worth considering is that different neighborhoods can vary greatly in average house price. Some nice neighborhoods are very expensive and some other (also nice!) neighborhoods can be quite affordable. It might be good to look at average house price by zipcode since we have that in our dataset.
Step9: It looks like there are 70 different zip codes in King County. Let's see how many house sales there were in each.
Step10: How about the average house sale price in each zipcode ...
Step11: The two zipcodes that look the most interesting to me are 98103 and 98039. 98103 has the most house sale values, 602, with an average sale price of \$584,919. The most expensive zipcode 98039 has 50 sale values with an average sale price of \$2,160,606.
Step12: Using the "selectors" above we can look at plots of price vs sqft_living in those zipcodes.
Step13: The 98103 zipcode has a distribution that looks similar to the complete dataset. It's interesting that in the most expensive zipcode, 98039, the house sale prices seem to be highly correlated to the size of the house (house-size envy?)
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd # data handling
import numpy as np # numerical computing
import matplotlib.pyplot as plt # plotting core
import seaborn as sns # higher level plotting tools
%matplotlib inline
pd.set_option('display.float_format', lambda x: '%.2f' % x)
pd.set_option('display.max_columns', 21)
pd.set_option('display.max_rows', 70)
df = pd.read_csv("kc_house_data.csv") # create a dataframe with pandas "pd"
df.head() # display the first few lines of the dataframe "df"
df.isnull().values.any() # check for missing values
df[["price","bedrooms","bathrooms","sqft_living","sqft_lot","sqft_above","yr_built","sqft_living15","sqft_lot15"]].describe()
sns.pairplot(data=df, x_vars=['sqft_living','sqft_lot','sqft_above','sqft_living15','sqft_lot15'], y_vars=["price"])
df2 = df[["price", "sqft_living"]]
df2.head()
sns.jointplot('sqft_living','price', data=df2, size=10, alpha=.5, marker='+')
sns.jointplot('sqft_living','price', data=df2, xlim=(500,3500), ylim=(100000,1000000), size=10, alpha=.5, marker='+')
df["zipcode"].nunique()
df['zipcode'].value_counts()
df.groupby('zipcode')['price'].mean() # group by zipcode and compute the mean of prices in a zipcode
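# Illustrative extension (not in the original analysis): sort the per-zipcode
# averages to rank neighborhoods from most to least expensive.
df.groupby('zipcode')['price'].mean().sort_values(ascending=False).head()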
zip98103 = df['zipcode'] == 98103 # True if zip is 98103
zip98039 = df['zipcode'] == 98039
sns.jointplot('sqft_living','price', data=df2[zip98103], size=10, alpha=.5, marker='+')
sns.jointplot('sqft_living','price', data=df2[zip98039], size=10, alpha=.5, marker='+')
df_98039 = df2[zip98039]
df_98039.describe()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Compute residuals
Step2: Plot the light curves
Step3: Fit the Hyperparameters and plot the GP mean with the data
Step4: Create a Pandas dataframe and save the hyperparameters
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
%run __init__.py
from exotk.utils.misc import fold
from src.extcore import *
lpf = LPFTM()
pv0 = pd.read_hdf(RFILE_EXT, 'ckwn/fc').median().values
fluxes_m = lpf.compute_transit(pv0)
residuals = [fo-fm for fo,fm in zip(lpf.fluxes, fluxes_m)]
gps = [GPTime(time, res) for time,res in zip(lpf.times, residuals)]
hps = []
phases = list(map(lambda t: fold(t, P, TC, 0.5)-0.5, lpf.times))
fig,axs = subplots(4,3, figsize=(14,14),sharey=True, sharex=True)
for iax,ilc in enumerate(lpf.lcorder):
a = axs.flat[iax]
a.plot(phases[ilc], lpf.fluxes[ilc],'.', alpha=0.5)
a.plot(phases[ilc], fluxes_m[ilc],'k')
a.plot(phases[ilc], lpf.fluxes[ilc]-fluxes_m[ilc]+0.95,'.', alpha=0.5)
a.text(0.5, 0.95, lpf.passbands[ilc], ha='center', va='top', size=12, transform=a.transAxes)
setp(axs, ylim=(0.94,1.01), xlim=(-0.035,0.035))
fig.tight_layout()
axs.flat[-1].set_visible(False)
hps = []
for gp in tqdm(gps, desc='Optimising GP hyperparameters'):
gp.fit()
hps.append(gp.hp)
fig,axs = subplots(4,3, figsize=(14,10),sharey=True, sharex=True)
for iax,ilc in enumerate(lpf.lcorder):
axs.flat[iax].plot(phases[ilc], gps[ilc].flux, '.', alpha=0.5)
gps[ilc].compute(hps[ilc])
pr = gps[ilc].predict()
axs.flat[iax].plot(phases[ilc], pr, 'k')
setp(axs, ylim=(-0.015,.015), xlim=(-0.04,0.04))
fig.tight_layout()
axs.flat[-1].set_visible(False)
fig,axs = subplots(4,3, figsize=(14,10),sharey=True, sharex=True)
for iax,ilc in enumerate(lpf.lcorder):
axs.flat[iax].plot(phases[ilc], lpf.fluxes[ilc], '.', alpha=0.5)
gps[ilc].compute(hps[ilc])
pr = gps[ilc].predict()
axs.flat[iax].plot(phases[ilc], fluxes_m[ilc]+pr, 'k')
setp(axs, ylim=(0.955,1.015), xlim=(-0.04,0.04))
fig.tight_layout()
axs.flat[-1].set_visible(False)
with pd.HDFStore(DFILE_EXT) as f:
ntr = [k[3:] for k in f.keys() if 'lc/triaud' in k]
nma = [k[3:] for k in f.keys() if 'lc/mancini' in k]
df = pd.DataFrame(hps, columns=gp.names, index=lpf.passbands)
df['lc_name'] = ntr+nma
df
df.ix[:3].to_hdf(RFILE_EXT, 'gphp/triaud2013')
df.ix[3:].to_hdf(RFILE_EXT, 'gphp/mancini2014')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get the data!
Step2: Read human ATAC-seq table
Step3: Extract relevant data
Step4: Read mouse ATAC-seq table
Step5: Inspect the data
Step6: Let's do the same for the mouse data.
Step8: Select cell type specific peaks
Step9: Run maelstrom
Step10: Correlation of motifs with TF gene expression
Step11: Conservation of motifs between human and mouse
|
<ASSISTANT_TASK:>
Python Code:
import os
import urllib
import hashlib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import pearsonr,spearmanr
from sklearn.preprocessing import scale
from gimmemotifs.maelstrom import run_maelstrom
%matplotlib inline
# Ignore warnings (just for clarity of tutorial)
import warnings
warnings.filterwarnings('ignore')
from gimmemotifs.config import MotifConfig
cfg = MotifConfig()
motif_dir = cfg.get_motif_dir()
base_dir = motif_dir.replace("motif_databases", "")
ncpus = int(cfg.get_default_params()["ncpus"])
if ncpus <= 2:
config_file = os.path.join(base_dir, "gimmemotifs.cfg")
print "GimmeMotifs currently uses only 2 cores"
print "If possible, I recommend to change the ncpus paramater in {}".format(config_file)
fnames = {
"GSE74912_ATACseq_All_Counts.txt.gz":
("https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE74912&format=file&file=GSE74912_ATACseq_All_Counts.txt.gz",
"8bb507507f17809eb5ea658646263e77"),
"GSE60101_1256271tableS2.txt.gz":
("https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE60101&format=file&file=GSE60101_1256271tableS2.txt.gz",
"88ea3f015fd5e196f39db737770f291d"),
"GSE74246_RNAseq_All_Counts.txt.gz":
("https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE74246&format=file&file=GSE74246_RNAseq_All_Counts.txt.gz",
"7cc54350efbd6253192bc29354c4ee33"),
"GSE59992_counts_table.txt.gz":
("https://ndownloader.figshare.com/files/6802764",
"207437745cb2a1fa6f5986587df0e170"),
}
for fname, (url, chksum) in fnames.items():
md5 = ""
if os.path.exists(fname):
        md5 = hashlib.md5(open(fname, 'rb').read()).hexdigest()
    # Download file if it doesn't exist, or if it is corrupt
if md5 != chksum:
print "Downloading {}".format(fname)
urllib.urlretrieve (url, fname)
# Read the file
df = pd.read_table("GSE74912_ATACseq_All_Counts.txt.gz")
# Create regions of 200 bp and combine chromosome, start and end into one column (chrom:start-end)
middle = ((df["Start"] + df["End"]) / 2).astype(int)
df["Start"] = middle - 100
df["End"] = middle + 100
df["loc"] = df["Chr"] + ":" + df["Start"].astype(str) + "-" + df["End"].astype(str)
df = df.set_index("loc")
# Maps experiment codes used in GSE74912 to cell types
exp_map = {
1: "HSC",
2: "MPP",
3: "LMPP",
4: "CMP",
5: "GMP",
6: "MEP",
7: "Mono",
9: "CD4",
10: "CD8",
11: "Nkcell",
13: "Bcell",
14: "CLP",
15: "Ery",
}
hg_columns = []
df_avg = pd.DataFrame(index=df.index)
for code,exp in exp_map.items():
# Get all columns that match the experiment code
cols = df.columns[df.columns.str.contains(r'({}|-{}[AB])'.format(exp,code))]
cols = [c for c in cols if not c.startswith("SU")]
# Take the mean of the log2-transformed read count
df_avg[exp] = np.log2(df[cols] + 1).mean(1)
hg_columns.append(exp)
df_avg = df_avg[hg_columns]
# Read mouse data
df_mm = pd.read_table("GSE59992_counts_table.txt.gz")
df_mm["loc"] = df_mm["chrom"] + ":" + df_mm["start"].astype(str) + "-" + df_mm["end"].astype(str)
df_mm = df_mm.set_index("loc")
df_mm = df_mm[["Lsk", "CMP", "GMP", "MEP", "Monocytes", "EryA", "CD4", "CD8", "B-cells", "NK"]]
df_mm.columns = ["MPP", "CMP", "GMP", "MEP", "Monocytes", "Erythroblast", "CD4", "CD8", "Bcell", "Nkcell"]
df_mm = np.log2(df_mm + 1)
sns.boxplot(df_avg);
plt.ylabel("log2 read count");
# Normalize
df_avg = df_avg.apply(scale, 0)
sns.boxplot(df_avg);
plt.ylabel("scaled log2 read count");
sns.boxplot(df_mm);
plt.ylabel("log2 read count");
# Normalize
df_mm = df_mm.apply(scale, 0)
sns.boxplot(df_mm);
plt.ylabel("scaled log2 read count");
def select_peaks(df, n=10000):
    """Select around `n` peaks in total; for each cell type, the highest."""
    # How many peaks per cell type
    npeaks = n // len(df.columns)
selection = pd.DataFrame()
for x in df.columns:
# All other columns
others = [c for c in df.columns if not c == x]
# Difference between column of interest and the highest of the other
high = df[x] - df[others].max(1)
# Select the top `npeaks` peaks
idx = high.sort_values().tail(npeaks).index
selection = pd.concat((selection, df.loc[idx]))
return selection
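# Quick illustrative check on toy data (not the real ATAC-seq matrix): with
# n=6 and three columns we expect two top peaks per cell type, six rows total.
toy = pd.DataFrame(np.random.rand(20, 3), columns=["A", "B", "C"])
print select_peaks(toy, n=6).shape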
hg_selection = select_peaks(df_avg)
mm_selection = select_peaks(df_mm)
hg_selection.to_csv("hg19.most_variable.10k.txt", sep="\t")
mm_selection.to_csv("mm10.most_variable.10k.txt", sep="\t")
for name,df in [("human", hg_selection), ("mouse", mm_selection)]:
fig = plt.figure();
d = df.corr()
cm = sns.clustermap(d, cmap="RdBu_r", vmin=-1, vmax=1)
plt.setp(cm.ax_heatmap.yaxis.get_majorticklabels(), rotation=0);
plt.title("Correlation matrix of {} hematopoietic ATAC-seq data".format(name));
plt.savefig("correlation.matrix.{}.svg".format(name))
#run_maelstrom("hg19.most_variable.10k.txt", "hg19", "hg19.maelstrom.out")
#run_maelstrom("mm10.most_variable.10k.txt", "mm10", "mm10.maelstrom.out")
for species in ["hg19", "mm10"]:
df_result = pd.read_table("{}.most_variable.out/final.out.csv".format(species),
index_col=0, comment="#")
m2f = pd.read_table(os.path.join(motif_dir, "gimme.vertebrate.v3.1.motif2factors.txt"),
index_col=0)
# Truncate for lines with many factors
m2f["factors"] = m2f["factors"].str.replace(r"(\b(LOC|AL|AC|BX|CR)[0-9\.]+|\w+_XENTR),?", "").str[:40]
m2f = m2f.dropna()
f = list(df_result.max(1).sort_values()[-50:].index)
df_vis = df_result.loc[f].join(m2f).set_index("factors").dropna()
# Plot heatmap
cm = sns.clustermap(df_vis, cmap="viridis", figsize=(10,20))
# Rotate labels
plt.setp(cm.ax_heatmap.yaxis.get_majorticklabels(), rotation=0);
plt.title("Maelstrom output {}".format(species))
plt.savefig("heatmap_motif_activity.{}.svg".format(species))
def load_mouse_expression(logtransform=True):
df_result = pd.read_table("mm10.most_variable.out/final.out.csv", index_col=0, comment="#")
df_exp = pd.read_table("GSE60101_1256271tableS2.txt.gz",skiprows=1)
df_exp["NAME"] = df_exp["NAME"].str.upper()
df_exp = df_exp.set_index("NAME")
tr = {
"B":"Bcell",
"CD4":"CD4",
"CD8":"CD8",
"NK":"Nkcell",
"EryA":"Erythroblast",
"CMP":"CMP",
"GMP":"GMP",
"Granulocyte":"Granulocyte",
"MEP":"MEP",
"Mono":"Monocytes",
"MPP":"MPP",
}
# Only use cell types for which we have ATAC-seq
df_exp = df_exp[tr.keys()]
df_exp.columns = [tr[col] for col in df_exp.columns]
df_exp = df_exp[df_result.columns]
if logtransform:
df_exp = np.log2(df_exp + 1)
return df_exp
def load_human_expression(logtransform=True):
df_result = pd.read_table("hg19.most_variable.out/final.out.csv", index_col=0, comment="#")
df_rnaseq = pd.read_table("GSE74246_RNAseq_All_Counts.txt.gz", index_col=0)
df_exp = pd.DataFrame(index=df_rnaseq.index)
for code,exp in exp_map.items():
# Get all columns that match the experiment code
cols = df_rnaseq.columns[df_rnaseq.columns.str.lower().str.contains(r'({}|-{}[AB])'.format(exp.lower(),code))]
cols = [c for c in cols if not c.startswith("SU")]
# Take the mean of the log2-transformed read count
if logtransform:
df_exp[exp] = np.log2(df_rnaseq[cols] + 1).mean(1)
else:
df_exp[exp] = df_rnaseq[cols].mean(1)
df_exp = df_exp[df_result.columns]
return df_exp
def calc_correlation(df_exp, m2f, genome="hg19", min_exp=0):
fnames = [
"final.out.csv",
"activity.classic.count.out.txt",
"activity.lightning.score.out.txt",
"activity.mwu.score.out.txt",
"activity.rf.score.out.txt",
"activity.mara.count.out.txt",
"activity.lasso.score.out.txt"
]
df_map = m2f.copy()
s = df_map['factors'].str.split(',').apply(pd.Series, 1).stack()
s.index = s.index.droplevel(-1) # to line up with df's index
s.name = 'factor' # needs a name to join
del df_map['factors']
df_map = df_map.join(s)
e = df_map.join(df_exp, on="factor")
e = e[np.any(e.iloc[:,1:] >= min_exp,1)]
df_corr = e[["factor"]].copy()
df_map = df_corr.copy()
for fname in fnames:
df_test = pd.read_table("{}.most_variable.out/{}".format(genome, fname), index_col=0, comment="#")
l = len(df_test.columns)
m = df_map.join(df_test)
        df_combined = pd.concat((e.iloc[:,1:], m.iloc[:,1:]),1)
name = fname.replace("activity.", "").replace(".score.out.txt", "").replace(".count.out.txt", "")
name = name.replace(".out.csv", "")
df_corr[name] = df_combined.apply(lambda row: pearsonr(row[:l], row[l:])[0], 1)
df_corr = df_corr.dropna()
return df_corr
# Only take TFs that have an expression level at least equal to this value
# in at least one cell type
EXPRESSION_CUTOFF = 4
df_exp = {}
df_corr = {}
df_exp["mm10"] = load_mouse_expression()
df_exp["hg19"] = load_human_expression()
for species in df_exp.keys():
df_corr[species] = calc_correlation(df_exp[species], m2f, genome=species, min_exp=EXPRESSION_CUTOFF)
df_corr[species] = df_corr[species].drop("factor", 1)
df_corr[species] = df_corr[species].groupby(df_corr[species].index).max()
dfs = {}
# Combine all individual correlation dataframes
for species in df_exp.keys():
dfs[species] = pd.DataFrame((df_corr[species] >= 0.8).sum() )
dfs[species] = dfs[species].reset_index()
dfs[species].columns = ["method", "Number of motifs"]
dfs[species]["species"] = species
df_sum = pd.concat(dfs.values())
sns.factorplot(y="Number of motifs", x="method", hue="species", data=df_sum, kind="bar",
order=["final", "mwu", "classic", "mara", "lasso", "rf", "lightning"])
plt.savefig("expression_correlation.svg")
def get_overlap(hg_result, mm_result, cutoff_hg, cutoff_mm, norm=True):
df_overlap = pd.DataFrame(index=mm_result.columns)
for c1 in hg_result.columns:
a = []
for c2 in mm_result.columns:
x = set(hg_result[c1][hg_result[c1] >= cutoff_hg].index)
y = set(mm_result[c2][mm_result[c2] >= cutoff_mm].index)
if len(x | y) > 0:
if norm:
a.append(len(x & y) / float(len(x | y)))
else:
a.append(len(x & y))
else:
a.append(0)
df_overlap[c1] = a
return df_overlap
fnames = {
"Ensemble":"final.out.csv",
"Hypergeometric":"activity.classic.count.out.txt",
"CD Multiclass":"activity.lightning.score.out.txt",
"Mann-Whitney U":"activity.mwu.score.out.txt",
"Random Forest":"activity.rf.score.out.txt",
"Mara": "activity.mara.count.out.txt",
"Lasso": "activity.lasso.score.out.txt",
}
mm_columns = ["CD4", "CD8", "Nkcell", "Bcell", "Monocytes"]
hs_columns = ["CD4", "CD8", "Nkcell", "Bcell", "Mono"]
for name in fnames:
# Load the data
mm_result = pd.read_table("mm10.most_variable.out/" + fnames[name], index_col=0, comment="#")
hg_result = pd.read_table("hg19.most_variable.out/" + fnames[name], index_col=0, comment="#")
# Get the columns in the same order
mm_result = mm_result[mm_columns]
hg_result = hg_result[hs_columns]
# Get the 10% most "significant" motifs
cutoff_mm = np.percentile(mm_result, 90)
cutoff_hg = np.percentile(hg_result, 90)
for norm in [True, False]:
df_overlap = get_overlap(hg_result, mm_result, cutoff_hg, cutoff_mm, norm=norm)
fig = plt.figure()
if norm:
sns.heatmap(df_overlap, cmap="Reds", square=True, vmin=0, vmax=0.4)
else:
sns.heatmap(df_overlap, cmap="Reds", square=True, vmin=0, vmax=30)
plt.title("{} {}/{} (norm={})".format(name, cutoff_hg,cutoff_mm, norm))
plt.savefig("{}{}.svg".format(name, {True:".norm",False:""}[norm]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Second Order Linear System with 1 degree of freedom
Step2: Now let's consider the following initial conditions
Step3: Analytic solution
Step4: Numerical solution
Step5: We will plot the difference between the analytic solution and the numerical solution
Step6: We can see that the errors in the numerical approximation accumulate over time.
Step7: The System is bounded between $[-env(t), env(t)]$ and so $y(t) \to 0$ when $t \to \infty$.
Step8: The System rapidly converges to 0.
Step9: This System converges to 0 but it takes longer to do so.
Step10: This force function is added to the second member of the last equation, in the first order System of ODEs defined in (2).
Step11: We can see that the disturbance introduced in the System does not change the bounds of the natural function.
Step12: In this case the frequency of the disturbance matches the natural frequency ($w = w_n$), and thus the oscillations are amplified without bound (resonance).
|
<ASSISTANT_TASK:>
Python Code:
def rungekutta(fn, y0, ti=0, tf=10, h=0.01):
h = np.float(h)
x = np.arange(ti, tf, h)
Y = np.zeros((len(x), len(y0)))
Y[0] = y0
for i in range(0, len(x)-1):
yi = Y[i]
xi = x[i]
k1 = h * fn(xi, yi)
k2 = h * fn(xi + 0.5 * h, yi + 0.5 * k1)
k3 = h * fn(xi + 0.5 * h, yi + 0.5 * k2)
k4 = h * fn(xi + 1.0 * h, yi + 1.0 * k3)
yk = yi + (1./6.) * (k1 + 2*k2 + 2*k3 + k4)
Y[i+1] = yk
return x, Y
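# Quick sanity check (illustrative, not in the original notebook): integrate
# y' = y with y(0) = 1 and compare the RK4 result against the exact solution
# e**t at the final step.
xs, Ys = rungekutta(lambda t, y: np.array([y[0]]), np.array([1.0]), tf=1.0)
print(Ys[-1, 0], np.exp(xs[-1]))  # the two values should agree to several decimal places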
def natural(wn, qsi):
return lambda x,y: np.array([
y[1],
-2 * wn * qsi * y[1] - np.power(wn, 2) * y[0],
])
y0 = np.array([np.sqrt(2), np.sqrt(2)])
wn = 2 * np.pi
def undampedAnalyticSolution(x, Y, wn):
return Y[0] * np.cos(wn * x) + (Y[1] / wn) * np.sin(wn * x)
x = np.arange(0,5,0.01)
plt.plot(x, undampedAnalyticSolution(x, y0, wn))
plt.title('Undamped System - Analytic solution')
plt.xlabel('t')
plt.ylabel('y(t)')
plt.grid()
qsi=0
x, Y = rungekutta(natural(wn, qsi), y0, tf=5)
plt.plot(x, Y[:,0])
plt.title('Undamped System - Numerical solution')
plt.xlabel('t')
plt.ylabel('y(t)')
plt.grid()
xa, Ya = rungekutta(natural(wn, qsi), y0, tf=5)
xb = np.arange(0,5,0.01)
Yb = undampedAnalyticSolution(xb, y0, wn)
plt.plot(xb, Ya[:,0]-Yb)
plt.title('Undamped System (analytic - numerical solutions)')
plt.xlabel('t')
plt.ylabel('residual')
plt.grid()
qsi=0.05
A = np.sqrt(np.power((y0[1]+qsi*wn*y0[0]) / (wn * np.sqrt(1-np.power(qsi,2))), 2) + np.power(y0[0],2))
envelope = lambda x : A * np.exp(-qsi*wn*x)
x, Y = rungekutta(natural(wn, qsi), y0, tf=5)
plt.plot(x, Y[:,0])
plt.plot(x, [[-envelope(x), envelope(x)] for x in x], color="gray", alpha=0.5)
plt.title('Underdamped System')
plt.xlabel('t')
plt.ylabel('y(t)')
plt.grid()
x, Y = rungekutta(natural(wn, qsi=1), y0, tf=5, h=.01)
plt.plot(x, Y[:,0])
plt.title('Critically damped System')
plt.xlabel('t')
plt.ylabel('y(t)')
plt.grid()
x, Y = rungekutta(natural(wn, qsi=2), y0, tf=5, h=.01)
plt.plot(x, Y[:,0])
plt.title('Overdamped System')
plt.xlabel('t')
plt.ylabel('y(t)')
plt.grid()
def forced(wn, qsi, f=30, force=lambda x: 1):
n = natural(wn, qsi)
return lambda x,y: np.array([
n(x,y)[0],
n(x,y)[1] + f * force(x),
])
x, Y = (rungekutta(forced(wn, qsi=0, force=lambda x : np.cos(x*5*np.pi/6.)), y0))
plt.plot(x, Y[:,0], label='forced response')
x, Y = (rungekutta(natural(wn, qsi=0), y0))
plt.plot(x, Y[:,0], color="grey", alpha=0.5, label='natural response')
plt.title('System response when F(t) = 30 * cos(5pi/6 t)')
plt.xlabel('t')
plt.ylabel('y(t)')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.grid()
x, Y = (rungekutta(forced(wn, qsi=0, force=lambda x :np.cos(x*wn)), y0))
plt.plot(x, Y[:,0], label='forced response')
x, Y = (rungekutta(natural(wn, qsi=0), y0))
plt.plot(x, Y[:,0], color="grey", alpha=0.5, label='natural response')
plt.title('System response when F(t) = 30 * cos(wn t)')
plt.xlabel('t')
plt.ylabel('y(t)')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.grid()
x, Y = rungekutta(forced(wn, qsi=0, force=lambda x :np.cos(x*0.90*wn)), y0, tf=20)
plt.plot(x, Y[:,0], label='forced response')
x, Y = rungekutta(natural(wn, qsi=0), y0, tf=20)
plt.plot(x, Y[:,0], color="grey", alpha=0.5, label='natural response')
plt.title('System response when F(t) = 30 * cos(0.9wn t)')
plt.xlabel('t')
plt.ylabel('y(t)')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.grid()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's have a quick peek at the data; it's always a good idea to do this to know how best to load it
Step2: OK, the timestamps look like milliseconds since the epoch.
Step3: Now let's get some quick summary statistics on the data
Step4: This already gives us more detail than Google does (cf. screenshot above), as we have more precise min/max/average values, as well as extra statistics.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
fname = "Untitled Experiment Trial 1.csv"
!head "$fname"
tc = lambda x: int(x)-1463893627134
df = pd.read_csv(fname, index_col=0, converters={0:tc}, names=["time", "lux"], header=0)
df.head()
df.describe()
df.plot(logy=True);
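# Optional extra (illustrative; assumes the df loaded above): a centered
# rolling mean smooths out sensor noise before plotting.
df['lux'].rolling(window=20, center=True).mean().plot(logy=True);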
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's create two stars as if their point spread functions (PSFs) are one-dimensional Gaussians with a full width at half maximum (FWHM) of 1.2 pixels, which corresponds to sigma ~0.5 pixel. Star one is twice as bright as star two, and the two stars are one pixel apart. Note I haven't pixelized the flux in the left figure below.
Step2: Let's first create an eclipse signal that makes star one dimmer by 10%.
Step3: Let's look at the actual dimming of the two stars, assuming we use an aperture of radius one pixel around each star.
Step4: Both stars dimmed; however, star 1 dimmed by about 7%, while star 2 dimmed by slightly less than 6%.
Step5: Let's first compare the manually selected signals with the catalog-selected signals.
Step6: There are four groups of clusters above
Step7: We drop all the irrelevant columns and all the information from the catalog to create our initial feature set.
Step8: This is a slightly imbalanced sample, with the false class about twice as large as the true class.
Step9: Note that we added a new AUC score which computes the area under the precision-recall curve.
Step10: Let's make some new features
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib
from matplotlib import pyplot as plt
import numpy as np
import scipy as sp
%matplotlib inline
from scipy.stats import norm
FWHM=1.2
x=np.linspace(-3,3,50)
rv = norm(scale=FWHM/2.35)
starone=rv.pdf(x)*1000.
rv = norm(scale=FWHM/2.35)
startwo=(rv.pdf(x-1.0))*500.
fig=plt.figure()
ax=fig.add_subplot(121)
ax.plot(x,starone,label="star1")
ax.plot(x,startwo,label="star2")
ax.plot(x,starone+startwo,label="Total")
ax.set_xlabel("Pixel")
ax.set_ylabel("Flux")
ax.legend()
x_pixel=np.arange(7)-3
def pixelize(x,flux,x_pixel):
flux_pixel=np.zeros(len(x_pixel))
for i in xrange(len(x_pixel)):
index=(x>=x_pixel[i]-0.5)*(x<x_pixel[i]+0.5)
flux_pixel[i]=np.sum(flux[index])
return flux_pixel
starone_pixel=pixelize(x,starone,x_pixel)
startwo_pixel=pixelize(x,startwo,x_pixel)
ax2=fig.add_subplot(122)
ax2.plot(x_pixel,starone_pixel,label="star1")
ax2.plot(x_pixel,startwo_pixel,label="star2")
total=starone_pixel+startwo_pixel
ax2.plot(x_pixel,total,label="Total")
ax2.set_xlabel("Pixel")
#ax2.set_ylabel("Flux")
#ax2.legend()
plt.show()
fig=plt.figure()
ax=fig.add_subplot(121)
ax.plot(x,starone*0.9,label="star1")
ax.plot(x,startwo,label="star2")
ax.plot(x,starone*0.9+startwo,label="Total")
ax.set_xlabel("Pixel")
ax.set_ylabel("Flux")
ax.legend()
starone_pixel_ec=pixelize(x,starone*0.9,x_pixel)
startwo_pixel_ec=pixelize(x,startwo,x_pixel)
ax2=fig.add_subplot(122)
ax2.plot(x_pixel,starone_pixel_ec,label="star1")
ax2.plot(x_pixel,startwo_pixel_ec,label="star2")
total_ec=starone_pixel_ec+startwo_pixel_ec
ax2.plot(x_pixel,total_ec,label="Total")
ax2.set_xlabel("Pixel")
#ax2.set_ylabel("Flux")
plt.show()
plt.scatter(np.arange(2),[np.sum(total[2:5]),np.sum(total_ec[2:5])],marker='o',s=50,label='star1_obs')
plt.scatter(np.arange(2),[np.sum(total[3:6]),np.sum(total_ec[3:6])],marker='d',s=50,label='star2_obs')
plt.legend()
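# Illustrative check of the dimming fractions quoted in the text, using
# the pixelized totals computed above:
print 1 - np.sum(total_ec[2:5]) / np.sum(total[2:5]) # star 1 aperture, ~7%
print 1 - np.sum(total_ec[3:6]) / np.sum(total[3:6]) # star 2 aperture, ~6%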
import pandas as pd
import sklearn
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import train_test_split
from sklearn.utils import shuffle
from sklearn import metrics
from sklearn.metrics import roc_curve
from sklearn.metrics import classification_report
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.cross_validation import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.grid_search import GridSearchCV
def make_ROC_curve(testY, predY, name):
fig2 = plt.figure()
ax= fig2.add_subplot(1,1,1)
fpr, tpr, _ = roc_curve(testY, predY)
ax.plot(fpr, tpr, label = name)
ax.set_title(('ROC Curve for %s') % name)
ax.set_ylabel('True Positive Rate')
ax.set_xlabel('False Positive Rate')
def collect_lc_feature(idlist):
LCfeature=np.zeros([len(idlist),481])
count=0
for i in idlist:
#print i
infile="LTFsmall/"+str(i)+".ltf"
lc=np.loadtxt(infile)[:,1]
LCfeature[count,0]=i
LCfeature[count,1:]=lc
count+=1
return LCfeature
def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(2)
plt.xticks(tick_marks, ['false positives', 'transits'], rotation=45)
plt.yticks(tick_marks, ['false positives', 'transits'])
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
def fit(model,name,data,cv=True):
trainX,trainY,testX,testY,X,Y=data
model.fit(trainX, trainY)
predY = model.predict(testX)
f1score = metrics.f1_score(testY, predY)
cm = metrics.confusion_matrix(testY, predY)
plot_confusion_matrix(cm)
predY=model.predict_proba(testX)[:,1]
rocscore = metrics.roc_auc_score(testY, predY)
precision, recall, thresholds = metrics.precision_recall_curve(testY, predY)
# Area under the precision-recall curve (recall on the x-axis).
aucscore = metrics.auc(recall, precision, reorder=True)
print "#####################################"
print "Result using",model
print "f1 score from train test split %f" % f1score
print "roc score from train test split %f" % rocscore
print "auc score from train test split %f" % aucscore
if cv:
f1cvscore= cross_val_score(model, X, Y, cv = 5, scoring = 'f1')
cvscore= cross_val_score(model, X, Y, cv = 5, scoring = 'roc_auc')
print "roc_auc score from CV5 %f +- %f" % (np.mean(cvscore),np.std(cvscore))
print "f1 score from CV5 %f +- %f" % (np.mean(f1cvscore),np.std(f1cvscore))
print cm
make_ROC_curve(testY,predY,name)
return
df=pd.read_csv("TESSfield_19h_44d_combinedfeatures.csv")
#print df.shape
plt.plot(df['ManuleY']+np.random.randn(df.shape[0])*0.1,df['CatalogY']+np.random.randn(df.shape[0])*0.1,'.')
plt.xlim([-0.4,1.3])
plt.ylim([-0.4,1.3])
plt.xlabel('ManuleY')
plt.ylabel('CatalogY')
Manule_selected=df.loc[df['ManuleY']==1,:]
Manule_selected.shape
X=Manule_selected.drop(['Ids','CatalogY','ManuleY','CombinedY','Catalog_Period','Depth','Catalog_Epoch','SNR','Catmag','CatX','CatY'],axis=1)
Y=Manule_selected['CatalogY']
trainX, testX, trainY, testY= train_test_split(X, Y,test_size = 0.2)
data=[trainX,trainY,testX,testY,X,Y]
print X.shape, Y[Y==1].shape
model=RandomForestClassifier(n_estimators=3000,n_jobs=-1)
name="RFC"
fit(model,name,data)
model=GradientBoostingClassifier(n_estimators=3000)
name="GBC"
fit(model,name,data)
from xgboost import XGBClassifier
model = XGBClassifier(n_estimators=3000)
model.fit(trainX,trainY)
name="XGBoost"
fit(model,name,data)
import scipy as sp
from scipy import special
def find_distance(Parr,Earr,threshold=0.005):
#print Parr
dist_P=np.zeros(len(Parr))+0.5
dist_E=np.zeros(len(Parr))+0.5
for i in xrange(len(Parr)):
for j in xrange(len(Parr)):
if i==j:
continue
if Parr[j]>Parr[i]:
delt_P= np.abs(np.abs(Parr[j]-Parr[i])/Parr[i]-round(np.abs(Parr[j]-Parr[i])/Parr[i]))
else:
delt_P= np.abs(np.abs(Parr[j]-Parr[i])/Parr[j]-round(np.abs(Parr[j]-Parr[i])/Parr[j]))
if dist_P[i]>delt_P:
dist_P[i]=delt_P
delt_E=np.abs(np.abs(Earr[j]-Earr[i])/Parr[i]-round(np.abs(Earr[j]-Earr[i])/Parr[i]))
if dist_E[i]>delt_E:
dist_E[i]=delt_E
if dist_P[i]==0:
dist_P[i]=10
else:
dist_P[i]=np.sqrt(2)*sp.special.erfcinv(dist_P[i])
#print i, dist_P[i]
if dist_E[i]==0:
dist_E[i]=10
else:
dist_E[i]=np.sqrt(2)*sp.special.erfcinv(dist_E[i])
return [dist_P,dist_E]
#add back the magnitude of stars from the catalog
X=Manule_selected.drop(['Ids','CatalogY','ManuleY','CombinedY','Catalog_Period','Depth','Catalog_Epoch','SNR','CatX','CatY'],axis=1)
Pdist,Edist=find_distance(np.array(X['BLS_Period_1_0']),np.array(X['BLS_Tc_1_0']))
X['dist_E']=Edist
X['dist_P']=Pdist
X['diff_mag']=X['Catmag']-X['BLS_OOTmag_1_0']
plt.plot(Pdist,Edist,'.')
trainX, testX, trainY, testY= train_test_split(X, Y,test_size = 0.2)
data=[trainX,trainY,testX,testY,X,Y]
print X.shape, Y[Y==1].shape
model=RandomForestClassifier(n_estimators=5000,class_weight={0:1,1:3},n_jobs=-1)
name="RFC"
fit(model,name,data)
model=GradientBoostingClassifier(n_estimators=3000)
name="GBC"
fit(model,name,data)
from xgboost import XGBClassifier
model = XGBClassifier(n_estimators=3000,max_depth=18,colsample_bytree=1.0,min_child_weight=1.0,learning_rate=0.001)
model.fit(trainX,trainY)
name="XGBoost"
fit(model,name,data)
featurelist=X.columns
rfc= RandomForestClassifier(n_estimators=3000)
rfc.fit(trainX, trainY)
importances = rfc.feature_importances_
std = np.std([tree.feature_importances_ for tree in rfc.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
print indices.shape,trainX.shape
# Print the feature ranking
print("Feature ranking:")
threshold=0.03
droplist=[]
for f in range(X.shape[1]):
try:
if importances[indices[f]]<threshold:
droplist.append(featurelist[indices[f]])
print("%d. feature %d (%s %f)" % (f + 1, indices[f], featurelist[indices[f]],importances[indices[f]]))
except:
pass
X_selected=X.drop(droplist,axis=1)
X_selected.head()
model=RandomForestClassifier(n_estimators=3000,n_jobs=-1,class_weight='balanced_subsample')
name="RFC"
fit(model,name,data)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exploratory data analysis
Step2: The original training data frame has 891 rows. In the starting kit, we give you a subset of 445 rows. Some passengers have missing information
Step3: About two thirds of the passengers perished in the event. A dummy classifier that systematically returns "0" would have an accuracy of 62%, higher than that of a random model.
Step4: Non-linearly transformed data
Step5: Plot the bivariate distributions and marginals of two variables
Step6: Making predictions
Step7: The preprocessor object created with make_column_transformer can be used in a scikit-learn pipeline. A pipeline assembles several steps together and can be used to cross validate an entire workflow. Generally, transformation steps are combined with a final estimator.
Step8: We can cross-validate our pipeline using cross_val_score. Below we specify cv=8, meaning KFold cross-validation splitting will be used, with 8 folds. The Area Under the Receiver Operating Characteristic Curve (ROC AUC) score is calculated for each split. The output will be an array of 8 scores, one per fold; the mean and standard deviation of these scores are printed at the end.
Step9: Testing
Step10: Next we need to fit our pipeline on our training data
Step11: Now we can predict on our test data
Step12: Finally, we can calculate how well our model performed on the test data
Step13: RAMP submissions
Step14: If you take a look at the sample submission in the directory submissions/starting_kit, you will find a file named submission.py, which has the above code in it.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import glob
import numpy as np
from scipy import io
import matplotlib.pyplot as plt
import pandas as pd
train_filename = 'data/train.csv'
data = pd.read_csv(train_filename)
y_train = data['Survived'].values
X_train = data.drop(['Survived', 'PassengerId'], axis=1)
X_train.head(5)
data.describe()
data.count()
data.groupby('Survived').count()
from pandas.plotting import scatter_matrix
scatter_matrix(data.get(['Fare', 'Pclass', 'Age']), alpha=0.2,
figsize=(8, 8), diagonal='kde');
data_plot = data.get(['Age', 'Survived'])
data_plot = data.assign(LogFare=lambda x : np.log(x.Fare + 10.))
scatter_matrix(data_plot.get(['Age', 'LogFare']), alpha=0.2, figsize=(8, 8), diagonal='kde');
data_plot.plot(kind='scatter', x='Age', y='LogFare', c='Survived', s=50, cmap=plt.cm.Paired);
import seaborn as sns
sns.set()
sns.set_style("whitegrid")
sns.jointplot(data_plot.Age[data_plot.Survived == 1],
data_plot.LogFare[data_plot.Survived == 1],
kind="kde", size=7, space=0, color="b");
sns.jointplot(data_plot.Age[data_plot.Survived == 0],
data_plot.LogFare[data_plot.Survived == 0],
kind="kde", size=7, space=0, color="y");
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.impute import SimpleImputer
categorical_cols = ['Sex', 'Pclass', 'Embarked']
numerical_cols = ['Age', 'SibSp', 'Parch', 'Fare']
preprocessor = make_column_transformer(
(OneHotEncoder(handle_unknown='ignore'), categorical_cols),
(SimpleImputer(strategy='constant', fill_value=-1), numerical_cols),
)
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
pipeline = Pipeline([
('transformer', preprocessor),
('classifier', LogisticRegression()),
])
from sklearn.model_selection import cross_val_score
scores = cross_val_score(pipeline, X_train, y_train, cv=8, scoring='roc_auc')
print("mean: %e (+/- %e)" % (scores.mean(), scores.std()))
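# Variation (illustrative, not part of the starting kit): stratified folds
# preserve the survived/perished ratio in each split, which is often
# preferable for an imbalanced binary target.
from sklearn.model_selection import StratifiedKFold
skf = StratifiedKFold(n_splits=8, shuffle=True, random_state=0)
scores_strat = cross_val_score(pipeline, X_train, y_train, cv=skf, scoring='roc_auc')
print("stratified mean: %e (+/- %e)" % (scores_strat.mean(), scores_strat.std()))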
# test_filename = 'data/test.csv'
# data = pd.read_csv(test_filename)
# y_test = data['Survived'].values
# X_test = data.drop(['Survived', 'PassengerId'], axis=1)
# X_test.head(5)
# clf = pipeline.fit(X_train, y_train)
# y_pred = pipeline.predict(X_test)
# from sklearn.metrics import roc_auc_score
# score = roc_auc_score(y_test, y_pred)
# score
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
def get_estimator():
categorical_cols = ['Sex', 'Pclass', 'Embarked']
numerical_cols = ['Age', 'SibSp', 'Parch', 'Fare']
preprocessor = make_column_transformer(
(OneHotEncoder(handle_unknown='ignore'), categorical_cols),
(SimpleImputer(strategy='constant', fill_value=-1), numerical_cols),
)
pipeline = Pipeline([
('transformer', preprocessor),
('classifier', LogisticRegression()),
])
return pipeline
# !ramp_test_submission
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id='process'></a>
Step2: Like RV, RandomProcess only defines the random process. Values of the process can be simulated using the usual simulation tools. Since a stochastic process is a collection of random variables, many of the commands in the previous sections (Random variables, Multiple random variables, Conditioning) are useful when simulating stochastic processes.
Step3: Simulate and plot many sample paths, specifying the range of $t$ values to plot. Note that the darkness of a path represents its relative likelihood.
Step4: <a id='value'></a>
Step5: Find the joint distribution of process values at times 1 and 1.5.
Step6: Find the conditional distribution of $X(1.5)$ given $X(1) = 1)$.
Step7: <a id='mean'></a>
Step8: The variance function maps $t$ to $Var(X(t))$; similarly for the standard deviation function. These functions can be used to give error bands about the mean function.
Step9: <a id='rw'></a>
Step10: The above code defines a random process incrementally. Once a RandomProcess is defined, it can be manipulated the same way, regardless of how it is defined.
|
<ASSISTANT_TASK:>
Python Code:
from symbulate import *
%matplotlib inline
def f(omega, t):
return omega[0] + omega[1] * t
X = RandomProcess(Bernoulli(0.9) * Bernoulli(0.7), TimeIndex(fs=inf), f)
X.sim(1).plot(alpha = 1)
X.sim(100).plot(tmin=0, tmax=2)
def f(omega, t):
return omega[0] * t + omega[1]
X = RandomProcess(Bernoulli(0.9) * Bernoulli(0.7), TimeIndex(fs=inf), f)
X[1.5].sim(10000).plot()
(X[1] & X[1.5]).sim(1000).plot("tile")
(X[1.5] | (X[1] == 1)).sim(10000).plot()
paths = X.sim(1000)
plot(paths)
plot(paths.mean(), 'r')
# This illustrates the functionality, but is not an appropriate example for +/- 2SD
plot(paths)
paths.mean().plot('--')
(paths.mean() + 2 * paths.sd()).plot('--')
(paths.mean() - 2 * paths.sd()).plot('--')
P = Bernoulli(0.5)**inf
Z = RV(P)
X = RandomProcess(P, TimeIndex(fs=1))
X[0] = 0
for n in range(100):
X[n+1] = X[n] + Z[n+1]
X.sim(1).plot(alpha = 1)
X.sim(100).plot(tmin = 0, tmax = 5)
X[5].sim(10000).plot()
(X[5] & X[10]).sim(10000).plot("tile")
(X[10] | (X[5] == 3)).sim(10000).plot()
(X[5] | (X[10] == 4)).sim(10000).plot()
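# Illustrative follow-up (a sketch, assuming the symbulate API used above):
# the simulated mean path of the random walk should track n/2, the mean of
# a Binomial(n, 0.5) sum.
rw_paths = X.sim(1000)
plot(rw_paths)
plot(rw_paths.mean(), 'r')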
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Prevalence of recoded cases
Step2: Feature value distribution
|
<ASSISTANT_TASK:>
Python Code:
# feature descriptives table
desc_file = join(output_dir, '{}_feature_descriptives.{}'.format(experiment_id, file_format))
df_desc = DataReader.read_from_file(desc_file, index_col=0)
HTML(df_desc.to_html(classes=['sortable'], float_format=float_format_func))
outliers_file = join(output_dir, '{}_feature_outliers.{}'.format(experiment_id, file_format))
df_outliers = DataReader.read_from_file(outliers_file, index_col=0)
df_outliers.index.name = 'feature'
df_outliers = df_outliers.reset_index()
df_outliers = pd.melt(df_outliers, id_vars=['feature'])
df_outliers = df_outliers[df_outliers.variable.str.contains(r'[ulb].*?perc')]
# we need to increase the plot height if feature names are long
if longest_feature_name > 10:
height = 3 + math.ceil((longest_feature_name - 10)/10)
else:
height = 3
# we also need a higher aspect if we have more than 40 features
# The aspect defines the final width of the plot (width=aspect*height).
# We keep the width constant (9 for plots with many features or 6
# for plots with few features) by dividing the expected width
# by the height.
aspect = 9/height if len(features_used) > 40 else 6/height
# colors for the plot
colors = sns.color_palette("Greys", 3)
# what's the largest value in the data frame
maxperc = df_outliers['value'].max()
# compute the limits for the graph
limits = (0, max(2.5, maxperc))
with sns.axes_style('whitegrid'):
# create a barplot without a legend since we will manually
# add one later
p = sns.catplot(x="feature", y="value", hue="variable", kind="bar",
palette=colors, data=df_outliers, height=height,
aspect=aspect, legend=False)
p.set_axis_labels('', '% cases truncated\nto mean +/- 4*sd')
p.set_xticklabels(rotation=90)
p.set(ylim=limits)
# add a line at 2%
axis = p.axes[0][0]
axis.axhline(y=2.0, linestyle='--', linewidth=1.5, color='black')
# add a legend with the right colors
legend=axis.legend(('both', 'lower', 'upper'), title='', frameon=True, fancybox=True, ncol=3)
legend.legendHandles[0].set_color(colors[0])
legend.legendHandles[1].set_color(colors[1])
# we want to try to force `tight_layout()`, but if this
# raises a warning, we don't want the entire notebook to fail
with warnings.catch_warnings():
warnings.simplefilter('ignore')
plt.tight_layout(h_pad=1.0)
imgfile = join(figure_dir, '{}_outliers.svg'.format(experiment_id))
plt.savefig(imgfile)
if use_thumbnails:
show_thumbnail(imgfile, next(id_generator))
else:
plt.show()
# feature descriptives extra table
desce_file = join(output_dir, '{}_feature_descriptivesExtra.{}'.format(experiment_id,
file_format))
df_desce = DataReader.read_from_file(desce_file, index_col=0)
HTML(df_desce.to_html(classes=['sortable'], float_format=float_format_func))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
from string import punctuation
from collections import Counter
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
#print(text)
# TODO: Implement Function
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
int_to_vocab = dict(enumerate(vocab, 1))
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
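# Tiny illustrative round-trip (not one of the graded cells):
_v2i, _i2v = create_lookup_tables(['the', 'cat', 'sat', 'the'])
print(_i2v[_v2i['cat']]) # -> 'cat'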
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
tokens_dict = dict([('.','||Period||'), (',','||Comma||'), ('"', '||Quotation_Mark||'),
(';', '||Semicolon||'), ('!', '||Exclamation_Mark||'), ('?', '||Question_Mark||'),
('(', '||Left_Parentheses||'), (')', '||Right_Parentheses||'),
('--', '||Dash||'), ('\n', '||Return||')])
return tokens_dict
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32,[None, None], name='target')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
rnn_layer = 2
# Build a separate cell object per layer; reusing a single LSTM cell in
# MultiRNNCell causes variable-sharing errors in TensorFlow >= 1.1.
cell = tf.contrib.rnn.MultiRNNCell(
[tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(rnn_layer)])
initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name='initial_state')
return (cell, initial_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return (outputs, final_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
embed = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(inputs=outputs,num_outputs=vocab_size,
activation_fn=None,
weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
biases_initializer=tf.zeros_initializer())
return logits,final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
n_batches = int(len(int_text) / (batch_size * seq_length))
# Drop the last few characters to make only full batches
xdata = np.array(int_text[: n_batches * batch_size * seq_length])
ydata = int_text[1: n_batches * batch_size * seq_length]
ydata.append(int_text[0])
ydata = np.array(ydata)
x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)
return np.array(list(zip(x_batches, y_batches)))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
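# Illustrative shape check: with 20 ids, batch_size 2 and seq_length 3 we
# get floor(20 / (2 * 3)) = 3 batches of paired (input, target) blocks.
_b = get_batches(list(range(20)), 2, 3)
print(_b.shape) # (3, 2, 2, 3)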
# Number of Epochs
num_epochs = 200
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 100
# Embedding Dimension Size
embed_dim = 256
# Sequence Length
seq_length = 15
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 100
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
inputs = loaded_graph.get_tensor_by_name("input:0")
initial_state = loaded_graph.get_tensor_by_name("initial_state:0")
final_state = loaded_graph.get_tensor_by_name("final_state:0")
probs = loaded_graph.get_tensor_by_name("probs:0")
return (inputs, initial_state, final_state, probs)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
prob = list(probabilities)
word_id= prob.index(max(prob))
return int_to_vocab[word_id]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
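# Variation (illustrative): sampling from the distribution instead of
# taking the argmax usually yields more varied generated text, e.g.:
# word_id = np.random.choice(len(probabilities), p=probabilities)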
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step 0
Step1: Step 2
Step2: Step 3
Step3: Deployment Option 1
Step4: Model Server Dashboard
Step5: TODO
|
<ASSISTANT_TASK:>
Python Code:
# You may need to Reconnect (more than Restart) the Kernel to pick up changes to these sett
import os
master = '--master spark://127.0.0.1:47077'
conf = '--conf spark.cores.max=1 --conf spark.executor.memory=512m'
packages = '--packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.1'
jars = '--jars /root/lib/jpmml-sparkml-package-1.0-SNAPSHOT.jar'
py_files = '--py-files /root/lib/jpmml.py'
os.environ['PYSPARK_SUBMIT_ARGS'] = master \
+ ' ' + conf \
+ ' ' + packages \
+ ' ' + jars \
+ ' ' + py_files \
+ ' ' + 'pyspark-shell'
print(os.environ['PYSPARK_SUBMIT_ARGS'])
from pyspark.ml import Pipeline
from pyspark.ml.feature import RFormula
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.sql import SparkSession
sparkSession = SparkSession.builder.getOrCreate()
data = sparkSession.read.format("csv") \
.option("inferSchema", "true").option("header", "true") \
.load("hdfs://127.0.0.1:39000/datasets/census/census.csv")
data.head()
formula = RFormula(formula = "income ~ .")
classifier = DecisionTreeClassifier()
pipeline = Pipeline(stages = [formula, classifier])
pipelineModel = pipeline.fit(data)
print(pipelineModel)
print(pipelineModel.stages[1].toDebugString)
from jpmml import toPMMLBytes
pmmlBytes = toPMMLBytes(sparkSession, data, pipelineModel)
print(pmmlBytes.decode("utf-8"))
from urllib import request
update_url = 'http://<your-ip>:39040/update-pmml/pmml_census'
update_headers = {}
update_headers['Content-type'] = 'application/xml'
req = request.Request(update_url, headers=update_headers, data=pmmlBytes)
resp = request.urlopen(req)
print(resp.status) # Should return Http Status 200
from urllib import request
evaluate_url = 'http://<your-ip>:39040/evaluate-pmml/pmml_census'
evaluate_headers = {}
evaluate_headers['Content-type'] = 'application/json'
input_params = '{"age":39,"workclass":"State-gov","education":"Bachelors","education_num":13,"marital_status":"Never-married","occupation":"Adm-clerical","relationship":"Not-in-family","race":"White","sex":"Male","capital_gain":2174,"capital_loss":0,"hours_per_week":40,"native_country":"United-States"}'
encoded_input_params = input_params.encode('utf-8')
req = request.Request(evaluate_url, headers=evaluate_headers, data=encoded_input_params)
resp = request.urlopen(req)
print(resp.read()) # Should return valid classification with probabilities
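# Note (illustrative): the response body is JSON; on a fresh response it
# could be decoded for programmatic use, e.g.:
# import json
# prediction = json.loads(resp.read().decode('utf-8'))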
!mkdir -p /root/src/pmml/census/
with open('/root/src/pmml/census/pmml_census.pmml', 'wb') as f:
f.write(pmmlBytes)
!ls /root/src/pmml/census/pmml_census.pmml
!start-loadtest.sh $SOURCE_HOME/loadtest/RecommendationServiceStressTest-local-census.jmx
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: The problems
Step6: Solving problem 1
Step8: Properly structuring the OR clause takes away the problem of getting different results depending on the order of the search terms
Step11: Option 2
Step13: Option 3
Step15: Solving problem 2
Step17: Or in its simpler form
Step19: This does not seem to be affected by whether you write cql.proxinfo or just proxinfo (the cql. prefix comes from the CQL specification, if I am not mistaken).
Step22: The counts are now correct
|
<ASSISTANT_TASK:>
Python Code:
# coding: utf-8
import os
from cheshire3.baseObjects import Session
from cheshire3.document import StringDocument
from cheshire3.internal import cheshire3Root
from cheshire3.server import SimpleServer
session = Session()
session.database = 'db_dickens'
serv = SimpleServer(session, os.path.join(cheshire3Root, 'configs', 'serverConfig.xml'))
db = serv.get_object(session, session.database)
qf = db.get_object(session, 'defaultQueryFactory')
resultSetStore = db.get_object(session, 'resultSetStore')
idxStore = db.get_object(session, 'indexStore')
# This is the query that is currently being used.
# The count is the number of chapters
query = qf.get_query(session,
((c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx = "fog") or c3.chapter-idx = "dense")
)
result_set = db.search(session, query)
print len(result_set)
# To get a more speficic count one also needs to include the numbers of hits
# in the different chapters
def count_total(result_set):
Helper function to count the total number of hits
in the search results
count = 0
for result in result_set:
count += len(result.proxInfo)
return count
count_total(result_set)
def try_query(query):
Another helper function to take a query and return
the total number of hits
query = qf.get_query(session, query)
result_set = db.search(session, query)
return count_total(result_set)
try_query(
((c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx = "dense") or c3.chapter-idx = "fog")
)
try_query(
(c3.subcorpus-idx all "dickens" and/cql.proxinfo (c3.chapter-idx = "dense" or c3.chapter-idx = "fog"))
)
try_query(
(c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx any "dense fog")
)
try_query(
(c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx any "fog dense")
)
try_query(
((c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx = "dense") or
(c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx = "fog"))
)
try_query(
(c3.subcorpus-idx all "dickens" and/proxinfo (c3.chapter-idx = "dense" or/proxinfo c3.chapter-idx = "fog"))
)
try_query(
(c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx any/proxinfo "fog dense")
)
try_query(
(c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx any/cql.proxinfo "fog dense")
)
dense = try_query((c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx = "dense"))
print dense
fog = try_query((c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx = "fog"))
print fog
dense + fog
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: EEG
|
<ASSISTANT_TASK:>
Python Code:
# Author: Padma Sundaram <tottochan@gmail.com>
# Kaisu Lankinen <klankinen@mgh.harvard.edu>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
import numpy as np
import matplotlib.pyplot as plt
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
# Read data
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname_evoked, condition='Left Auditory',
baseline=(None, 0))
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif'
fwd = mne.read_forward_solution(fname_fwd)
cov = mne.read_cov(fname_cov)
# Read inverse operator:
inv_op = make_inverse_operator(evoked.info, fwd, cov, fixed=True, verbose=True)
# Calculate MNE:
snr = 3.0
lambda2 = 1.0 / snr ** 2
stc = apply_inverse(evoked, inv_op, lambda2, 'MNE', verbose=True)
# Calculate SNR in source space:
snr_stc = stc.estimate_snr(evoked.info, fwd, cov)
# Plot an average SNR across source points over time:
ave = np.mean(snr_stc.data, axis=0)
fig, ax = plt.subplots()
ax.plot(evoked.times, ave)
ax.set(xlabel='Time (sec)', ylabel='SNR MEG-EEG')
fig.tight_layout()
# Find time point of maximum SNR:
maxidx = np.argmax(ave)
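# Illustrative: report when the average SNR peaks (expected shortly after
# the auditory stimulus in this sample dataset).
print('Maximum average SNR at %0.3f s' % evoked.times[maxidx])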
# Plot SNR on source space at the time point of maximum SNR:
kwargs = dict(initial_time=evoked.times[maxidx], hemi='split',
views=['lat', 'med'], subjects_dir=subjects_dir, size=(600, 600),
clim=dict(kind='value', lims=(-100, -70, -40)),
transparent=True, colormap='viridis')
brain = snr_stc.plot(**kwargs)
evoked_eeg = evoked.copy().pick_types(eeg=True, meg=False)
inv_op_eeg = make_inverse_operator(evoked_eeg.info, fwd, cov, fixed=True,
verbose=True)
stc_eeg = apply_inverse(evoked_eeg, inv_op_eeg, lambda2, 'MNE', verbose=True)
snr_stc_eeg = stc_eeg.estimate_snr(evoked_eeg.info, fwd, cov)
brain = snr_stc_eeg.plot(**kwargs)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the next cell, read in the data using CSV. You do not (yet) need to apply any data-type conversions; what needs to come out of this is a dict object (call this object "data") which is composed of lists, one for every header.
Step2: The column Location is not in a useful form. You may use the code below to convert it to Latitude and Longitude.
Step3: Below, construct a dataset class. You will see stub functions. You may reuse what we have worked on in class, or you may develop something new.
Step4: We will now test that this works by filtering out a given tree species.
Step5: Visualization Assignment
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import csv
fn = "/home/huilyu2/work/Trees_Owned_by_the_City_of_Champaign.csv"
# /home/huilyu2/work/Trees_Owned_by_the_City_of_Champaign.csv
# YOUR CODE HERE
data = {}
with open(fn, "r") as f:
reader = csv.reader(f)
header = next(reader)
for column in header:
data[column] = []
for row in reader:
for column, value in zip(header,row):
data[column].append(value)
np.testing.assert_equal(len(data), 6)
for col in data:
np.testing.assert_equal(len(data[col]), 26940)
data["Latitude"] = []
data["Longitude"] = []
for loc in data["Location"]:
lat, lon = loc.split(",")
data["Latitude"].append(float(lat[1:]))
data["Longitude"].append(float(lon[:-1]))
class Dataset:
def __init__(self, data):
self.data = data.copy()
def convert(self, column, dtype):
# YOUR CODE HERE
self.data[column] = np.array(self.data[column], dtype=dtype)
#raise NotImplementedError()
def columns(self):
# YOUR CODE HERE
return self.data.keys()
#raise NotImplementedError()
def filter_eq(self, column, value):
# YOUR CODE HERE
good = (self.data[column] == value)
new_data = {}
for column in self.data:
new_data[column] = self.data[column][good]
return Dataset(new_data)
#raise NotImplementedError()
def filter_lt(self, column, value):
# YOUR CODE HERE
good = (self.data[column] < value)
new_data = {}
for column in self.data:
new_data[column] = self.data[column][good]
return Dataset(new_data)
#raise NotImplementedError()
def filter_gt(self, column, value):
# YOUR CODE HERE
good = (self.data[column] > value)
new_data = {}
for column in self.data:
new_data[column] = self.data[column][good]
return Dataset(new_data)
#raise NotImplementedError()
def filter_ne(self, column, value):
# YOUR CODE HERE
good = (self.data[column] != value)
new_data = {}
for column in self.data:
new_data[column] = self.data[column][good]
return Dataset(new_data)
#raise NotImplementedError()
def size(self):
# YOUR CODE HERE
for key in self.data:
return self.data[key].size
#raise NotImplementedError()
def split(self, column):
# YOUR CODE HERE
new_datasets = {}
for split_value in np.unique(self.data[column]):
new_datasets[split_value] = self.filter_eq(column, split_value)
return new_datasets
#raise NotImplementedError()
def stats(self):
statistics = {}
for key in self.data:
if self.data[key].dtype not in ("float", "int"):
continue
values = self.data[key]
statistics[key] = (values.min(), values.max(), values.std(), values.mean())
return statistics
def compare(self, other):
stats1 = self.stats()
stats2 = other.stats()
for column in self.columns():
if column not in stats1: continue
print("Column '{0:25s}'".format(column))
for s1, s2 in zip(stats1[column], stats2[column]):
print(" {0} vs {1}".format(s1, s2))
def plot(self, x_column, y_column):
plt.plot(self.data[x_column], self.data[y_column], '.')
trees = Dataset(data)
value_types = {'Number of Trunks': 'int',
'Diameter at Breast Height (in Feet)': 'float',
'Latitude': 'float',
'Longitude': 'float'}
for v in trees.columns():
trees.convert(v, value_types.get(v, "str"))
trees.columns()
trees.stats()
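# Illustrative use of split() (assumes the conversions above): one
# sub-dataset per species, keyed by the species name.
by_species = trees.split("Tree Species")
print(len(by_species))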
quercus_bicolor = trees.filter_eq("Tree Species", "Quercus bicolor")
np.testing.assert_equal(quercus_bicolor.size(), 1239)
stats = quercus_bicolor.stats()
np.testing.assert_equal(stats['Diameter at Breast Height (in Feet)'],
(1.0,
44.0,
4.8951987699249253,
8.3123486682808725))
np.testing.assert_equal(stats['Latitude'],
(40.070883672699999,
40.156343598100001,
0.020057649584011371,
40.106116841942452))
np.testing.assert_equal(stats['Longitude'],
(-88.332447309000003,
-88.229326890500005,
0.023900192692977454,
-88.2782118899284))
np.testing.assert_equal(stats['Number of Trunks'],
(0,
2,
0.063479522525377516,
0.99757869249394671))
from collections import Counter
def first_graph(dataset,name):
if name == "All Trees":
vacant_trees = dataset.filter_eq("Tree Species", "vacant site large")
non_vacant_trees = dataset.filter_ne("Tree Species", "vacant site large")
plt.plot(non_vacant_trees.data['Longitude'], non_vacant_trees.data['Latitude'],".",label= "non-vacant site",alpha = 0.5)
plt.plot(vacant_trees.data['Longitude'], vacant_trees.data['Latitude'],".",label= "vacant site large",alpha = 0.5)
plt.xlabel("Longitude",fontsize=20)
plt.ylabel("Latitude",fontsize=20)
plt.title("Location of {}".format(name),fontsize=22)
plt.grid(True)
plt.legend()
else:
plt.plot(dataset.data['Longitude'], dataset.data['Latitude'],".",alpha = 0.5)
plt.xlabel("Longitude",fontsize=20)
plt.ylabel("Latitude",fontsize=20)
plt.title("Location of {}".format(name),fontsize=22)
plt.grid(True)
count = Counter(trees.filter_ne("Tree Species", "vacant site large").data["Tree Species"])
count.most_common(5)
def second_graph(dataset,name):
count = Counter(dataset.data["Tree Species"])
most_common = count.most_common(5)
most_common_dict = dict((str(k), v) for k, v in most_common)
number_of_trunk = {}
for key in most_common_dict.keys():
number_of_trunk[key] = dataset.filter_eq("Tree Species",key).data["Number of Trunks"]
n, bins, patches = plt.hist(number_of_trunk.values(), stacked = True, log = True, edgecolor = 'gray',bins = np.arange(10),align='left', label = number_of_trunk.keys(),alpha=0.75)
# refer to Stack Overflow
# http://stackoverflow.com/questions/18449602/matplotlib-creating-stacked-histogram-from-three-unequal-length-arrays
plt.legend()
plt.xticks(np.arange(10))
plt.xlabel("Number of Trunks of Each Tree",fontsize=20)
plt.ylabel("Number of Trees",fontsize=20)
plt.ylim(10**(-1),10**4)
plt.title("Distribution of tree trunk count for\nfive most common species in {}".format(name),fontsize=22)
def third_graph(dataset,name):
most_common = Counter(dataset.data["Tree Species"]).most_common(1)
most_common_dict = dict((str(k), v) for k, v in most_common)
dataset = dataset.filter_eq("Tree Species", list(most_common_dict)[0])
# The following code is referred to the in-class notes.
stats = dataset.stats()
lon_min = stats["Longitude"][0]
lon_max = stats["Longitude"][1]
lat_min = stats["Latitude"][0]
lat_max = stats["Latitude"][1]
def bin_trees(num_bins = 16):
lon = np.mgrid[lon_min:lon_max:(num_bins+1)*1j]
lat = np.mgrid[lat_min:lat_max:(num_bins+1)*1j]
tree_count = np.zeros((num_bins, num_bins))
for i in range(num_bins):
left_lat = lat[i]
right_lat = lat[i+1]
filter_lat_left = dataset.filter_gt("Latitude", left_lat)
filter_lat_right = filter_lat_left.filter_lt("Latitude", right_lat)
for j in range(num_bins):
left_lon = lon[j]
right_lon = lon[j+1]
filter_lon_left = filter_lat_right.filter_gt("Longitude", left_lon)
filter_lon_right = filter_lon_left.filter_lt("Longitude", right_lon)
tree_count[i,j] += filter_lon_right.size()
plt.imshow(tree_count, extent=(lon_min,lon_max,lat_min,lat_max), origin="lower")
plt.xlabel("Longitude",fontsize=20)
plt.ylabel("Latitude",fontsize=20)
plt.title("Most common species '{}'\nacross spatial ranges in {}".format(list(most_common_dict)[0],name), fontsize=22)
color_bar = plt.colorbar()
color_bar.set_label("Count")
bin_trees()
#import ipywidgets
#ipywidgets.interact(bin_trees, num_bins = (4,128))
def fourth_graph(dataset,name):
min_number = dataset.data["Diameter at Breast Height (in Feet)"].min()
max_number = dataset.data["Diameter at Breast Height (in Feet)"].max()
bins = np.mgrid[min_number:max_number+10:5]
plt.hist(dataset.data["Diameter at Breast Height (in Feet)"], alpha = 0.75,bins=bins,edgecolor = 'k')
plt.xlabel("Diameter at Breast Height (in Feet) of Each Tree",fontsize=20)
plt.ylabel("Number of Trees",fontsize=20)
plt.title("Distribution of tree trunk breadth in {}".format(name),fontsize=22)
plt.xticks(bins)
plt.rcParams["figure.figsize"] = (20, 20)
def make_plots(dataset, name = ""):
# YOUR CODE HERE
if (name == "All Trees") | (name == "Non-Vacant Site"):
plt.subplot(2,2,1)
first_graph(dataset,name)
plt.subplot(2,2,2)
second_graph(dataset,name)
plt.subplot(2,2,3)
third_graph(dataset,name)
plt.subplot(2,2,4)
fourth_graph(dataset,name)
else:
plt.subplot(2,2,1)
first_graph(dataset,name)
plt.subplot(2,2,2)
max_number = dataset.data["Number of Trunks"].max()
plt.hist(dataset.data["Number of Trunks"],log=True,edgecolor='k', bins = np.arange(max_number+3),align='left',alpha=0.75)
plt.title("Distribution of tree trunk count\nfor the species {}".format(name),fontsize=22)
plt.xlabel("Number of Trunks of Each Tree",fontsize=20)
plt.ylabel("Number of Trees",fontsize=20)
plt.xticks(np.arange(max_number+3))
plt.subplot(2,2,3)
# The following code is referred to the in-class notes.
stats = dataset.stats()
lon_min = stats["Longitude"][0]
lon_max = stats["Longitude"][1]
lat_min = stats["Latitude"][0]
lat_max = stats["Latitude"][1]
def bin_trees(num_bins = 16):
lon = np.mgrid[lon_min:lon_max:(num_bins+1)*1j]
lat = np.mgrid[lat_min:lat_max:(num_bins+1)*1j]
tree_count = np.zeros((num_bins, num_bins))
for i in range(num_bins):
left_lat = lat[i]
right_lat = lat[i+1]
filter_lat_left = dataset.filter_gt("Latitude", left_lat)
filter_lat_right = filter_lat_left.filter_lt("Latitude", right_lat)
for j in range(num_bins):
left_lon = lon[j]
right_lon = lon[j+1]
filter_lon_left = filter_lat_right.filter_gt("Longitude", left_lon)
filter_lon_right = filter_lon_left.filter_lt("Longitude", right_lon)
tree_count[i,j] += filter_lon_right.size()
plt.imshow(tree_count, extent=(lon_min,lon_max,lat_min,lat_max), origin="lower")
plt.xlabel("Longitude",fontsize=20)
plt.ylabel("Latitude",fontsize=20)
plt.title("Tree species '{}'\nacross spatial ranges".format(name), fontsize=22)
color_bar = plt.colorbar()
color_bar.set_label("Count")
bin_trees()
plt.subplot(2,2,4)
fourth_graph(dataset,name)
make_plots(trees, "All Trees")
make_plots(trees.filter_eq("Tree Species", "Acer rubrum"), "Acer rubrum")
make_plots(trees.filter_ne("Tree Species", "vacant site large"), "Non-Vacant Site")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise
Step3: (this takes a long time...)
|
<ASSISTANT_TASK:>
Python Code:
$ unzip kocham.zip
$ cd kocham
$ python setup.py install
a toy password cracker
import time
import itertools
from multiprocess.dummy import Pool
import kocham.imap as imap
import kocham.corpus as corpus
stopwords = corpus.stopwords
ipassword = corpus.ipassword
compare = imap.login
# turn on verbosity
corpus.VERBOSE = True
# set the password to "solve"
imap.setpass('catdog@123')
# set a delay, to simulate connection time to a server
imap.setdelay(0.0)
# select a list of possible words and a possible set of characters
words = ['cat','dog','horse','apple','foo','bar','python','phobia']
chars = '1234567890!@#$'
# configure the minimum and maximum password length
args = (words, chars, 8, 10)
# configure the minimum and maximum word length, and the maximum number of words
kwds = dict(minword=3, maxword=8, size=2)
# build the password generator
passwd = ipassword(flatten=True, *args, **kwds)
start = time.time()
# solve
for p in passwd:
x = compare(p)
if x:
print x
break
end = time.time() - start
print "finished in: %s" % end
# rebuild the password generator
passwd = ipassword(flatten=True, *args, **kwds)
start = time.time()
# solve
for x in itertools.imap(compare, passwd):
if x:
print x
break
end = time.time() - start
print "finished in: %s" % end
# rebuild the password generator
passwd = ipassword(flatten=True, *args, **kwds)
start = time.time()
# solve
tp = Pool(50)
for x in tp.imap_unordered(compare, passwd):
if x:
print x
break
tp.close()
tp.join()
end = time.time() - start
print "finished in: %s" % end
some useful parallel iterated map constructs
def icompare(pwds):
import itertools
res = itertools.imap(compare, pwds)
for x in res:
if x:
return x
return
def uicompare(pwds, n=50):
from multiprocess.dummy import Pool
tp = Pool(n)
res = tp.imap_unordered(compare, pwds)
for x in res:
if x:
return x
return
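# Illustrative use of the helpers above (commented out so the notebook
# does not re-run the full search):
# passwd = ipassword(flatten=True, *args, **kwds)
# print uicompare(passwd, n=50)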
# rebuild the password generator, this time don't flatten to a single interator
passwd = ipassword(flatten=False, *args, **kwds)
start = time.time()
# solve
for p in passwd:
for i in p:
x = compare(i)
if x:
print x
break
if x:
break
end = time.time() - start
print "finished in: %s" % end
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We're going to extract images representing the bins in the spike
Step2: <img src='spike0_0.bmp'>
|
<ASSISTANT_TASK:>
Python Code:
# Spike images
from mpl_toolkits.mplot3d import axes3d
import numpy as np
import urllib2
import scipy.stats as stats
import matplotlib.pyplot as plt
from image_builder import get_image
np.set_printoptions(precision=3, suppress=True)
url = ('https://raw.githubusercontent.com/Upward-Spiral-Science'
'/data/master/syn-density/output.csv')
data = urllib2.urlopen(url)
csv = np.genfromtxt(data, delimiter=",")[1:] # don't want first row (labels)
# chopping data based on thresholds on x and y coordinates
x_bounds = (409, 3529)
y_bounds = (1564, 3124)
def check_in_bounds(row, x_bounds, y_bounds):
if row[0] < x_bounds[0] or row[0] > x_bounds[1]:
return False
if row[1] < y_bounds[0] or row[1] > y_bounds[1]:
return False
if row[3] == 0:
return False
return True
indices_in_bound, = np.where(np.apply_along_axis(check_in_bounds, 1, csv,
x_bounds, y_bounds))
data_thresholded = csv[indices_in_bound]
n = data_thresholded.shape[0]
def synapses_over_unmasked(row):
s = (row[4]/row[3])*(64**3)
return [row[0], row[1], row[2], s]
syn_unmasked = np.apply_along_axis(synapses_over_unmasked, 1, data_thresholded)
syn_normalized = syn_unmasked
a = np.apply_along_axis(lambda x:x[4]/x[3], 1, data_thresholded)
spike = a[np.logical_and(a <= 0.0015, a >= 0.0012)]
n, bins, _ = plt.hist(spike, 2000)
bin_max = np.where(n == n.max())
bin_width = bins[1]-bins[0]
syn_normalized[:,3] = syn_normalized[:,3]/(64**3)
spike = syn_normalized[np.logical_and(syn_normalized[:,3] <= 0.00131489435301+bin_width, syn_normalized[:,3] >= 0.00131489435301-bin_width)]
spike_thres = data_thresholded[np.logical_and(syn_normalized[:,3] <= 0.00131489435301+bin_width, syn_normalized[:,3] >= 0.00131489435301-bin_width)]
len_spike = len(spike_thres)
# Compare some of the bins representing the spike
xs = np.unique(spike_thres[:,0])
ys = np.unique(spike_thres[:,1])
name = 'spike'
get_image((0,10),(0,10),xs,ys,name)
%matplotlib inline
unique_x = np.unique(spike_thres[:,0])
unique_y = np.unique(spike_thres[:,1])
unique_z = np.unique(spike_thres[:,2])
x_sum = [0] * len(unique_x)
for i in range(len(unique_x)):
x_sum[i] = sum(spike_thres[spike_thres[:,0]==unique_x[i]][:,4])
y_sum = [0] * len(unique_y)
for i in range(len(unique_y)):
y_sum[i] = sum(spike_thres[spike_thres[:,1]==unique_y[i]][:,4])
z_sum = [0] * len(unique_z)
for i in range(len(unique_z)):
z_sum[i] = sum(spike_thres[spike_thres[:,2]==unique_z[i]][:,4])
plt.figure(figsize=(28,7))
plt.subplot(131)
plt.bar(unique_x, x_sum, 1)
plt.xlim(450, 3600)
plt.ylabel('density in synapses/voxel',fontsize=20)
plt.xlabel('x-coordinate',fontsize=20)
plt.title('Total Density across Each X-Layer',fontsize=20)
plt.subplot(132)
plt.bar(unique_y, y_sum, 1)
plt.xlim(1570, 3190)
plt.ylabel('density in synapses/voxel',fontsize=20)
plt.xlabel('y-coordinate',fontsize=20)
plt.title('Total Density across Each Y-Layer',fontsize=20)
plt.subplot(133)
plt.bar(unique_z, z_sum, 1)
plt.ylabel('density in synapses/voxel',fontsize=20)
plt.xlabel('z-coordinate',fontsize=20)
plt.title('Total Density across Each Z-Layer',fontsize=20)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
Step2: Great! We'll be using the numbers list you created above in the next few problems.
Step3: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output
Step4: Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output
Step5: Problem set #2
Step6: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a radius greater than four earth radii. Expected output
Step7: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output
Step8: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output
Step9: EXTREME BONUS ROUND
Step10: Problem set #3
Step11: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.
Step12: Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint
Step13: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
Step14: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint
Step15: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
Step16: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.
|
<ASSISTANT_TASK:>
Python Code:
numbers_str = '496,258,332,550,506,699,7,985,171,581,436,804,736,528,65,855,68,279,721,120'
values = numbers_str.split(",")
numbers = [int(i) for i in values]
# numbers
max(numbers)
#test
print(sorted(numbers))
sorted(numbers)[10:]
[i for i in sorted(numbers) if i%3 == 0]
import math
from math import sqrt
[math.sqrt(i) for i in sorted(numbers) if i < 100]
planets = [
{'diameter': 0.382,
'mass': 0.06,
'moons': 0,
'name': 'Mercury',
'orbital_period': 0.24,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.949,
'mass': 0.82,
'moons': 0,
'name': 'Venus',
'orbital_period': 0.62,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 1.00,
'mass': 1.00,
'moons': 1,
'name': 'Earth',
'orbital_period': 1.00,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.532,
'mass': 0.11,
'moons': 2,
'name': 'Mars',
'orbital_period': 1.88,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 11.209,
'mass': 317.8,
'moons': 67,
'name': 'Jupiter',
'orbital_period': 11.86,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 9.449,
'mass': 95.2,
'moons': 62,
'name': 'Saturn',
'orbital_period': 29.46,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 4.007,
'mass': 14.6,
'moons': 27,
'name': 'Uranus',
'orbital_period': 84.01,
'rings': 'yes',
'type': 'ice giant'},
{'diameter': 3.883,
'mass': 17.2,
'moons': 14,
'name': 'Neptune',
'orbital_period': 164.8,
'rings': 'yes',
'type': 'ice giant'}]
earth_diameter = planets[2]['diameter']
# Earth's radius is half its diameter, so comparing diameters (both given
# in Earth units) is equivalent to comparing radii.
[i['name'] for i in planets if i['diameter'] > earth_diameter*4]
mass_list = []
for planet in planets:
outcome = planet['mass']
mass_list.append(outcome)
total = sum(mass_list)
total
[i['name'] for i in planets if 'giant' in i['type']]
#Done in class
import re
poem_lines = ['Two roads diverged in a yellow wood,',
'And sorry I could not travel both',
'And be one traveler, long I stood',
'And looked down one as far as I could',
'To where it bent in the undergrowth;',
'',
'Then took the other, as just as fair,',
'And having perhaps the better claim,',
'Because it was grassy and wanted wear;',
'Though as for that the passing there',
'Had worn them really about the same,',
'',
'And both that morning equally lay',
'In leaves no step had trodden black.',
'Oh, I kept the first for another day!',
'Yet knowing how way leads on to way,',
'I doubted if I should ever come back.',
'',
'I shall be telling this with a sigh',
'Somewhere ages and ages hence:',
'Two roads diverged in a wood, and I---',
'I took the one less travelled by,',
'And that has made all the difference.']
[line for line in poem_lines if re.search(r"\b\w{4}\b\s\b\w{4}\b", line)]
[line for line in poem_lines if re.search(r"(?:\s\w{5}\b$|\s\w{5}\b[.:;,]$)", line)]
all_lines = " ".join(poem_lines)
[item[2:] for item in (re.findall(r"\bI\b\s\b[a-z]{1,}", all_lines))]
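# Equivalent approach using a capture group, which avoids the [2:] slicing:
re.findall(r"\bI\s([a-z]+)", all_lines)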
entrees = [
"Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95",
"Lavender and Pepperoni Sandwich $8.49",
"Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v",
"Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v",
"Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce $19.95",
"Rutabaga And Cucumber Wrap $8.49 - v"
]
menu = []
for dish in entrees:
match = re.search(r"^(.*) \$(.*)", dish)
vegetarian = re.search(r"v$", match.group(2))
price = re.search(r"(?:\d\.\d\d|\d\d\.\d\d)", dish)
if vegetarian == None:
vegetarian = False
else:
vegetarian = True
if match:
dish = {
'name': match.group(1), 'price': price.group(), 'vegetarian': vegetarian
}
menu.append(dish)
menu
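# Example use of the parsed menu: list only the vegetarian dishes.
[dish['name'] for dish in menu if dish['vegetarian']]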
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's first check whether we have the dataset available
Step3: Let's select only what's of interest to us
Step4: Let's see what we've got
Step5: Let's add the Sentiment column that turns the numeric score into either positive or negative.
Step6: Similarly, the Usefulness column turns the number of votes into either useful or useless using the formula
Step7: Let's have a look at some 5s
Step8: And some 1s as well
Step9: Let's do some exploratory data analysis with WordClouds!
Step10: We can also view wordclouds for only positive or only negative entries
Step11: Extracting features from text data
Step12: Let's explore a bit what those did
Step13: Let's now find the counts for "gluten" and the phrases that contain that
Step14: Create a Multinomial Naïve Bayes model
Step15: Create a Bernoulli Naïve Bayes model
Step16: Create a Logistic Regression model
Step17: Create a Linear SVC model
Step18: Analyzing Results
Step19: After plotting the ROC curve, it would appear that the Logistic regression method provides us with the best results, although the AUC value for this method is not outstanding...
Step20: Let's also have a look at what the best & words are by looking at the coefficients
Step21: Now let's try to predict how helpful a review is
Step22: Nothing seems to pop out.. let's try to limit the dataset to only entries with at least 10 votes.
Step23: Now let's try again with the word clouds
Step24: This seems a bit better, let's see if we can build a model though
Step25: Let's also see which of the reviews are rated by our model as most helpful and least helpful
Step26: Even more complicated pipeline
Step27: Again, let's have a look at the best/worst words
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import sqlite3
import pandas as pd
import numpy as np
import nltk
import string
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn import metrics
from sklearn.metrics import roc_curve, auc
from nltk.stem.porter import PorterStemmer
import os
from IPython.core.display import display, HTML
if not os.path.isfile('database.sqlite'):
display(HTML("<h3 style='color: red'>Dataset database missing!</h3><h3> Please download it "+
"<a href='https://www.kaggle.com/snap/amazon-fine-food-reviews'>from here on Kaggle</a> "+
"and extract it to the current directory."))
raise(Exception("missing dataset"))
con = sqlite3.connect('database.sqlite')
pd.read_sql_query("SELECT * FROM Reviews LIMIT 3", con)
messages = pd.read_sql_query("""
SELECT
  Score,
  Summary,
  HelpfulnessNumerator as VotesHelpful,
  HelpfulnessDenominator as VotesTotal
FROM Reviews
WHERE Score != 3""", con)
messages.head(5)
messages["Sentiment"] = messages["Score"].apply(lambda score: "positive" if score > 3 else "negative")
messages.head(2)
messages["Usefulness"] = TODO
messages.head(2)
messages[messages.Score == 5].head(10)
# Select some reviews with score 1
messages[messages.Score == 1].head(10)
from wordcloud import WordCloud, STOPWORDS
# Note: you need to install wordcloud with pip.
# On Windows, you might need a binary package obtainable from here: http://www.lfd.uci.edu/~gohlke/pythonlibs/#wordcloud
stopwords = set(STOPWORDS)
#mpl.rcParams['figure.figsize']=(8.0,6.0) #(6.0,4.0)
mpl.rcParams['font.size']=12 #10
mpl.rcParams['savefig.dpi']=100 #72
mpl.rcParams['figure.subplot.bottom']=.1
def show_wordcloud(data, title = None):
wordcloud = WordCloud(
background_color='white',
stopwords=stopwords,
max_words=200,
max_font_size=40,
scale=3,
random_state=1 # chosen at random by flipping a coin; it was heads
).generate(str(data))
fig = plt.figure(1, figsize=(8, 8))
plt.axis('off')
if title:
fig.suptitle(title, fontsize=20)
fig.subplots_adjust(top=2.3)
plt.imshow(wordcloud)
plt.show()
show_wordcloud(messages["Summary_Clean"])
# Word cloud from negative reviews only
show_wordcloud(messages[messages.Sentiment == "negative"]["Summary_Clean"], title="Negative Sentiment")
# Word cloud from positive reviews only
show_wordcloud(messages[messages.Sentiment == "positive"]["Summary_Clean"], title="Positive Sentiment")
# first do some cleanup
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
import re
import string
cleanup_re = re.compile('[^a-z]+')
def cleanup(sentence):
sentence = sentence.lower()
sentence = cleanup_re.sub(' ', sentence).strip()
return sentence
messages["Summary_Clean"] = messages["Summary"].apply(cleanup)
train, test = train_test_split(messages, test_size=0.2)
print("%d items in training data, %d in test data" % (len(train), len(test)))
# To cleanup stop words, add stop_words = STOPWORDS
# But it seems to function better without it
count_vect = CountVectorizer(min_df = 1, ngram_range = (1, 4))
X_train_counts = count_vect.fit_transform(train["Summary_Clean"])
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_test_counts = count_vect.transform(test["Summary_Clean"])
X_test_tfidf = tfidf_transformer.transform(X_test_counts)
y_train = train["Sentiment"]
y_test = test["Sentiment"]
# prepare
prediction = dict()
word_features = count_vect.get_feature_names()
word_features[10000:10010]
chosen_word_idx = 99766
chosen_word_indices = np.nonzero(X_train_counts[:,chosen_word_idx].toarray().ravel())[0]
for i in chosen_word_indices[0:10]:
print("'%s' appears %d times in: %s" % (
word_features[chosen_word_idx],
X_train_counts[i,chosen_word_idx],
train["Summary"].values[i]
))
#TODO: find the counts for "gluten" and the reviews it appears in
from sklearn.naive_bayes import MultinomialNB
model = MultinomialNB().fit(X_train_tfidf, y_train)
prediction['Multinomial'] = model.predict(X_test_tfidf)
from sklearn.naive_bayes import BernoulliNB
model = BernoulliNB().fit(X_train_tfidf, y_train)
prediction['Bernoulli'] = model.predict(X_test_tfidf)
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression(C=1e5)
logreg_result = logreg.fit(X_train_tfidf, y_train)
prediction['Logistic'] = logreg.predict(X_test_tfidf)
from sklearn.svm import LinearSVC
linsvc = LinearSVC(C=1e5)
linsvc_result = linsvc.fit(X_train_tfidf, y_train)
prediction['LinearSVC'] = linsvc.predict(X_test_tfidf)
def formatt(x):
if x == 'negative':
return 0
return 1
vfunc = np.vectorize(formatt)
cmp = 0
colors = ['b', 'g', 'y', 'm', 'k']
for model, predicted in prediction.items():
false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test.map(formatt), vfunc(predicted))
roc_auc = auc(false_positive_rate, true_positive_rate)
plt.plot(false_positive_rate, true_positive_rate, colors[cmp], label='%s: AUC %0.2f'% (model,roc_auc))
cmp += 1
plt.title('Classifiers comparison with ROC')
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.2])
plt.ylim([-0.1,1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
for model_name in ["Logistic", "LinearSVC"]:
print("Confusion matrix for %s" % model_name)
print(metrics.classification_report(y_test, prediction[model_name], target_names = ["positive", "negative"]))
print()
def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues, labels=["negative", "positive"]):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(labels))
plt.xticks(tick_marks, labels, rotation=45)
plt.yticks(tick_marks, labels)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Compute confusion matrix
cm = confusion_matrix(y_test, prediction['Logistic'])
np.set_printoptions(precision=2)
plt.figure()
plot_confusion_matrix(cm)
cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
plt.figure()
plot_confusion_matrix(cm_normalized, title='Normalized confusion matrix')
plt.show()
words = count_vect.get_feature_names()
feature_coefs = pd.DataFrame(
data = list(zip(words, logreg_result.coef_[0])),
columns = ['feature', 'coef'])
feature_coefs.sort_values(by='coef')
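# The most negative and most positive words, respectively (just the two
# ends of the sorted coefficients shown above):
print(feature_coefs.sort_values(by='coef').head(5))
print(feature_coefs.sort_values(by='coef').tail(5))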
def test_sample(model, sample):
sample_counts = count_vect.transform([sample])
sample_tfidf = tfidf_transformer.transform(sample_counts)
result = model.predict(sample_tfidf)[0]
prob = model.predict_proba(sample_tfidf)[0]
print("Sample estimated as %s: negative prob %f, positive prob %f" % (result.upper(), prob[0], prob[1]))
test_sample(logreg, "The food was delicious, it smelled great and the taste was awesome")
test_sample(logreg, "The whole experience was horrible. The smell was so bad that it literally made me sick.")
test_sample(logreg, "The food was ok, I guess. The smell wasn't very good, but the taste was ok.")
show_wordcloud(messages[messages.Usefulness == "useful"]["Summary_Clean"], title = "Useful")
show_wordcloud(messages[messages.Usefulness == "useless"]["Summary_Clean"], title = "Useless")
messages_ufn = messages[messages.VotesTotal >= 10]
messages_ufn.head()
show_wordcloud(messages_ufn[messages_ufn.Usefulness == "useful"]["Summary_Clean"], title = "Useful")
show_wordcloud(messages_ufn[messages_ufn.Usefulness == "useless"]["Summary_Clean"], title = "Useless")
from sklearn.pipeline import Pipeline
train_ufn, test_ufn = train_test_split(messages_ufn, test_size=0.2)
ufn_pipe = Pipeline([
('vect', CountVectorizer(min_df = 1, ngram_range = (1, 4))),
('tfidf', TfidfTransformer()),
('clf', LogisticRegression(C=1e5)),
])
ufn_result = ufn_pipe.fit(train_ufn["Summary_Clean"], train_ufn["Usefulness"])
prediction['Logistic_Usefulness'] = ufn_pipe.predict(test_ufn["Summary_Clean"])
print(metrics.classification_report(test_ufn["Usefulness"], prediction['Logistic_Usefulness']))
ufn_scores = [a[0] for a in ufn_pipe.predict_proba(train_ufn["Summary_Clean"])]
ufn_scores = zip(ufn_scores, train_ufn["Summary"], train_ufn["VotesHelpful"], train_ufn["VotesTotal"])
ufn_scores = sorted(ufn_scores, key=lambda t: t[0], reverse=True)
# just make this into a DataFrame since jupyter renders it nicely:
pd.DataFrame(ufn_scores)
cm = confusion_matrix(test_ufn["Usefulness"], prediction['Logistic_Usefulness'])
cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
np.set_printoptions(precision=2)
plt.figure()
plot_confusion_matrix(cm_normalized, labels=["useful", "useless"])
from sklearn.pipeline import FeatureUnion
from sklearn.base import BaseEstimator, TransformerMixin
# Useful to select only certain features in a dataset for forwarding through a pipeline
# See: http://scikit-learn.org/stable/auto_examples/hetero_feature_union.html
class ItemSelector(BaseEstimator, TransformerMixin):
def __init__(self, key):
self.key = key
def fit(self, x, y=None):
return self
def transform(self, data_dict):
return data_dict[self.key]
train_ufn2, test_ufn2 = train_test_split(messages_ufn, test_size=0.2)
ufn_pipe2 = Pipeline([
('union', FeatureUnion(
transformer_list = [
('summary', Pipeline([
('textsel', ItemSelector(key='Summary_Clean')),
('vect', CountVectorizer(min_df = 1, ngram_range = (1, 4))),
('tfidf', TfidfTransformer())])),
('score', ItemSelector(key=['Score']))
],
transformer_weights = {
'summary': 0.2,
'score': 0.8
}
)),
('model', LogisticRegression(C=1e5))
])
ufn_result2 = ufn_pipe2.fit(train_ufn2, train_ufn2["Usefulness"])
prediction['Logistic_Usefulness2'] = ufn_pipe2.predict(test_ufn2)
print(metrics.classification_report(test_ufn2["Usefulness"], prediction['Logistic_Usefulness2']))
cm = confusion_matrix(test_ufn2["Usefulness"], prediction['Logistic_Usefulness2'])
cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
np.set_printoptions(precision=2)
plt.figure()
plot_confusion_matrix(cm_normalized, labels=["useful", "useless"])
len(ufn_result2.named_steps['model'].coef_[0])
ufn_summary_pipe = next(tr[1] for tr in ufn_result2.named_steps["union"].transformer_list if tr[0]=='summary')
ufn_words = ufn_summary_pipe.named_steps['vect'].get_feature_names()
ufn_features = ufn_words + ["Score"]
ufn_feature_coefs = pd.DataFrame(
data = list(zip(ufn_features, ufn_result2.named_steps['model'].coef_[0])),
columns = ['feature', 'coef'])
ufn_feature_coefs.sort_values(by='coef')
print("And the coefficient of the Score variable: ")
ufn_feature_coefs[ufn_feature_coefs.feature == 'Score']
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Factorial
Step4: Write a function that computes the factorial of small numbers using a Python loop.
Step5: Use the %timeit magic to time both versions of this function for an argument of 50. The syntax for %timeit is
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
def np_fact(n):
    """Compute n! = n*(n-1)*...*1 using Numpy."""
if n==0:
return 1
vals = np.arange(1,n+1,1)
fact = vals.cumprod()
return fact[-1]
np_fact(3)
assert np_fact(0)==1
assert np_fact(1)==1
assert np_fact(10)==3628800
assert [np_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]
def loop_fact(n):
    """Compute n! using a Python for loop."""
fact = 1
for i in range(1,n+1):
fact=fact*i
return fact
loop_fact(5)
assert loop_fact(0)==1
assert loop_fact(1)==1
assert loop_fact(10)==3628800
assert [loop_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]
%timeit -n1 -r1 np_fact(50)
%timeit -n1 -r1 loop_fact(50)
%timeit -n1 -r1 np_fact(1000000)
%timeit -n1 -r1 loop_fact(1000000)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the data
Step2: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
Step3: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
Step4: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
Step5: Network Inputs
Step6: Generator
Step7: Discriminator
Step9: Model Loss
Step11: Optimizers
Step12: Building the model
Step13: Here is a function for displaying generated images.
Step14: And another function we can use to train our network.
Step15: Hyperparameters
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
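# Quick sanity check of the scaler: pixel values 0, 127.5 and 255
# should map to -1, 0 and 1.
print(scale(np.array([0.0, 127.5, 255.0])))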
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.shuffle = shuffle
def batches(self, batch_size):
if self.shuffle:
            idx = np.arange(len(self.train_x))
np.random.shuffle(idx)
self.train_x = self.train_x[idx]
self.train_y = self.train_y[idx]
n_batches = len(self.train_y)//batch_size
for ii in range(0, len(self.train_y), batch_size):
x = self.train_x[ii:ii+batch_size]
y = self.train_y[ii:ii+batch_size]
            # Labels are class ids, so only the images are scaled.
            yield self.scaler(x), y
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
    with tf.variable_scope('generator', reuse=reuse):
        # Minimal completion (one plausible DCGAN-style architecture):
        # First fully connected layer, reshaped to 8x8x256
        x = tf.layers.dense(z, 8*8*256)
        x = tf.reshape(x, (-1, 8, 8, 256))
        x = tf.layers.batch_normalization(x, training=training)
        x = tf.maximum(alpha * x, x)
        # 16x16x128
        x = tf.layers.conv2d_transpose(x, 128, 5, strides=2, padding='same')
        x = tf.layers.batch_normalization(x, training=training)
        x = tf.maximum(alpha * x, x)
        # Output layer, 32x32x3
        logits = tf.layers.conv2d_transpose(x, output_dim, 5, strides=2, padding='same')
        out = tf.tanh(logits)
        return out
def discriminator(x, reuse=False, alpha=0.2):
    with tf.variable_scope('discriminator', reuse=reuse):
        # Minimal completion (one plausible architecture): strided
        # convolutions down to a single logit.
        # Input layer is 32x32x3 -> 16x16x64
        x = tf.layers.conv2d(x, 64, 5, strides=2, padding='same')
        x = tf.maximum(alpha * x, x)
        # 8x8x128
        x = tf.layers.conv2d(x, 128, 5, strides=2, padding='same')
        x = tf.maximum(alpha * x, x)
        flat = tf.reshape(x, (-1, 8*8*128))
        logits = tf.layers.dense(flat, 1)
        out = tf.sigmoid(logits)
        return out, logits
def model_loss(input_real, input_z, output_dim, alpha=0.2):
    """
    Get the loss for the discriminator and generator
    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param output_dim: The number of channels in the output image
    :return: A tuple of (discriminator loss, generator loss)
    """
g_model = generator(input_z, output_dim, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """
    Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :return: A tuple of (discriminator training operation, generator training operation)
    """
# Get weights and bias to update
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
class GAN:
def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.input_real, self.input_z = model_inputs(real_size, z_size)
self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,
real_size[2], alpha=0.2)
self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, 0.5)
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box-forced')
im = ax.imshow(img)
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.uniform(-1, 1, size=(50, z_size))
samples, losses = [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in dataset.batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})
_ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z})
if steps % print_every == 0:
# At the end of each epoch, get the losses and print them out
train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})
train_loss_g = net.g_loss.eval({net.input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % show_every == 0:
gen_samples = sess.run(
generator(net.input_z, 3, reuse=True),
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 5, 10, figsize=figsize)
plt.show()
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return losses, samples
real_size = (32,32,3)
z_size = 100
learning_rate = 0.001
batch_size = 64
epochs = 1
alpha = 0.01
beta1 = 0.9
# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)
# Load the data and train the network here
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
_ = view_samples(-1, samples, 5, 10, figsize=(10,5))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make the grid and training data
Step2: Creating the Grid GP Model
Step3: In the next cell, we create a set of 400 test examples and make predictions. Note that unlike other scalable GP methods, testing is more complicated. Because our test data can be different from the training data, in general we may not be able to avoid creating a num_train x num_test (e.g., 10000 x 400) kernel matrix between the training and test data.
|
<ASSISTANT_TASK:>
Python Code:
import gpytorch
import torch
import math
grid_bounds = [(0, 1), (0, 2)]
grid_size = 25
grid = torch.zeros(grid_size, len(grid_bounds))
for i in range(len(grid_bounds)):
grid_diff = float(grid_bounds[i][1] - grid_bounds[i][0]) / (grid_size - 2)
grid[:, i] = torch.linspace(grid_bounds[i][0] - grid_diff, grid_bounds[i][1] + grid_diff, grid_size)
train_x = gpytorch.utils.grid.create_data_from_grid(grid)
train_y = torch.sin((train_x[:, 0] + train_x[:, 1]) * (2 * math.pi)) + torch.randn_like(train_x[:, 0]).mul(0.01)
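# The grid is 25 x 25, so the training inputs are the 625 grid points in 2-D:
print(train_x.shape)  # expected: torch.Size([625, 2])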
class GridGPRegressionModel(gpytorch.models.ExactGP):
def __init__(self, grid, train_x, train_y, likelihood):
super(GridGPRegressionModel, self).__init__(train_x, train_y, likelihood)
num_dims = train_x.size(-1)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.GridKernel(gpytorch.kernels.RBFKernel(), grid=grid)
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = GridGPRegressionModel(grid, train_x, train_y, likelihood)
# this is for running the notebook in our testing framework
import os
smoke_test = ('CI' in os.environ)
training_iter = 2 if smoke_test else 50
# Find optimal model hyperparameters
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.1) # Includes GaussianLikelihood parameters
# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for i in range(training_iter):
# Zero gradients from previous iteration
optimizer.zero_grad()
# Output from model
output = model(train_x)
# Calc loss and backprop gradients
loss = -mll(output, train_y)
loss.backward()
print('Iter %d/%d - Loss: %.3f lengthscale: %.3f noise: %.3f' % (
i + 1, training_iter, loss.item(),
model.covar_module.base_kernel.lengthscale.item(),
model.likelihood.noise.item()
))
optimizer.step()
model.eval()
likelihood.eval()
n = 20
test_x = torch.zeros(int(pow(n, 2)), 2)
for i in range(n):
for j in range(n):
test_x[i * n + j][0] = float(i) / (n-1)
test_x[i * n + j][1] = float(j) / (n-1)
with torch.no_grad(), gpytorch.settings.fast_pred_var():
observed_pred = likelihood(model(test_x))
import matplotlib.pyplot as plt
%matplotlib inline
pred_labels = observed_pred.mean.view(n, n)
# Calc absolute error
test_y_actual = torch.sin(((test_x[:, 0] + test_x[:, 1]) * (2 * math.pi))).view(n, n)
delta_y = torch.abs(pred_labels - test_y_actual).detach().numpy()
# Define a plotting function
def ax_plot(f, ax, y_labels, title):
if smoke_test: return # this is for running the notebook in our testing framework
im = ax.imshow(y_labels)
ax.set_title(title)
f.colorbar(im)
# Plot our predictive means
f, observed_ax = plt.subplots(1, 1, figsize=(4, 3))
ax_plot(f, observed_ax, pred_labels, 'Predicted Values (Likelihood)')
# Plot the true values
f, observed_ax2 = plt.subplots(1, 1, figsize=(4, 3))
ax_plot(f, observed_ax2, test_y_actual, 'Actual Values (Likelihood)')
# Plot the absolute errors
f, observed_ax3 = plt.subplots(1, 1, figsize=(4, 3))
ax_plot(f, observed_ax3, delta_y, 'Absolute Error Surface')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2) What's the current wind speed? How much warmer does it feel than it actually is?
Step2: 3) The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?
Step3: 4) What's the difference between the high and low temperatures for today?
Step4: 5) Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold.
Step5: 6) What's the weather looking like for the rest of today in Miami, Florida? I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say "{temperature} and cloudy" instead of just the temperature.
Step6: 7) What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 2000?
|
<ASSISTANT_TASK:>
Python Code:
#api KEY = c9d64e80aa02ca113562a075e57256d7
# Caracas is at roughly 10.4806 N, 66.9036 W (note the negative longitude below).
import requests
response = requests.get("https://api.forecast.io/forecast/c9d64e80aa02ca113562a075e57256d7/10.4806,-66.9036")
forecast = response.json()
print(forecast.keys())
print(forecast['timezone'])
print(forecast['latitude'],forecast['longitude'] )
print(forecast['currently'])
print("The current wind speed is", forecast['currently']['windSpeed'], "miles per hour")
print("Right know in Caracas the temperature feels like",forecast['currently']['apparentTemperature'])
print("Right know in Caracas the temperature is",forecast['currently']['temperature'])
print("It feels about", forecast['currently']['apparentTemperature']-forecast['currently']['temperature'], "degrees warmer than the actual temperature")
print(forecast['daily'].keys())
forecast['daily']['data']
print(type(forecast['daily']['data']))
print(forecast['daily']['data'][0])
first_daily_forecast = forecast['daily']['data']
for item in first_daily_forecast[:1]:
print("In the first daily forecast", item['moonPhase'], "of the moon is currently visible")
for item in first_daily_forecast[:1]:
print("The min temperature for the day is:", item['temperatureMin'], "degrees Fahrenheit")
print("The max temperature for the day is:", item['temperatureMax'], "degrees Fahrenheit")
print("The difference between min and max temperatures is", item['temperatureMax']-item['temperatureMin'],"degrees Fahrenheit" )
for item in first_daily_forecast:
if item['temperatureMax'] > 80:
print("The high temperature for today is", item['temperatureMax'],"degrees Farenheit" )
print("It is going to be hot.")
else:
print("The high temperature for today is", item['temperatureMax'],"degrees Farenheit" )
print("It won't be as hot as it normally is, luckily.")
import requests
response = requests.get("https://api.forecast.io/forecast/c9d64e80aa02ca113562a075e57256d7/25.7742700,-80.1936600")
miami_forecast = response.json()
print(miami_forecast.keys())
print(type(miami_forecast['hourly']))
print(miami_forecast['hourly'].keys())
print(miami_forecast['hourly']['data'][0])
miami_forecast_today = miami_forecast['hourly']['data']
print("Here is an estimation of how the weather in Miami will look within the next few hours")
for item in miami_forecast_today:
    if item['cloudCover'] > 0.5:
        print("It will be", item['temperature'], "degrees Fahrenheit and cloudy.")
    else:
        print("It will be", item['temperature'], "degrees Fahrenheit.")
import requests
response = requests.get("https://api.forecast.io/forecast/c9d64e80aa02ca113562a075e57256d7/40.7829,-73.9654,346550400")
forecast_1980 = response.json()
print(forecast_1980.keys())
print(forecast_1980['daily'].keys())
print(forecast_1980['daily']['data'])
christmas_temp_1980 = forecast_1980['daily']['data']
for item in christmas_temp_1980:
print("On Christmas Day, 1980, temperatures ranged between", item['temperatureMax'], "and", item['temperatureMin'], "degrees Farenheit at Central Park")
import requests
response = requests.get("https://api.forecast.io/forecast/c9d64e80aa02ca113562a075e57256d7/40.7829,-73.9654,662083200")
forecast_1990 = response.json()
print(forecast_1990.keys())
christmas_temp_1990 = forecast_1990['daily']['data']
for item in christmas_temp_1990:
print("On Christmas Day, 1990, temperatures ranged between", item['temperatureMax'], "and", item['temperatureMin'], "degrees Farenheit at Central Park")
import requests
response = requests.get("https://api.forecast.io/forecast/c9d64e80aa02ca113562a075e57256d7/40.7829,-73.9654,977702400")
forecast_2000 = response.json()
print(forecast_2000.keys())
christmas_temp_2000 = forecast_2000['daily']['data']
for item in christmas_temp_2000:
print("On Christmas Day, 2000, temperatures ranged between", item['temperatureMax'], "and", item['temperatureMin'], "degrees Farenheit at Central Park")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Adding one more sub plot
Step2: another way to add subplots
Step3: One more way to add subplots
Step4: Adding some samples plots
Step5: Setting the Plot Range
|
<ASSISTANT_TASK:>
Python Code:
# generating some data points
X = np.linspace(-np.pi, np.pi, 256, endpoint=True)
C, S = np.cos(X), np.sin(X)
# creating a figure
fig = plt.figure(figsize=(4,3), dpi=120)
#plotting
plt.plot(X, C, linestyle='--')
plt.plot(X, S)
# plotting
plt.show()
# creating a figure
fig = plt.figure(figsize=(4,3), dpi=120)
#plotting
plt.plot(X, C, linestyle='--')
plt.plot(X, S, linewidth=2)
# adding labels
plt.xlabel("X data")
plt.ylabel("Y data")
plt.title("Plotting")
# adding margins, grid and autoscale
plt.margins(0.15)
plt.autoscale(True)
plt.grid(True)
# plotting
plt.show()
# creating a figure
fig = plt.figure(figsize=(12,6), dpi=120)
# adding subplot
plt.subplot(1,2,1) # 1 row, 2 columns, 1st plot
plt.plot(X, C, linestyle='--')
plt.xlabel("X data")
plt.ylabel("Y data")
plt.title("Cos Plotting")
plt.grid(True)
plt.margins(0.15)
plt.legend(["Cos"], loc="upper left")
plt.subplot(1,2,2) # 1 row, 2 columns, 2nd plot
plt.plot(X, S, linewidth=2)
plt.xlabel("X data")
plt.ylabel("Y data")
plt.title("Sin Plotting")
plt.legend(["Sin"], loc="upper left")
plt.margins(0.15)
plt.autoscale(True)
plt.grid(True)
# plotting
plt.show()
# creating a figure
fig = plt.figure(figsize=(12,6), dpi=120)
# adding subplot
ax1 = fig.add_subplot(1,2,1) # 1 row, 2 columns, 1st plot
ax1.plot(X, C, linestyle='--')
ax1.grid(True)
ax1.margins(0.15)
ax2 = fig.add_subplot(1,2,2) # 1 row, 2 columns, 2nd plot
ax2.plot(X, S, linewidth=2)
ax2.margins(0.2)
ax2.autoscale(True)
ax2.grid(True)
# plotting
plt.xlabel("X data")
plt.ylabel("Y data")
plt.title("Sin Plotting")
plt.legend(["Sin"], loc="upper left")
plt.show()
# creating a figure
f, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True,
                             figsize=(12,6), dpi=120)
ax1.plot(X,S, color='green', linestyle='--', linewidth='2')
ax1.grid(True)
ax1.margins(0.2)
ax2.plot(X,C, color='blue', linestyle='-', linewidth='2')
ax2.grid(True)
ax2.margins(0.2)
# creating a figure
f, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True,
                             figsize=(12,6), dpi=120)
ax1.plot(X,S, color='green', linestyle='--', linewidth='2', label="Sin wave")
ax1.grid(True)
ax1.margins(0.2)
ax2.plot(X,C, color='blue', linestyle='-', linewidth='2', label="Cos wave")
ax2.grid(True)
ax2.margins(0.2)
ax1.legend()
ax2.legend()
# creating a figure
f, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True,
                             figsize=(12,6), dpi=120)
ax1.plot(X,S, color='green', linestyle='--', linewidth='2', label="Sin wave")
ax1.grid(True)
ax1.margins(0.2)
ax2.plot(X,C, color='blue', linestyle='-', linewidth='2', label="Cos wave")
ax2.grid(True)
ax2.margins(0.2)
ax1.legend()
ax1.set_xlabel("X data")
ax1.set_ylabel("Sin data")
ax1.set_title("Sin Chart")
ax2.legend()
ax2.set_xlabel("X data")
ax2.set_ylabel("Sin data")
ax2.set_title("Cos Chart")
fig, axes = plt.subplots(nrows=3, ncols=4, figsize=(16,12))
plt.show()
fig = plt.figure(figsize=(8,6))
ax1 = plt.subplot(1,2,1)
plt.subplot(3,2,2)
plt.subplot(3,2,4)
plt.subplot(3,2,6)
ax1.plot([1,2,3], [0.5,4,1.5])
plt.show()
X = [ (2,1,1), (2,3,4), (2,3,5), (2,3,6) ]
for nrows, ncols, plot_number in X:
plt.subplot(nrows, ncols, plot_number)
# removing all ticks
X = [ (2,1,1), (2,3,4), (2,3,5), (2,3,6) ]
for nrows, ncols, plot_number in X:
plt.subplot(nrows, ncols, plot_number)
plt.xticks([])
plt.yticks([])
# removing all ticks
X = [ (1,2,1), (3,2,2), (3,2,4), (3,2,6) ]
for nrows, ncols, plot_number in X:
plt.subplot(nrows, ncols, plot_number)
plt.xticks([])
plt.yticks([])
# removing all ticks
X = [ (4,2,1), (4,2,3), (4,2,5), (4,1,4), (4,2,2), (4,2,(4,6)) ]
for nrows, ncols, plot_number in X:
plt.subplot(nrows, ncols, plot_number)
plt.xticks([])
plt.yticks([])
fig, axes = plt.subplots(1, 3, figsize=(10, 4))
x = np.arange(0, 5, 0.25)
axes[0].plot(x, x**2, x, x**3)
axes[0].set_title("default axes ranges")
axes[1].plot(x, x**2, x, x**3)
axes[1].axis('tight')
axes[1].set_title("tight axes")
axes[2].plot(x, x**2, x, x**3)
axes[2].set_ylim([0, 60])
axes[2].set_xlim([2, 5])
axes[2].set_title("custom axes range");
fig, ax1 = plt.subplots()
x = np.arange(1,7,0.1)
ax1.plot(x, 2 * np.pi * x, lw=2, color="blue")
ax1.set_ylabel(r"Circumference $(cm)$", fontsize=16, color="blue")
for label in ax1.get_yticklabels():
label.set_color("blue")
ax2 = ax1.twinx()
ax2.plot(x, np.pi * x ** 2, lw=2, color="darkgreen")
ax2.set_ylabel(r"area $(cm^2)$", fontsize=16, color="darkgreen")
for label in ax2.get_yticklabels():
label.set_color("darkgreen")
#plt.grid(color='b', alpha=1.5, linestyle='dashed', linewidth=0.5)
fig.savefig("filename.png", dpi=200)
def f(t):
return np.exp(-t) * np.cos(2*np.pi*t)
def fp(t):
return -2*np.pi * np.exp(-t) * np.sin(2*np.pi*t) - np.e**(-t)*np.cos(2*np.pi*t)
def g(t):
    return np.sin(t) * np.cos(1/(t+0.1))
python_course_green = "#476042"
fig = plt.figure(figsize=(6, 4))
t = np.arange(-5.0, 1.0, 0.1)
sub1 = fig.add_subplot(221) # instead of plt.subplot(2, 2, 1)
sub1.set_title('The function f') # non OOP: plt.title('The function f')
sub1.plot(t, f(t))
sub2 = fig.add_subplot(222, facecolor="lightgrey")
sub2.set_title('fp, the derivation of f')
sub2.plot(t, fp(t))
t = np.arange(-3.0, 2.0, 0.02)
sub3 = fig.add_subplot(223)
sub3.set_title('The function g')
sub3.plot(t, g(t))
t = np.arange(-0.2, 0.2, 0.001)
sub4 = fig.add_subplot(224, facecolor="lightgrey")
sub4.set_title('A closer look at g')
sub4.set_xticks([-0.2, -0.1, 0, 0.1, 0.2])
sub4.set_yticks([-0.15, -0.1, 0, 0.1, 0.15])
sub4.plot(t, g(t))
plt.plot(t, g(t))
plt.tight_layout()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Viewing and manipulating FITS images
Step2: Opening FITS files and loading the image data
Step3: Generally the image information is located in the <code>PRIMARY</code> block. The blocks are numbered and can be accessed by indexing <code>hdu_list</code>.
Step4: Your data is now stored as a 2-D numpy array. Want to know the dimensions of the image? Just look at the shape of the array.
Step5: At this point, we can just close the FITS file. We have stored everything we wanted to a variable.
Step6: SHORTCUT
Step7: Viewing the image data and getting basic statistics
Step8: Let's get some basic statistics about our image
Step9: Plotting a histogram
Step10: Displaying the image with a logarithmic scale
Step11: Basic image math
Step12: Now I'll stack the images by summing my concatenated list.
Step13: I'm going to show the image, but I want to decide on the best stretch. To do so I'll plot a histogram of the data.
Step14: I'll use the keywords vmin and vmax to set limits on the color scaling for imshow.
Step15: Writing image data to a FITS file
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from astropy.utils.data import download_file
from astropy.io import fits
image_file = download_file('http://data.astropy.org/tutorials/FITS-images/HorseHead.fits', cache=True )
hdu_list = fits.open(image_file)
hdu_list.info()
image_data = hdu_list[0].data
print(type(image_data))
print(image_data.shape)
hdu_list.close()
image_data = fits.getdata(image_file)
print(type(image_data))
print(image_data.shape)
plt.imshow(image_data, cmap='gray')
plt.colorbar()
# To see more color maps
# http://wiki.scipy.org/Cookbook/Matplotlib/Show_colormaps
print('Min:', np.min(image_data))
print('Max:', np.max(image_data))
print('Mean:', np.mean(image_data))
print('Stdev:', np.std(image_data))
print(type(image_data.flat))
NBINS = 1000
histogram = plt.hist(image_data.flat, NBINS)
from matplotlib.colors import LogNorm
plt.imshow(image_data, cmap='gray', norm=LogNorm())
# I chose the tick marks based on the histogram above
cbar = plt.colorbar(ticks=[5.e3,1.e4,2.e4])
cbar.ax.set_yticklabels(['5,000','10,000','20,000'])
image_list = [ download_file('http://data.astropy.org/tutorials/FITS-images/M13_blue_000'+n+'.fits', cache=True ) \
for n in ['1','2','3','4','5'] ]
# The long way
image_concat = []
for image in image_list:
image_concat.append(fits.getdata(image))
# The short way
#image_concat = [ fits.getdata(image) for image in image_list ]
# The long way
final_image = np.zeros(shape=image_concat[0].shape)
for image in image_concat:
final_image += image
# The short way
#final_image = np.sum(image_concat, axis=0)
image_hist = plt.hist(final_image.flat, 1000)
plt.imshow(final_image, cmap='gray', vmin=2.e3, vmax=3.e3)
plt.colorbar()
outfile = 'stacked_M13_blue.fits'
hdu = fits.PrimaryHDU(final_image)
hdu.writeto(outfile, overwrite=True)
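# Optional check: read the stacked image back and confirm its shape.
stacked = fits.getdata(outfile)
print(stacked.shape)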
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The eigenvalue problem
Step2: Mass Displacements and Inertial Forces
Step3: Initialization
|
<ASSISTANT_TASK:>
Python Code:
l = [1, 1, 1, 2, 1]
m = [[p( 2/3, 0), p(-1/3, 1), p(1, 0), p(2/3, 0), p(4/3, 0)],
[p(-2/3, 0), p(-2/3, 0), p(0, 0), p(1/3, 0), p(2/3, 0)]]
F = array([[vw(emme, chi, l) for emme in m] for chi in m])
K = inv(F)
M = eye(2)
dl(dmat(r'\boldsymbol{F}=\frac{1}{27}\frac{L^3}{EJ}', F*27, r','))
dl(dmat(r'\boldsymbol{K}=\frac{1}{53}\frac{EJ}{L^3}', K*53, r','))
dl(dmat(r'\boldsymbol{M}=m', M,'.', fmt='%d'))
wn2, Psi = eigh(K, M) ; Psi[:,0] *= -1
Lambda2 = diag(wn2)
dl(dmat(r'\boldsymbol{\Lambda^2}=', Lambda2, r'.'))
dl(dmat(r'\boldsymbol{\Psi}=', Psi, r'.'))
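# Sanity check (illustrative): eigh with a mass matrix returns
# mass-orthonormal eigenvectors, so Psi.T @ M @ Psi is the identity.
print(Psi.T@M@Psi)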
r = array((2/3, 1/3))
a = -Psi.T@r
print('The modal equations of motion are')
for i, (ai, l2i) in enumerate(zip(a, wn2), 1):
dl(r'$$\ddot q_{%d} %+.6f\,\omega_0^2\,q_{%d} = %+.6f\,\ddot u_\mathcal{B}$$' %
(i, l2i, i, ai))
from numpy import array, diag, eye, poly1d
from scipy.linalg import eigh, inv
from IPython.display import Latex, HTML
display(HTML(open('01.css').read()))
def p(*l): return poly1d(l)
def vw(M, Χ, L):
return sum(p(l)-p(0) for (m, χ, l) in zip(M, Χ, L) for p in ((m*χ).integ(),))
def dmat(pre, mat, post, mattype='b', fmt='%+.6g'):
s = r'\begin{align}' + pre + r'\begin{%smatrix}'%mattype
s += r'\\'.join('&'.join(fmt%val for val in row) for row in mat)
s += r'\end{%smatrix}'%mattype + post + r'\end{align}'
return s
def dl(ls):
display(Latex(ls))
return None
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Data
Step4: Plotting Functions
Step5: Case Study
Step6: We can forecast the future long-term returns using the fitted "return curve" from the scatter-plot above. Towards the end of 2017, the P/Sales ratio was almost 2.2 for the S&P 500, which was about the previous high point of the "Dot-Com" bubble around year 2000.
Step7: So if you purchased the S&P 500 in December 2017 at this P/Sales ratio and will keep the investment for more than 7 years, while reinvesting all dividends during those years (all taxes are ignored), then the formula forecasts an annualized return of about 1.35%
Step8: Towards the end of 2017 the P/Sales ratio was about 4.9 which is close to the all-time historical highs experienced during the stock-market bubble around year 2000.
Step9: Using the formula for the fitted "return curve" from the scatter-plot above, we get this forecasted long-term return
Step10: When we plot the historical P/Sales ratio, we see that at the end of 2017 it was around 3.5 which was near its all-time high experienced during the bubble around year 2000.
Step11: Using the fitted reciprocal curve from the scatter-plot above, we get a forecasted return of about 6.1% per year, when dividends are reinvested without taxes
Step12: Towards the end of 2017 the P/Sales ratio was about 1.8 which was actually very close to the historical average.
Step13: Using the fitted "return curve" from the scatter-plot above with the P/Sales ratio of 1.8 we get the forecasted return
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# Imports from Python packages.
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
import pandas as pd
import numpy as np
import os
# Imports from FinanceOps.
from curve_fit import CurveFitReciprocal
from data_keys import *
from data import load_index_data, load_stock_data
from returns import prepare_mean_ann_returns
# Define the ticker-names for the stocks we consider.
ticker_SP500 = "S&P 500"
ticker_JNJ = "JNJ"
ticker_K = "K"
ticker_PG = "PG"
ticker_WMT = "WMT"
# Load the financial data for the stocks.
df_SP500 = load_index_data(ticker=ticker_SP500)
df_JNJ = load_stock_data(ticker=ticker_JNJ)
df_K = load_stock_data(ticker=ticker_K)
df_PG = load_stock_data(ticker=ticker_PG)
df_WMT = load_stock_data(ticker=ticker_WMT)
def plot_psales(df, ticker, start_date=None):
    """
    Plot the P/Sales ratio.

    :param df: Pandas DataFrame with PSALES.
    :param ticker: Ticker-name for the stock or index.
    :param start_date: Start-date for the plot.
    :return: Nothing.
    """
psales.plot(title=ticker + " - P/Sales", grid=True)
def plot_ann_returns(ticker, df, key=PSALES,
min_years=7, max_years=15,
use_colors=True):
Create a single scatter-plot with P/Sales or P/Book
vs. Mean Annualized Returns for e.g. 7-15 years.
:param ticker: Ticker-name for the stock or index.
:param df: Pandas DataFrame containing key and TOTAL_RETURN.
:param key: Name of data-column to use e.g. PSALES or PBOOK.
:param min_years: Min number of years for return periods.
:param max_years: Max number of years for return periods.
:param use_colors: Boolean whether to use colors in plot.
:return: Nothing.
# Prepare the data.
# x is the P/Sales or P/Book and y is the Mean Ann. Returns.
x, y = prepare_mean_ann_returns(df=df, key=key,
min_years=min_years,
max_years=max_years)
# Create a single plot.
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(211)
# Scatter-plot.
if use_colors:
# Give each dot in the scatter-plot a shade of blue
# according to the date of the data-point.
ax.scatter(x, y,
c=list(range(len(x))), cmap='Blues',
alpha=1.0, marker='o')
else:
# Use the same color for all dots.
ax.scatter(x, y, marker='o')
# First part of the title.
title1 = "[{0}] {1} vs. {2}-{3} Years Mean Ann. Return"
title1 = title1.format(ticker, key, min_years, max_years)
# X-values for plotting fitted curves.
x_min = np.min(x)
x_max = np.max(x)
    x_range = np.arange(x_min, x_max, (x_max - x_min)/1000)
# Plot reciprocal curve-fit.
curve_fit_reciprocal = CurveFitReciprocal(x=x, y=y)
y_pred = curve_fit_reciprocal.predict(x=x_range)
ax.plot(x_range, y_pred, color='red')
# Title with these curve-fit parameters.
title2 = "Mean Ann. Return = {0:.1%} / " + key + " + {1:.1%}"
title2 = title2.format(*curve_fit_reciprocal.params)
# Combine and set the plot-title.
title = "\n".join([title1, title2])
ax.set_title(title)
# Set axis labels.
ax.set_xlabel(key)
ax.set_ylabel("Mean Ann. Return")
# Convert y-ticks to percentages.
# We use a custom FuncFormatter because PercentFormatter
# is inconsistent with string-formatters used elsewhere.
formatter = FuncFormatter(lambda y, _: '{:.0%}'.format(y))
ax.yaxis.set_major_formatter(formatter)
# Show grid.
ax.grid()
# Show the plot.
plt.show()
plot_ann_returns(ticker=ticker_SP500, df=df_SP500, key=PSALES,
min_years=7, max_years=15, use_colors=True)
df_SP500[PSALES].dropna().tail(1)
plot_psales(df=df_SP500, ticker=ticker_SP500)
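# Sketch of the forecast quoted in the text (assumes CurveFitReciprocal.predict
# accepts an array and returns an array): refit the reciprocal curve and
# evaluate it at the late-2017 P/Sales ratio of about 2.2.
x, y = prepare_mean_ann_returns(df=df_SP500, key=PSALES, min_years=7, max_years=15)
fit = CurveFitReciprocal(x=x, y=y)
print("Forecasted mean ann. return: {0:.2%}".format(fit.predict(x=np.array([2.2]))[0]))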
plot_ann_returns(ticker=ticker_JNJ, df=df_JNJ, key=PSALES,
min_years=7, max_years=15, use_colors=True)
df_JNJ[PSALES].dropna().tail(1)
plot_psales(df=df_JNJ, ticker=ticker_JNJ)
plot_ann_returns(ticker=ticker_PG, df=df_PG, key=PSALES,
min_years=7, max_years=15)
plot_psales(df=df_PG, ticker=ticker_PG)
plot_ann_returns(ticker=ticker_K, df=df_K, key=PSALES,
min_years=7, max_years=15, use_colors=True)
df_K[PSALES].dropna().mean()
plot_psales(df=df_K, ticker=ticker_K)
plot_ann_returns(ticker=ticker_WMT, df=df_WMT, key=PSALES,
min_years=7, max_years=15, use_colors=True)
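# Completing the pattern used for the other tickers: the latest P/Sales
# value and the historical P/Sales plot for Wal-Mart.
print(df_WMT[PSALES].dropna().tail(1))
plot_psales(df=df_WMT, ticker=ticker_WMT)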
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-1', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create the client
Step2: Request an API key
Step3: The return value is the Meaning Representation (MR) format of the AMR graph for each sentence:
Step4: Note that "男孩" (the boy) above has 2 anchors, corresponding to "男孩" and "他" (him). In other words, the MR format already contains the result of coreference resolution.
Step5: Multilingual support
Step6: Users can analyze English Abstract Meaning Representation by specifying the language parameter:
|
<ASSISTANT_TASK:>
Python Code:
!pip install hanlp_restful -U
from hanlp_restful import HanLPClient
HanLP = HanLPClient('https://www.hanlp.com/api', auth=None, language='zh')  # leave auth=None for anonymous access; language: 'zh' = Chinese, 'mul' = multilingual
graphs = HanLP.abstract_meaning_representation('男孩希望女孩相信他。')
len(graphs)
graph = graphs[0]
graph
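# A minimal sketch of walking the returned graph programmatically. This assumes
# the response follows the standard MRP (Meaning Representation) layout with
# 'nodes' and 'edges' keys; treat the exact schema as an assumption.
for node in graph.get('nodes', []):
    print(node.get('id'), node.get('label'), node.get('anchors'))
print('edges:', len(graph.get('edges', [])))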
from IPython.display import SVG, display
def show_svg(g):
display(SVG(data=g['svg']))
graph = HanLP.abstract_meaning_representation('男孩希望女孩相信他。', visualization='svg')[0]
show_svg(graph)
graph = HanLP.abstract_meaning_representation('The boy wants the girl to believe him.',
language='en', visualization='svg')[0]
show_svg(graph)
HanLP.abstract_meaning_representation(tokens=[['The', 'boy', 'wants', 'the', 'girl', 'to', 'believe', 'him', '.']],
language='en')[0]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Nephelometer Representation
Step2: Calibration
Step3: We can explore the calibration factors that were just determined - the units are a bit arbitrary, since we don't consider the intensity/power of the laser as we assume it is constant. Thus, these units are something like $cm^2/(\mu g/m^3)$
Step4: Similarly, we get ratios for $PM_{2.5}$ and $PM_{10}$
Step5: Evaluating a Nephelometer for New Aerosol Distributions
Step6: What if we went ahead and tried to evaluate on a totally unseen distribution? Let's go ahead and evaluate on an urban distribution
Step7: First, let's determine the actual $PM_1$, $PM_{2.5}$, and $PM_{10}$ loadings for this distribution
Step8: Next, let's evaluate the Nephelometer
|
<ASSISTANT_TASK:>
Python Code:
# Make imports
import opcsim
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as mticks
import seaborn as sns
%matplotlib inline
# turn off warnings temporarily
import warnings
warnings.simplefilter('ignore')
# Let's set some default seaborn settings
sns.set(context='notebook', style='ticks', palette='dark', font_scale=1.75,
rc={'figure.figsize': (12,6), **opcsim.plots.rc_log})
# init a nephelometer with a 658 nm laser, gathering light from between 7-173 degrees
neph = opcsim.Nephelometer(wl=0.658, theta=(7., 173))
neph
d1 = opcsim.AerosolDistribution("AmmSulf")
d1.add_mode(n=1e4, gm=125e-3, gsd=1.5, refr=complex(1.521, 0), kappa=0.53, rho=1.77)
# calibrate the nephelometer at 0% RH
neph.calibrate(d1, rh=0.)
neph.pm1_ratio
neph.pm25_ratio
neph.pm10_ratio
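# A quick, hypothetical illustration of how these ratios are used: dividing a
# total scattered-light signal by a calibration ratio yields a PM estimate.
# The signal value below is made up purely for demonstration.
hypothetical_signal = 100 * neph.pm1_ratio  # a signal corresponding to ~100 ug/m3 of PM1
print("PM1 estimate: {:.1f} ug/m3".format(hypothetical_signal / neph.pm1_ratio))
print("PM2.5 estimate: {:.1f} ug/m3".format(hypothetical_signal / neph.pm25_ratio))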
# evaluate the same distribution we used to calibrate
neph.evaluate(d1, rh=0.)
# evaluate the same distribution we used to calibrate, but at a higher RH
neph.evaluate(d1, rh=85.0)
d2 = opcsim.load_distribution("urban")
d2
print ("PM1 = {:.2f} ug/m3".format(d2.cdf(dmin=0., dmax=1., weight='mass', rho=1.65)))
print ("PM2.5 = {:.2f} ug/m3".format(d2.cdf(dmin=0., dmax=2.5, weight='mass', rho=1.65)))
print ("PM10 = {:.2f} ug/m3".format(d2.cdf(dmin=0., dmax=10., weight='mass', rho=1.65)))
neph.evaluate(d2, rh=0.)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step 1
Step1: Step 2
Step2: Step 3
Step3: Step 4
Step4: Build the model
Step 1
Step5: Step 2
Step6: Step 3
Step7: Step 4
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow.keras.layers as layers
from tensorflow_io.bigquery import BigQueryClient
import functools
GCP_PROJECT_ID = 'qwiklabs-gcp-00-b1e00ce17168' # Replace with your Project-ID
DATASET_GCP_PROJECT_ID = GCP_PROJECT_ID # A copy of the data is saved in the user project
DATASET_ID = 'tfe_codelab'
TRAIN_TABLE_ID = 'ulb_fraud_detection_train'
VAL_TABLE_ID = 'ulb_fraud_detection_val'
TEST_TABLE_ID = 'ulb_fraud_detection_test'
FEATURES = ['Time','V1','V2','V3','V4','V5','V6','V7','V8','V9','V10','V11','V12','V13','V14','V15','V16','V17','V18','V19','V20','V21','V22','V23','V24','V25','V26','V27','V28','Amount']
LABEL='Class'
DTYPES=[tf.float64] * len(FEATURES) + [tf.int64]
client = BigQueryClient()
def read_session(TABLE_ID):
return client.read_session(
"projects/" + GCP_PROJECT_ID, DATASET_GCP_PROJECT_ID, TABLE_ID, DATASET_ID,
FEATURES + [LABEL], DTYPES, requested_streams=2
)
def extract_labels(input_dict):
features = dict(input_dict)
label = tf.cast(features.pop(LABEL), tf.float64)
return (features, label)
BATCH_SIZE = 32
# TODO 1
# Create the datasets
raw_train_data = read_session(TRAIN_TABLE_ID).parallel_read_rows().map(extract_labels).batch(BATCH_SIZE)
raw_val_data = read_session(VAL_TABLE_ID).parallel_read_rows().map(extract_labels).batch(BATCH_SIZE)
raw_test_data = read_session(TEST_TABLE_ID).parallel_read_rows().map(extract_labels).batch(BATCH_SIZE)
next(iter(raw_train_data)) # Print first batch
# TODO 2
MEANS = [94816.7387536405, 0.0011219465482001268, -0.0021445914636999603, -0.002317402958335562,
-0.002525792169927835, -0.002136576923287782, -3.7586818983702984, 8.135919975738768E-4,
-0.0015535579268265718, 0.001436137140461279, -0.0012193712736681508, -4.5364970422902533E-4,
-4.6175444671576083E-4, 9.92177789685366E-4, 0.002366229151475428, 6.710217226762278E-4,
0.0010325807119864225, 2.557260815835395E-4, -2.0804190062322664E-4, -5.057391100818653E-4,
-3.452114767842334E-6, 1.0145936326270006E-4, 3.839214074518535E-4, 2.2061197469126577E-4,
-1.5601580596677608E-4, -8.235017846415852E-4, -7.298316615408554E-4, -6.898459943652376E-5,
4.724125688297753E-5, 88.73235686453587]
def norm_data(mean, data):
data = tf.cast(data, tf.float32) * 1/(2*mean)
return tf.reshape(data, [-1, 1])
numeric_columns = []
for i, feature in enumerate(FEATURES):
num_col = tf.feature_column.numeric_column(feature, normalizer_fn=functools.partial(norm_data, MEANS[i]))
numeric_columns.append(num_col)
numeric_columns
# TODO 3
model = keras.Sequential([
tf.keras.layers.DenseFeatures(numeric_columns),
layers.Dense(64, activation='relu'),
layers.Dense(64, activation='relu'),
layers.Dense(1, activation='sigmoid')
])
# Compile the model
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy', tf.keras.metrics.AUC(curve='PR')])
# TODO 4
CLASS_WEIGHT = {
0: 1,
1: 100
}
EPOCHS = 3
train_data = raw_train_data.shuffle(10000)
val_data = raw_val_data
test_data = raw_test_data
# Train the model using model.fit()
model.fit(train_data, validation_data=val_data, class_weight=CLASS_WEIGHT, epochs=EPOCHS)
# TODO 5
# Evaluate the model
model.evaluate(test_data)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup
Step2: The network snapshot that we initialized above is illustrated below. You can view or download the devices' configuration files here.
Step3: To pretty-print the traces in HTML use the display_html function. We will show you how to extract more detailed information below.
Step4: The Flow column describes the packet being traced
Step5: The Trace column contains the detailed information provided by Batfish about the paths through the network for each flow. Let's look in detail at the first path
Step6: This flow starts at as3core1 and crosses from AS3 into AS2 via the border routers as3border1 and as2border2; on as2border2, the flow is permitted by the inbound ACL OUTSIDE_TO_INSIDE. Once inside AS2, the flow is forwarded through AS2's core and distribution routers to the department router. The flow does reach host1, but is blocked by that server's iptables filter
Step7: To programmatically get the detailed information about the final hop of the first trace in pure Python form
Step8: Note that compared to running traceroute on a router, Batfish is able to provide much more detail about the trace
Step9: As you can see, this query found some DNS flow entering the network at each as2... node destined for host1 that would be delivered. This guarantees that DNS is at least partially available (some authorized nodes can reach host1).
Step10: The fact that Batfish returned 0 flows guarantees that host1 is reachable via DNS from everywhere within AS2. This guarantees that DNS is available to all authorized nodes.
Step11: Success! Since Batfish returned 0 flows, we are guaranteed that no unauthorized UDP flows will reach host1.
Step12: We found that the DNS server is not secure
|
<ASSISTANT_TASK:>
Python Code:
# Import packages
%run startup.py
bf = Session(host="localhost")
NETWORK_NAME = "example_network"
SNAPSHOT_NAME = "example_snapshot"
SNAPSHOT_PATH = "networks/example"
bf.set_network(NETWORK_NAME)
bf.init_snapshot(SNAPSHOT_PATH, name=SNAPSHOT_NAME, overwrite=True)
# start the traceroute from the Loopback0 interface of as3core1 to host1
headers = HeaderConstraints(dstIps='host1')
tracert = bf.q.traceroute(startLocation="as3core1[Loopback0]", headers=headers).answer().frame()
show(tracert)
tracert['Flow'][0]
tracert['Traces'][0][0] # Get the trace for the first path of the first flow
tracert['TraceCount'][0]
last_hop = tracert['Traces'][0][0].hops[-1]
repr(last_hop)
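# A short sketch for walking the entire first trace hop by hop; it assumes each
# pybatfish Hop exposes `node` (device name) and `steps`, as seen in the repr above.
for i, hop in enumerate(tracert['Traces'][0][0].hops):
    print("hop {}: {} ({} steps)".format(i + 1, hop.node, len(hop.steps)))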
path = PathConstraints(startLocation="/as2/")
headers = HeaderConstraints(srcIps="0.0.0.0/0", dstIps="host1", applications="DNS")
reach = bf.q.reachability(pathConstraints=path, headers=headers, actions="success").answer().frame()
show(reach)
path = PathConstraints(startLocation="/as2/")
headers = HeaderConstraints(dstIps="host1", applications="DNS")
reach = bf.q.reachability(pathConstraints=path, headers=headers, actions="failure").answer().frame()
show(reach)
path = PathConstraints(startLocation="/as2/")
headers = HeaderConstraints(srcIps="0.0.0.0/0", dstIps="host1", ipProtocols="UDP", dstPorts="!53")
reach = bf.q.reachability(pathConstraints=path, headers=headers, actions="accepted").answer().frame()
show(reach)
path = PathConstraints(startLocation="@enter(/as2border/[GigabitEthernet0/0])")
headers = HeaderConstraints(srcIps="0.0.0.0/0", dstIps="host1", applications="DNS")
reach = bf.q.reachability(pathConstraints=path, headers=headers, actions="success").answer().frame()
show(reach)
multipath = bf.q.multipathConsistency().answer().frame()
first_result = multipath.head(1) # this check returns many results, just show 1
show(first_result)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the next step, we create a new dataframe with people as the index and each voted Bill / Business title as a column.
Step2: We observe that the average distance between all parliament members is 0.53, and that the average distance between people belonging to the same partite is always smaller than 0.43, which indicates, as expected, that people belonging to the same partite vote in a similar way.
Step3: We observe that the partite from which the socialist partite is furthest in terms of voting decisions is the "Groupe des Paysans, Artisans et Bourgeois" partite, and the one to which it is closest is the "Groupe écologiste" partite.
Step4: We now want to detect people who vote very differently from their own groups.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import glob
import os
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.ensemble
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split, cross_val_predict, learning_curve
import sklearn.metrics
%matplotlib inline
%load_ext autoreload
%autoreload 2
# There's a lot of columns in the DF.
# Therefore, we add this option so that we can see more columns
pd.options.display.max_columns = 100
path = '../../datas/nlp_results/'
voting_df = pd.read_csv(path+'voting_with_topics.csv')
print('Entries in the DataFrame',voting_df.shape)
#Dropping the useless column
voting_df = voting_df.drop('Unnamed: 0',1)
#Putting numerical values into the columns that should have numerical values
#print(voting_df.columns.values)
num_cols = ['Decision', ' armée', ' asile / immigration', ' assurances', ' budget', ' dunno', ' entreprise/ finance',
' environnement', ' famille / enfants', ' imposition', ' politique internationale', ' retraite ']
voting_df[num_cols] = voting_df[num_cols].apply(pd.to_numeric)
#Inserting the full name at the second position
voting_df.insert(2,'Name', voting_df['FirstName'] + ' ' + voting_df['LastName'])
voting_df.head(3)
voting_df_copy = voting_df.drop_duplicates(['text', 'Name'], keep = 'last')
people = voting_df_copy['Name'].unique()
texts = voting_df_copy['text'].unique()
print("{n} people in the parliament from 2009 to 2015".format(n=people.shape[0]))
voting_df_copy = voting_df_copy.set_index(['Name', 'text'])
voting_df_copy.head()
def processVote(vote):
if vote == 1 or vote == 2:
return vote-1
return 0.5
profile_df = pd.DataFrame(data = -1, index = people, columns = texts)
#profile_df.loc[people[0], voting_df_copy.loc[people[0]].index] = voting_df_copy.loc[people[0]].Decision
for p in people:
profile_df.loc[p, voting_df_copy.loc[p].index] = [processVote(x) for x in voting_df_copy.loc[p].Decision]
profile_df.head()
profile_df.to_csv("profileMatrix.csv")
print(profile_df.loc['Brigitta M. Gadient'].values)
profile_df.loc['Duri Campell'].values
profile_df.loc[people[0]].values
def distance(p1, p2):
d = 0.0
nCommonVotes = 0
for i in range(len(p1)):
if not (p1[i] == -1 or p2[i] == -1):
nCommonVotes += 1
d += (p1[i] - p2[i]) * (p1[i] - p2[i])
if nCommonVotes == 0:
return 100
return np.sqrt(d / nCommonVotes)
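# Quick sanity check of the distance function on two parliament members,
# using only objects defined above (people, profile_df).
d_example = distance(profile_df.loc[people[0]].values, profile_df.loc[people[1]].values)
print("Distance between {p1} and {p2}: {d:.3f}".format(p1=people[0], p2=people[1], d=d_example))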
n = people.shape[0]
distanceMatrix = np.zeros((n,n))
for i in range(n):
if i % 10 == 0:
print("Compute distances from person " + str(i))
for j in range(n):
distanceMatrix[i][j] = distance(profile_df.loc[people[i]].values,
profile_df.loc[people[j]].values)
import networkx as nx
G = nx.from_numpy_matrix(distanceMatrix)
nx.draw(G)
import pylab as plt
plt.show()
print("Mean distance : {d}".format(d = np.mean(distanceMatrix)))
import pandas as pd
df = pd.DataFrame(distanceMatrix, index = people, columns = people)
df.to_csv("distanceMatrix.csv")
df.head()
#voting_df['ParlGroupName']
groupId = {"Groupe conservateur-catholique" : 1, "Groupe socialiste" : 2, "Groupe des Paysans, Artisans et Bourgeois" : 3,
"Groupe radical-démocratique" : 4, "Groupe écologiste" : 5, "Groupe BD" : 6, "Groupe vert'libéral" : 7, "Non inscrit" : 8}
GroupPeople_df = voting_df.drop_duplicates(['Name'], keep = 'last')
GroupPeople_df = GroupPeople_df.set_index('Name')
GroupPeople_df['ParlGroupName'] = [groupId[x] for x in GroupPeople_df['ParlGroupName'].values]
GroupPeople_df = GroupPeople_df['ParlGroupName']
#GroupPeople_df['ParlGroupName'] = [groupId[x] for x in GroupPeople_df.values]
GroupPeople_df.to_json('GroupList.json')
GroupPeople_df
#groupId_df = pd.DataFrame.from_dict(groupId, orient='columns')
#groupId_df.to_json('GroupId.json')
groupId_inv = {groupId[k] : k for k in groupId}
groupId_inv_df = pd.DataFrame.from_dict(groupId_inv, orient='index')
groupId_inv_df.to_json('GroupId.json')
GroupPeople_df
Group_df = pd.DataFrame(index = voting_df_copy['ParlGroupName'].unique())
Group_df['MeanDistance'] = 0.0
Group_df['NumberOfPeople'] = 0# voting_df[voting_df['ParlGroupName'] == Group_df.index].shape[0]
Group_df
# Computes the mean distance from people within peopleGroup (list of string)
def meanDistance(peopleGroup):
d = 0.0
nbPairs = 0
for p1 in peopleGroup:
for p2 in peopleGroup:
#print(p1+' '+p2)
if not (p1 is p2):
nbPairs += 1
d += distance(profile_df.loc[p1].values,
profile_df.loc[p2].values)
return d / nbPairs
Group_df.loc['Groupe conservateur-catholique']
voting_df[voting_df['ParlGroupName'] == 'Groupe conservateur-catholique']['Name'].unique
print(len(list(voting_df[voting_df['ParlGroupName'] == 'Groupe conservateur-catholique']['Name'].unique())))
print(meanDistance(list(voting_df[voting_df['ParlGroupName'] == 'Groupe BD']['Name'].unique())))
for p in Group_df.index:
print(p)
#Group_df.loc[p].NumberOfPeople = len(list(voting_df[voting_df['ParlGroupName'] == p]['Name'].unique()))
Group_df.set_value(p, 'NumberOfPeople', len(list(voting_df[voting_df['ParlGroupName'] == p]['Name'].unique())))
print(Group_df.loc[p]['NumberOfPeople'])
#Group_df.loc[p]['MeanDistance'] = meanDistance(list(voting_df[voting_df['ParlGroupName'] == p]['Name'].unique()))
Group_df.set_value(p, 'MeanDistance', meanDistance(list(voting_df[voting_df['ParlGroupName'] == p]['Name'].unique())))
print(Group_df.loc[p]['MeanDistance'])
Group_df
meanDistance(list(voting_df['Name'].unique()))
# Computes the mean distance of people within group1 from one within group2 (lists of string)
def GroupDistance(group1, group2):
d = 0.0
nbPairs = 0
for p1 in group1:
for p2 in group2:
nbPairs += 1
d += distance(profile_df.loc[p1].values,
profile_df.loc[p2].values)
return d / nbPairs
partite = "Groupe socialiste"
for p in Group_df.index:
#dist = meanDistance(list(voting_df[voting_df['ParlGroupName'] == p]['Name'].unique()))
dist = GroupDistance(list(voting_df[voting_df['ParlGroupName'] == partite]['Name'].unique()),
list(voting_df[voting_df['ParlGroupName'] == p]['Name'].unique()))
print("Mean distance between partite {p1} and {p2} is : {d}".format(
p1 = partite, p2 = p, d = dist))
groups = Group_df.index
groupDistance_df = pd.DataFrame(index = groups, columns = groups)
for g1 in groups:
for g2 in groups:
groupDistance_df.set_value(g1, g2, GroupDistance(list(voting_df[voting_df['ParlGroupName'] == g1]['Name'].unique()),
list(voting_df[voting_df['ParlGroupName'] == g2]['Name'].unique())))
groupDistance_df
groupId_inv
list(voting_df[voting_df['Name'] == 'Didier Berberat']['Name'].unique())
GroupPeople_df.index
partite = "Groupe écologiste"
GroupPeople_df[GroupPeople_df == groupId[partite]]
maxDist = 0
furthestPerson = ""
for p in GroupPeople_df[GroupPeople_df == groupId[partite]].index:
dist = GroupDistance(list(voting_df[voting_df['ParlGroupName'] == partite]['Name'].unique()),
list(voting_df[voting_df['Name'] == p]['Name'].unique()))
if dist > maxDist:
maxDist = dist
furthestPerson = p
print("Mean distance of {person} to its partite {part} : {d}".format(person = p, part = partite, d = dist))
print("The person in partite {part} which is the furthest of the others in terms of voting is : {p}"
.format(part=partite, p = furthestPerson))
partite = "Groupe écologiste"
GroupPeople_df[GroupPeople_df == groupId[partite]]
maxDist = 0
furthestPerson = ""
for partite in groupDistance_df.index:
maxDist = 0
for p in GroupPeople_df[GroupPeople_df == groupId[partite]].index:
dist = GroupDistance(list(voting_df[voting_df['ParlGroupName'] == partite]['Name'].unique()),
list(voting_df[voting_df['Name'] == p]['Name'].unique()))
if dist > maxDist:
maxDist = dist
furthestPerson = p
#print("{pers} {d}".format(pers=p, d=dist))
#print("Mean distance of {person} to its partite {part} : {d}".format(person = p, part = partite, d = dist))
print("The person in partite {part} which is the furthest of the others in terms of voting is : {p} with distance {d}"
.format(part=partite, p = furthestPerson, d = maxDist))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Given an array sorted in ascending order, compute the minimum possible sum of arr[i] * (n - i); ascending order pairs the largest values with the smallest multipliers, which minimizes the weighted sum.
|
<ASSISTANT_TASK:>
Python Code:
def findMinSum(arr, n):
    # Assumes arr is sorted in ascending order: the largest values are then
    # paired with the smallest multipliers (n - i), which minimizes the sum.
    total = 0
    for i in range(n):
        total += arr[i] * (n - i)
    return total

arr = [3, 5, 7, 8]
n = len(arr)
print(findMinSum(arr, n))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup
Step2: Reading Data
Step3: Normalize
Step4: To match the axis that theano expects (channel on axis 1)
Step5: As expected from theano
Step6: A look at the data
Step7: Batchnorm + dropout + data augmentation
Step8: Ensembling
Step9: So we should have around 99.5% accuracy. Let's see
|
<ASSISTANT_TASK:>
Python Code:
from theano.sandbox import cuda
cuda.use('gpu2')
%matplotlib inline
import utils; reload(utils)
from utils import *
from __future__ import division, print_function
path="data/mnist/"
model_path = path + 'models/'
results_path = path + 'results/'
submissions_path = path + 'submissions/'
if not os.path.exists(model_path): os.mkdir(model_path)
if not os.path.exists(results_path): os.mkdir(results_path)
if not os.path.exists(submissions_path): os.mkdir(submissions_path)
batch_size=64
training = pd.read_csv(path+'train.csv')
training.head()
test = pd.read_csv(path+"test.csv")
test.head()
Y_train = training['label'].values
Y_train
X_train = training.iloc[:,1:].values
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1) #reshape to rectangular
X_train = X_train/255 #pixel values are 0 - 255 - this makes puts them in the range 0 - 1
X_val = X_train[32000:]
X_train = X_train[0:32000]
Y_val = Y_train[32000:]
Y_train = Y_train[0:32000]
X_test = test.iloc[:,0:].values
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1) #reshape to rectangular
X_test = X_test/255 #pixel values are 0 - 255 - this makes puts them in the range 0 - 1
(X_train.shape, Y_train.shape, X_val.shape, Y_val.shape)
X_train = np.expand_dims(X_train,1)
X_val = np.expand_dims(X_val,1)
X_train = np.squeeze(X_train,4)
X_val = np.squeeze(X_val,4)
(X_train.shape, Y_train.shape, X_val.shape, Y_val.shape)
Y_train = onehot(Y_train)
Y_val = onehot(Y_val)
Y_train[:5]
mean_px = X_train.mean().astype(np.float32)
std_px = X_train.std().astype(np.float32)
def norm_input(x): return (x-mean_px)/std_px
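# Sanity check: the normalized training data should be roughly zero-mean
# with unit standard deviation.
print("normalized mean: {:.4f}, std: {:.4f}".format(norm_input(X_train).mean(), norm_input(X_train).std()))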
def plots(ims, interp=False, titles=None):
ims=np.array(ims)
mn,mx=ims.min(),ims.max()
f = plt.figure(figsize=(12,24))
for i in range(len(ims)):
sp=f.add_subplot(1, len(ims), i+1)
if not titles is None: sp.set_title(titles[i], fontsize=18)
plt.imshow(ims[i], interpolation=None if interp else 'none', vmin=mn,vmax=mx)
def plot(im, interp=False):
f = plt.figure(figsize=(3,6), frameon=True)
plt.imshow(im, interpolation=None if interp else 'none')
plt.gray()
plt.close()
digits = np.squeeze(X_train, 1)
digits.shape
plots(digits[0:8], titles=Y_train.argmax(axis=1)[0:8])
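# The fit_model() function below relies on `batches` / `val_batches`, which this
# excerpt never defines. A plausible reconstruction using Keras data augmentation
# follows; the exact augmentation parameters (and that `image` is re-exported by
# utils from keras.preprocessing) are assumptions, not taken from the original.
gen = image.ImageDataGenerator(rotation_range=8, width_shift_range=0.08,
                               shear_range=0.3, height_shift_range=0.08,
                               zoom_range=0.08)
batches = gen.flow(X_train, Y_train, batch_size=batch_size)
val_batches = image.ImageDataGenerator().flow(X_val, Y_val, batch_size=batch_size)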
def get_model_bn_do():
model = Sequential([
Lambda(norm_input, input_shape=(1,28,28)),
Convolution2D(32,3,3, activation='relu'),
BatchNormalization(axis=1),
Convolution2D(32,3,3, activation='relu'),
MaxPooling2D(),
BatchNormalization(axis=1),
Convolution2D(64,3,3, activation='relu'),
BatchNormalization(axis=1),
Convolution2D(64,3,3, activation='relu'),
MaxPooling2D(),
Flatten(),
BatchNormalization(),
Dense(512, activation='relu'),
BatchNormalization(),
Dropout(0.5),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
def fit_model():
model = get_model_bn_do()
model.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=val_batches, nb_val_samples=val_batches.N)
model.optimizer.lr=0.1
model.fit_generator(batches, batches.N, nb_epoch=4,
validation_data=val_batches, nb_val_samples=val_batches.N)
model.optimizer.lr=0.01
model.fit_generator(batches, batches.N, nb_epoch=12,
validation_data=val_batches, nb_val_samples=val_batches.N)
model.optimizer.lr=0.001
model.fit_generator(batches, batches.N, nb_epoch=18,
validation_data=val_batches, nb_val_samples=val_batches.N)
return model
models = [fit_model() for i in range(6)]
for i,m in enumerate(models):
m.save_weights(model_path+'ensmb-'+str(i)+'.h5')
evals = np.array([m.evaluate(X_val, Y_val, batch_size=256) for m in models])
evals.mean(axis=0)
all_preds = np.stack([m.predict(X_val, batch_size=256) for m in models])
all_preds.shape
avg_preds = all_preds.mean(axis=0)
keras.metrics.categorical_accuracy(Y_val, avg_preds).eval()
X_test.shape
X_test = np.expand_dims(X_test,1)
X_test = np.squeeze(X_test,4)
all_preds = np.stack([m.predict(X_test, batch_size=256) for m in models])
predictions = all_preds.mean(axis=0)
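# A minimal sketch of turning the ensembled test predictions into a Kaggle-style
# submission file. The file name and column names below are assumptions based on
# the usual "Digit Recognizer" format, not taken from the original notebook.
labels = predictions.argmax(axis=1)
submission = pd.DataFrame({'ImageId': np.arange(1, len(labels) + 1), 'Label': labels})
submission.to_csv(submissions_path + 'ensemble.csv', index=False)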
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Replace the variable values in the cell below
Step2: Create a Dataset from BigQuery
Step3: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http
Step6: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
Step7: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
Step8: AutoML for text classification requires that
Step9: Let's make sure we have roughly the same number of labels for each of our three labels
Step10: Finally we will save our data, which is currently in-memory, to disk.
Step11: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
Step12: Let's write the sample dataset to disk.
Step13: Note
Step14: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located
Step15: Loading the dataset
Step16: Integerize the texts
Step17: Let's now implement a function create_sequences that will convert the titles into integer sequences and pad them to MAX_LEN
Step18: We now need to write a function that encodes the newspaper sources into one-hot label vectors
Step19: Preparing the train/test splits
Step20: To be on the safe side, we verify that the train and test splits have roughly the same distribution of labels
Step21: Using create_sequences and encode_labels, we can now prepare the training and validation sets
Step22: Building a DNN model
Step23: Below we train the model for 100 epochs, adding an EarlyStopping callback that stops training as soon as the validation loss has not improved for the number of epochs specified by PATIENCE. Note that we also give the model.fit method a TensorBoard callback so that we can later compare all the models using TensorBoard.
Step24: Building a RNN model
Step25: Let's train the model with early stopping as above.
Step26: Build a CNN model
Step27: Let's train the model.
|
<ASSISTANT_TASK:>
Python Code:
import os
from google.cloud import bigquery
import pandas as pd
%load_ext google.cloud.bigquery
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # Replace with your REGION
SEED = 0
%%bigquery --project $PROJECT
SELECT
url, title, score
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
LENGTH(title) > 10
AND score > 10
AND LENGTH(url) > 0
LIMIT 10
%%bigquery --project $PROJECT
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
COUNT(title) AS num_articles
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
AND LENGTH(title) > 10
GROUP BY
source
ORDER BY num_articles DESC
LIMIT 100
regex = '.*://(.[^/]+)/'
sub_query = """
SELECT
    title,
    ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source
FROM
    `bigquery-public-data.hacker_news.stories`
WHERE
    REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')
    AND LENGTH(title) > 10
""".format(regex)

query = """
SELECT
    LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,
    source
FROM
    ({sub_query})
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
""".format(sub_query=sub_query)
print(query)
bq = bigquery.Client(project=PROJECT)
title_dataset = bq.query(query).to_dataframe()
title_dataset.head()
print("The full dataset contains {n} titles".format(n=len(title_dataset)))
title_dataset.source.value_counts()
DATADIR = './data/'
if not os.path.exists(DATADIR):
os.makedirs(DATADIR)
FULL_DATASET_NAME = 'titles_full.csv'
FULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)
# Let's shuffle the data before writing it to disk.
title_dataset = title_dataset.sample(n=len(title_dataset))
title_dataset.to_csv(
FULL_DATASET_PATH, header=False, index=False, encoding='utf-8')
sample_title_dataset = title_dataset.sample(n=1000)
sample_title_dataset.source.value_counts()
SAMPLE_DATASET_NAME = 'titles_sample.csv'
SAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)
sample_title_dataset.to_csv(
SAMPLE_DATASET_PATH, header=False, index=False, encoding='utf-8')
sample_title_dataset.head()
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.0 || pip install tensorflow==2.0
import os
import shutil
import pandas as pd
import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard, EarlyStopping
from tensorflow.keras.layers import (
Embedding,
Flatten,
GRU,
Conv1D,
Lambda,
Dense,
)
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import to_categorical
print(tf.__version__)
%matplotlib inline
LOGDIR = "./text_models"
DATA_DIR = "./data"
DATASET_NAME = "titles_full.csv"
TITLE_SAMPLE_PATH = os.path.join(DATA_DIR, DATASET_NAME)
COLUMNS = ['title', 'source']
titles_df = pd.read_csv(TITLE_SAMPLE_PATH, header=None, names=COLUMNS)
titles_df.head()
tokenizer = Tokenizer()
tokenizer.fit_on_texts(titles_df.title)
integerized_titles = tokenizer.texts_to_sequences(titles_df.title)
integerized_titles[:3]
VOCAB_SIZE = len(tokenizer.index_word)
VOCAB_SIZE
DATASET_SIZE = tokenizer.document_count
DATASET_SIZE
MAX_LEN = max(len(sequence) for sequence in integerized_titles)
MAX_LEN
# TODO 1
def create_sequences(texts, max_len=MAX_LEN):
sequences = tokenizer.texts_to_sequences(texts)
padded_sequences = pad_sequences(sequences, max_len, padding='post')
return padded_sequences
sequences = create_sequences(titles_df.title[:3])
sequences
titles_df.source[:4]
CLASSES = {
'github': 0,
'nytimes': 1,
'techcrunch': 2
}
N_CLASSES = len(CLASSES)
# TODO 2
def encode_labels(sources):
classes = [CLASSES[source] for source in sources]
one_hots = to_categorical(classes)
return one_hots
encode_labels(titles_df.source[:4])
N_TRAIN = int(DATASET_SIZE * 0.80)
titles_train, sources_train = (
titles_df.title[:N_TRAIN], titles_df.source[:N_TRAIN])
titles_valid, sources_valid = (
titles_df.title[N_TRAIN:], titles_df.source[N_TRAIN:])
sources_train.value_counts()
sources_valid.value_counts()
X_train, Y_train = create_sequences(titles_train), encode_labels(sources_train)
X_valid, Y_valid = create_sequences(titles_valid), encode_labels(sources_valid)
X_train[:3]
Y_train[:3]
def build_dnn_model(embed_dim):
model = Sequential([
Embedding(VOCAB_SIZE + 1, embed_dim, input_shape=[MAX_LEN]), # TODO 3
Lambda(lambda x: tf.reduce_mean(x, axis=1)), # TODO 4
Dense(N_CLASSES, activation='softmax') # TODO 5
])
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
return model
%%time
tf.random.set_seed(33)
MODEL_DIR = os.path.join(LOGDIR, 'dnn')
shutil.rmtree(MODEL_DIR, ignore_errors=True)
BATCH_SIZE = 300
EPOCHS = 100
EMBED_DIM = 10
PATIENCE = 0
dnn_model = build_dnn_model(embed_dim=EMBED_DIM)
dnn_history = dnn_model.fit(
X_train, Y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=(X_valid, Y_valid),
callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],
)
pd.DataFrame(dnn_history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(dnn_history.history)[['accuracy', 'val_accuracy']].plot()
dnn_model.summary()
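# A small sketch of using the trained DNN on a brand-new headline; the sample
# title below is invented purely for illustration.
inv_classes = {v: k for k, v in CLASSES.items()}
sample_seq = create_sequences(['google releases a new machine learning framework'])
probs = dnn_model.predict(sample_seq)
print(inv_classes[probs[0].argmax()], probs[0])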
def build_rnn_model(embed_dim, units):
model = Sequential([
Embedding(VOCAB_SIZE + 1, embed_dim, input_shape=[MAX_LEN], mask_zero=True), # TODO 3
GRU(units), # TODO 5
Dense(N_CLASSES, activation='softmax')
])
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
return model
%%time
tf.random.set_seed(33)
MODEL_DIR = os.path.join(LOGDIR, 'rnn')
shutil.rmtree(MODEL_DIR, ignore_errors=True)
EPOCHS = 100
BATCH_SIZE = 300
EMBED_DIM = 10
UNITS = 16
PATIENCE = 0
rnn_model = build_rnn_model(embed_dim=EMBED_DIM, units=UNITS)
history = rnn_model.fit(
X_train, Y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=(X_valid, Y_valid),
callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],
)
pd.DataFrame(history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()
rnn_model.summary()
def build_cnn_model(embed_dim, filters, ksize, strides):
model = Sequential([
Embedding(
VOCAB_SIZE + 1,
embed_dim,
input_shape=[MAX_LEN],
mask_zero=True), # TODO 3
Conv1D( # TODO 5
filters=filters,
kernel_size=ksize,
strides=strides,
activation='relu',
),
Flatten(), # TODO 5
Dense(N_CLASSES, activation='softmax')
])
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
return model
%%time
tf.random.set_seed(33)
MODEL_DIR = os.path.join(LOGDIR, 'cnn')
shutil.rmtree(MODEL_DIR, ignore_errors=True)
EPOCHS = 100
BATCH_SIZE = 300
EMBED_DIM = 5
FILTERS = 200
STRIDES = 2
KSIZE = 3
PATIENCE = 0
cnn_model = build_cnn_model(
embed_dim=EMBED_DIM,
filters=FILTERS,
strides=STRIDES,
ksize=KSIZE,
)
cnn_history = cnn_model.fit(
X_train, Y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=(X_valid, Y_valid),
callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],
)
pd.DataFrame(cnn_history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(cnn_history.history)[['accuracy', 'val_accuracy']].plot()
cnn_model.summary()
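# Side-by-side comparison of the best validation accuracy reached by each of the
# three models, pulled from the Keras History objects collected above.
pd.DataFrame({
    'model': ['DNN', 'RNN', 'CNN'],
    'best_val_accuracy': [
        max(dnn_history.history['val_accuracy']),
        max(history.history['val_accuracy']),
        max(cnn_history.history['val_accuracy']),
    ],
})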
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Modified recommendation functions to perform leave-one-out validation
Step2: Ran recommendation algorithm for the first 10 followers (with 1,000+ tweets and shared articles) each of a conservative publication and a liberal publication not on the list of publications used for model training
Step3: Computed the average cosine similarity between user scores from the full and partial user dataset as a measure of recommendation stability
Step4: Ideology Cluster Match
|
<ASSISTANT_TASK:>
Python Code:
from bubble_popper_model import twitter_profile,twitter_links,twitter_articles
from bubble_popper_model import clean_articles,article_topics,publication_scores
from bubble_popper_model import define_bubble,burst_bubble
import tweepy
from sqlalchemy import create_engine
from sqlalchemy_utils import database_exists, create_database
import psycopg2
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np
from time import sleep
import pickle
with open ("bubble_popper_twitter.txt","r") as myfile:
lines = [line.replace("\n","") for line in myfile.readlines()]
consumer_key, consumer_secret = lines[0], lines[1]
auth = tweepy.AppAuthHandler(consumer_key, consumer_secret)
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
with open ("bubble_popper_postgres.txt","r") as myfile:
lines = [line.replace("\n","") for line in myfile.readlines()]
db, us, pw = 'bubble_popper', lines[0], lines[1]
engine = create_engine('postgresql://%s:%s@localhost:5432/%s'%(us,pw,db))
connstr = "dbname='%s' user='%s' host='localhost' password='%s'"%(db,us,pw)
conn = None; conn = psycopg2.connect(connstr)
# Use full article set for recommendations
def run_popper(articles,tweets,links,badlinks,friends,comfort_level,conn):
doc_set = clean_articles(articles,tweets)
doc_data = article_topics(doc_set)
pub_data = publication_scores(links,badlinks,friends,conn)
user_score,user_bubble = define_bubble(pub_data.squeeze(),doc_data)
recs, alt_bubble = burst_bubble(user_score,user_bubble,comfort_level,conn)
return recs,user_score,user_bubble,alt_bubble
# Use article set minus one (for each article)
def leave_one_out(articles,tweets,links,badlinks,friends,comfort_level,conn):
recsList,scoreList,bubbleList,altList = [],[],[],[]
links = [link for link in links if link not in badlinks]
for leave_out in range(len(articles)):
article_out = articles.pop(leave_out)
link_out = links.pop(leave_out)
tweets_keep = tweets.drop(tweets.index[leave_out])
friend_out = friends.pop(leave_out)
doc_set = clean_articles(articles,tweets_keep)
doc_data = article_topics(doc_set)
pub_data = publication_scores(links,badlinks,friends,conn,num_articles=len(articles))
user_score,user_bubble = define_bubble(pub_data.squeeze(),doc_data)
recs,alt_bubble = burst_bubble(user_score,user_bubble,comfort_level,conn)
recsList.append(recs)
scoreList.append(user_score)
bubbleList.append(user_bubble)
altList.append(alt_bubble)
articles.insert(leave_out,article_out)
links.insert(leave_out,link_out)
friends.insert(leave_out,friend_out)
return recsList,scoreList,bubbleList,altList
sleep_time = 5
comfort_level = 2
recsFull,scoreFull,bubbleFull,altFull = {},{},{},{}
recsMinus,scoreMinus,bubbleMinus,altMinus = {},{},{},{}
with open ("bubble_popper_users.txt","r") as myfile:
users = [line.replace("\n","") for line in myfile.readlines()]
for user in users:
tweets,friends = twitter_profile(user,api)
links,tweets = twitter_links(tweets,conn)
articles,badlinks = twitter_articles(links)
recsFull[user],scoreFull[user],bubbleFull[user],altFull[user] = run_popper(articles,tweets,links,badlinks,friends,comfort_level,conn)
recsMinus[user],scoreMinus[user],bubbleMinus[user],altMinus[user] = leave_one_out(articles,tweets,links,badlinks,friends,comfort_level,conn)
sleep(sleep_time)
pickle.dump([recsFull,scoreFull,bubbleFull,altFull,recsMinus,scoreMinus,bubbleMinus,altMinus],open('bubble_popper_results.pkl','wb'))
recStability = []
for user in scoreFull.keys():
cossim = []
for leave_out in range(len(scoreMinus[user])):
cossim.append(cosine_similarity(scoreFull[user].reshape(1,-1),scoreMinus[user][leave_out].reshape(1,-1)))
recStability.append(np.mean(cossim))
avgStability = np.mean(recStability)
avgStability
# 0 = mostly liberal, 1 = mostly conservative, 2 = mixed liberal, 3 = mixed conservative
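# NOTE: handLabel is assumed to be a dict mapping each username to a
# hand-labelled ideology cluster; it is not defined anywhere in this notebook.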
ideoMatch = []
for user in bubbleFull.keys():
ideoMatch.append((handLabel[user],bubbleFull[user].tolist()[0],np.equal(handLabel[user],bubbleFull[user].tolist()[0])))
ideoMatch
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Total notifications
Step2: 5 or more?
Step3: And what percent of users got 25 notifications or more—becoming more or less "daily notified"?
Step4: That's lower than I expected at English Wikipedia. It only had about 1,200 users with at least 30 notifications per month, compared to 3,500 highly active users (100+ edits) per month. However, both Flow wikis have higher percentages than the non-Flow wikis.
Step5: Graphs
Step6: So, as expected, all the wikis have a pretty regular power-law distribution of notifications.
Step7: 5 or more
Step8: 25 or more
Step9: 100 or more
Step10: Histograms
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.axes as ax
%matplotlib inline
notifs = pd.read_table("./notifications_per_user.tsv")
unreads = pd.read_table("./unread_notifications_per_user.tsv")
wikis = set(notifs["wiki"])
notifs.tail()
def filter_by_wiki( df, wiki ):
return df[ df["wiki"] == wiki ].iloc[:, 1:3]
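# NOTE: callers should pass a matplotlib Axes as `ax`; the module-level plt
# default would fail on the set_title/set_xlabel calls below.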
def plot_by_wiki( df, wiki, range = (5, 104), bins = 20, ax = plt ):
dist = filter_by_wiki( df, wiki )
ax.hist( dist.iloc[:, 0], bins = bins, range = range, weights = dist.iloc[:, 1])
ax.set_title(wiki)
ax.set_xlabel( "Number of notifications" )
ax.set_ylabel( "Users" )
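# Summarise, per wiki, the users strictly under/over a notification-count
# threshold, together with their share of all users and of all notifications.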
def beyond_threshold(df, wikis, threshold, direction):
columns = [
"wiki",
"users",
"% of users",
"% of notifications"
]
results = []
for wiki in wikis:
by_wiki = filter_by_wiki(df, wiki)
total_users = by_wiki.iloc[:, 1].sum()
total_notifs = 0
for row in by_wiki.iterrows():
total_notifs += row[1][0] * row[1][1]
if direction == "under":
beyond_threshold = by_wiki[ by_wiki.iloc[:, 0] < threshold ]
elif direction == "over":
beyond_threshold = by_wiki[ by_wiki.iloc[:, 0] > threshold ]
users_beyond_threshold = beyond_threshold.iloc[:, 1].sum()
notifs_beyond_threshold = 0
for row in beyond_threshold.iterrows():
notifs_beyond_threshold += row[1][0] * row[1][1]
user_proportion = users_beyond_threshold / total_users
notifs_proportion = notifs_beyond_threshold / total_notifs
results.append([
wiki,
users_beyond_threshold,
round(user_proportion * 100, 1),
round(notifs_proportion * 100, 1)
])
results = pd.DataFrame(results, columns=columns)
return results
beyond_threshold( notifs, wikis, 5, "under")
beyond_threshold(notifs, wikis, 4, "over")
beyond_threshold(notifs, wikis, 24, "over")
beyond_threshold(notifs, wikis, 99, "over")
fig, axarr = plt.subplots( 5, 1, figsize=(12,30) )
fig.suptitle("Total notifications per user", fontsize=24)
fig.subplots_adjust(top=0.95)
i = 0
for wiki in wikis:
plot_by_wiki(notifs, wiki, ax = axarr[i])
i = i + 1
beyond_threshold(unreads, wikis, 5, "under")
beyond_threshold(unreads, wikis, 4, "over")
beyond_threshold(unreads, wikis, 24, "over")
beyond_threshold(unreads, wikis, 99, "over")
fig, axarr = plt.subplots( 5, 1, figsize=(12,30) )
fig.suptitle("Unread notifications per user", fontsize=24)
fig.subplots_adjust(top=0.95)
i = 0
for wiki in wikis:
plot_by_wiki(unreads, wiki, ax = axarr[i])
i = i + 1
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data (don't change this if you're running the notebook on the cluster)
Step2: Exploring Dates
Step3: Convert into date-time type
Step4: Part 1 - Observations from the data
Step5: Part 2 - Seasonal Model
Step6: Crime per year
Step7: Let's look at residential burglary.
Step8: Normalized over the annual average
Step9: Fitting the regression line
|
<ASSISTANT_TASK:>
Python Code:
### Load libraries
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
help(plt.legend)
%%time
df = pd.read_excel('/home/data/APD/COBRA083016_2015.xlsx', sheetname='Query')
df.shape
for c in df.columns:
print(c)
df.head()
df.describe()
df.offense_id.min(), df.offense_id.max()
df.columns
crime_summary = df.groupby(['UC2 Literal', 'neighborhood']).offense_id.count()
crime_summary.index
crime_summary.reset_index().head(20)
df["Zone"] = df.beat // 100
df
df[['offense_id', 'occur_date', 'occur_time', 'rpt_date']][1:10]
df['occur_ts'] = pd.to_datetime(df.occur_date+' '+df.occur_time)
#df[['offense_id', 'occur_date', 'occur_time', 'occur_ts', 'rpt_date']][1:10]
df['occur_ts'] = pd.to_datetime(df.occur_date+' '+df.occur_time)
df['occur_month'] = df['occur_ts'].map(lambda x: x.month)
df['occur_woy'] = df.occur_ts.dt.weekofyear
df.describe()
df.shape
df.columns
df.iloc[9]
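# NOTE: resdf is created a couple of cells below; these exploratory lookups
# assume those cells have already been run.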
#resdf.index
resdf.loc[['AUTO_THEFT', 6]]
resdf = df.groupby(['UC2 Literal', 'occur_date']).offense_id.count()
resdf
resdf['BURGLARY-RESIDENCE'].as_matrix()
resdf['BURGLARY-RESIDENCE'].iloc(0)
resdf['BURGLARY-RESIDENCE']
%matplotlib inline
fig = plt.figure(figsize=(10,6)) # 10in x 6in
#plt.plot(resdf['BURGLARY-RESIDENCE'].index, resdf['BURGLARY-RESIDENCE'])
plt.scatter(resdf['BURGLARY-RESIDENCE'].index, resdf['BURGLARY-RESIDENCE'].values, marker='x')
# plt.scatter(resdf['BURGLARY-NONRES'].index, resdf['BURGLARY-NONRES'], marker='o')
# plt.ylim(0, 500)
# plt.title('BURGLARY-RESIDENCE')
# plt.xticks(range(13), ['', 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
# fig.savefig('BurglaryResidence_over_month.svg')
# x = 1
def getTheMonth(x):
return x.month
df['occur_month'] = df['occur_ts'].map(getTheMonth)
df = pd.read_excel('/home/data/APD/COBRA083016_2015.xlsx', sheetname='Query')
df['occur_ts'] = pd.to_datetime(df.occur_date+' '+df.occur_time)
df['occur_month'] = df['occur_ts'].map(lambda x: x.month)
df['occur_woy'] = df.occur_ts.dt.weekofyear
%matplotlib inline
resdf = df.groupby(['UC2 Literal', 'occur_month']).offense_id.count()
fig = plt.figure(figsize=(10,6))
plt.scatter(resdf['BURGLARY-RESIDENCE'].index, resdf['BURGLARY-RESIDENCE'], marker='x')
plt.ylim(0, 500)
plt.title('BURGLARY-RESIDENCE')
plt.xticks(range(13), ['', 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
plt.savefig('quiz3-burglary-residence.png')
''
plt.savefig('quiz3-burglary-residence.png')
# pd.unique(df)  # how to get duplicate records? (pd.unique expects 1-D values, so passing the whole DataFrame raises)
pd.unique(df['UC2 Literal'])
len(pd.unique(df.MI_PRINX))
## load complete dataset
dff = pd.read_excel('/home/data/APD/COBRA083016.xlsx', sheetname='Query')
dff.shape
for evt in ['occur', 'poss']:
dff['%s_ts'%evt] = pd.to_datetime(dff['%s_date'%evt]+' '+dff['%s_time'%evt])
dff['rpt_ts'] = pd.to_datetime(dff.rpt_date)
', '.join(dff.columns)
dff['occur_year'] = dff.occur_ts.dt.year
dff['occur_month'] = dff.occur_ts.dt.month
dff['occur_dayweek'] = dff.occur_ts.dt.dayofweek
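# Offence counts per (crime type, year) for 2009-2015, used for the bar charts below.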
crime_year = dff[dff.occur_year.between(2009, 2015)].groupby(by=['UC2 Literal', 'occur_year']).offense_id.count()
%matplotlib inline
fig = plt.figure(figsize=(40,30))
crime_types = crime_year.index.levels[0]
years = crime_year.index.levels[1]
for c in range(len(crime_types)):
y_max = max(crime_year.loc[crime_types[c]])
plt.subplot(4,3,c+1)
plt.hlines(crime_year.loc[crime_types[c]].iloc[-1]*100/y_max, years[0], years[-1], linestyles="dashed", color="r")
plt.bar(crime_year.loc[crime_types[c]].index, crime_year.loc[crime_types[c]]*100/y_max, label=crime_types[c], alpha=0.5)
##plt.legend()
plt.ylim(0, 100)
plt.xticks(years+0.4, [str(int(y)) for y in years], rotation=0, fontsize=24)
plt.yticks([0,20,40,60,80,100], ['0%','20%','40%','60%','80%','100%'], fontsize=24)
plt.title(crime_types[c], fontsize=30)
None
c = 3
crime_types[c]
crime_year_month = dff[dff.occur_year.between(2009, 2015)].groupby(by=['UC2 Literal', 'occur_year', 'occur_month']).offense_id.count()
c = 3 ## 'BURGLARY-RESIDENCE'
resburglaries = crime_year_month.loc[crime_types[c]]
fig = plt.figure(figsize=(20,10))
for y in years:
plt.plot(resburglaries.loc[y].index, resburglaries.loc[y], label=("%4.0f"%y))
plt.legend()
plt.title("Seasonal Trends - %s"%crime_types[c], fontsize=20)
plt.xticks(range(13), ['', 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
plt.xlim(0,13)
None
c = 3 ## 'BURGLARY-RESIDENCE'
fig = plt.figure(figsize=(20,10))
for y in years:
avg = resburglaries.loc[y].mean()
plt.hlines(avg, 1, 13, linestyle='dashed')
plt.plot(resburglaries.loc[y].index, resburglaries.loc[y], label=("%4.0f"%y))
plt.legend()
plt.title("Seasonal Trends - %s (with annuale averages)"%crime_types[c], fontsize=20)
plt.xticks(list(range(1,13)), ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
plt.xlim(0,13)
None
c = 3 ## 'BURGLARY-RESIDENCE'
fig = plt.figure(figsize=(20,10))
for y in years:
avg = resburglaries.loc[y].mean()
std = resburglaries.loc[y].std()
##plt.hlines(avg, 1, 13, linestyle='dashed')
plt.plot(resburglaries.loc[y].index, (resburglaries.loc[y]-avg)/std, label=("%4.0f"%y))
plt.legend()
plt.title("Seasonal Trends - %s (normalized)"%crime_types[c], fontsize=20)
plt.xticks(list(range(1,13)), ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
plt.xlim(0,13)
plt.ylabel("Standard deviations $\sigma_y$")
None
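# Per-month mean of the residential-burglary counts across years, i.e. the
# seasonal profile that an adjustment/regression would use.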
seasonal_adjust = resburglaries.reset_index().groupby(by=['occur_month']).offense_id.agg('mean')
### in case we want to save a DataFrame
#writer = pd.ExcelWriter('myresults.xlsx')
#df.to_excel(writer,'Results')
#writer.save()
#resdf
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then let's import all the libraries we need...
Step3: Make the covariance kernel a squared-exponential,
Step5: We can use this kernel to calculate the value of each element in our covariance matrix
Step6: Using this kernel we can then recreate Fig. 5 from Roberts et al. (2012).
Step7: If we then take the final realization, which has $\lambda = 5$, and select 5 points from it randomly we can calculate the posterior mean and variance at every point based on those five input data.
Step8: We then use our training data points to define a covariance matrix
Step9: For each of our test data points we can then make a prediction of the value at $x_{\ast}$ and the uncertainly (standard deviation)
Step10: Let's plot this up
Step11: [Note
Step12: For what follows I'm going to use the george library. The main reason for this is that the matrix inversion in numpy is not very stable for large (> (10 x 10)) matrices.
Step13: Note that the george ExpSquaredKernel has a factor of two in the denominator of the exponent. So whereas we specified that $\lambda_{\rm true} = 5.0$, we have found that $\lambda_{\rm fit} = 3.46$... (or similar)
Step14: We can also extract the expectation value for each parameter and the individual confidence intervals
|
<ASSISTANT_TASK:>
Python Code:
#%matplotlib inline
import numpy as np
import pylab as pl
from scipy import linalg as sl
def cov_kernel(x1,x2,h,lam):
    """Squared-Exponential covariance kernel."""
k12 = h**2*np.exp(-1.*(x1 - x2)**2/lam**2)
return k12
def make_K(x, h, lam):
Make covariance matrix from covariance kernel
# for a data array of length x, make a covariance matrix x*x:
K = np.zeros((len(x),len(x)))
for i in range(0,len(x)):
for j in range(0,len(x)):
if (i==j):
noise = 1e-5
else:
noise = 0.0
# calculate value of K for each separation:
K[i,j] = cov_kernel(x[i],x[j],h,lam) + noise**2
return K
# make an array of 200 evenly spaced positions between 0 and 20:
x1 = np.arange(0, 20.,0.01)
for i in range(0,3):
h = 1.0
if (i==0): lam = 0.1
if (i==1): lam = 1.0
if (i==2): lam = 5.0
# make a covariance matrix:
K = make_K(x1,h,lam)
# five realisations:
for j in range(0,5):
# draw samples from a co-variate Gaussian distribution, N(0,K):
y1 = np.random.multivariate_normal(np.zeros(len(x1)),K)
tmp2 = '23'+str(i+3+1)
pl.subplot(int(tmp2))
pl.plot(x1,y1)
tmp1 = '23'+str(i+1)
pl.subplot(int(tmp1))
pl.imshow(K)
pl.title(r"$\lambda = $"+str(lam))
pl.show()
# set number of training points
nx_training = 5
# randomly select the training points:
tmp = np.random.uniform(low=0.0, high=2000.0, size=nx_training)
tmp = tmp.astype(int)
condition = np.zeros_like(x1)
for i in tmp: condition[i] = 1.0
y_train = y1[np.where(condition==1.0)]
x_train = x1[np.where(condition==1.0)]
y_test = y1[np.where(condition==0.0)]
x_test = x1[np.where(condition==0.0)]
# define the covariance matrix:
K = make_K(x_train,h,lam)
# take the inverse:
iK = np.linalg.inv(K)
print 'determinant: ',np.linalg.det(K), sl.det(K)
mu=[];sig=[]
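# Posterior at each test point x_*:
#   mu_*  = K_*^T K^{-1} y
#   var_* = k(x_*, x_*) - K_*^T K^{-1} K_*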
for xx in x_test:
# find the 1d covariance matrix:
K_x = cov_kernel(xx, x_train, h, lam)
# find the kernel for (x,x):
k_xx = cov_kernel(xx, xx, h, lam)
# calculate the posterior mean and variance:
mu_xx = np.dot(K_x.T,np.dot(iK,y_train))
sig_xx = k_xx - np.dot(K_x.T,np.dot(iK,K_x))
mu.append(mu_xx)
sig.append(np.sqrt(np.abs(sig_xx))) # note sqrt to get stdev from variance
# mu and sig are currently lists - turn them into numpy arrays:
mu=np.array(mu);sig=np.array(sig)
# make some plots:
# left-hand plot
ax = pl.subplot(121)
pl.scatter(x_train,y_train) # plot the training points
pl.plot(x1,y1,ls=':') # plot the original data they were drawn from
pl.title("Input")
# right-hand plot
ax = pl.subplot(122)
pl.plot(x_test,mu,ls='-') # plot the predicted values
pl.plot(x_test,y_test,ls=':') # plot the original values
# shade in the area inside a one standard deviation bound:
ax.fill_between(x_test,mu-sig,mu+sig,facecolor='lightgrey', lw=0, interpolate=True)
pl.title("Predicted")
pl.scatter(x_train,y_train) # plot the training points
# display the plot:
pl.show()
# set number of training points
frac = 0.05
nx_training = int(len(x1)*frac)
print "Using ",nx_training," points."
# randomly select the training points:
tmp = np.random.uniform(low=0.0, high=2000.0, size=nx_training)
tmp = tmp.astype(int)
condition = np.zeros_like(x1)
for i in tmp: condition[i] = 1.0
y_train = y1[np.where(condition==1.0)]
x_train = x1[np.where(condition==1.0)]
y_test = y1[np.where(condition==0.0)]
x_test = x1[np.where(condition==0.0)]
pl.scatter(x_train,y_train) # plot the training points
pl.plot(x1,y1,ls=':') # plot the original data they were drawn from
pl.title("Training Data")
pl.show()
import george
from george import kernels
import emcee
# we need to define three functions:
# a log likelihood, a log prior & a log posterior.
# set the loglikelihood:
def lnlike2(p, x, y):
# update kernel parameters:
gp.kernel[:] = p
# compute covariance matrix:
gp.compute(x)
# calculate the likelihood:
ll = gp.lnlikelihood(y, quiet=True)
# return
    return ll if np.isfinite(ll) else -np.inf  # an invalid parameter set should be rejected, not rewarded
# set the logprior
def lnprior2(p):
# note that "p" contains the ln()
# of the parameter values - set your
# prior ranges appropriately!
lnh,lnlam,lnsig = p
# really crappy prior:
if (-2.<lnh<5.) and (-1.<lnlam<10.) and (-30.<lnsig<0.):
return 0.0
return -np.inf
# set the logposterior:
def lnprob2(p, x, y):
lp = lnprior2(p)
return lp + lnlike2(p, x, y) if np.isfinite(lp) else -np.inf
# initiate george with the exponential squared kernel:
kernel = 1.0*kernels.ExpSquaredKernel(30.0)+kernels.WhiteKernel(1e-5)
gp = george.GP(kernel, mean=0.0)
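# NOTE: gp.kernel[:] exposes the hyperparameters in ln() form, and
# ExpSquaredKernel takes the *squared* length-scale (with a factor of two in
# its exponent), hence the sqrt(exp(...)) conversions when reporting results.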
# put all the data into a single array:
data = (x_train,y_train)
# set your initial guess parameters
# remember george keeps these in ln() form!
initial = gp.kernel[:]
print "Initial guesses: ",np.exp(initial)
# set the dimension of the prior volume
# (i.e. how many parameters do you have?)
ndim = len(initial)
print "Number of parameters: ",ndim
# The number of walkers needs to be more than twice
# the dimension of your parameter space... unless you're crazy!
nwalkers = 10
# perturb your inital guess parameters very slightly
# to get your starting values:
p0 = [np.array(initial) + 1e-4 * np.random.randn(ndim)
for i in xrange(nwalkers)]
# initalise the sampler:
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob2, args=data)
# run a few samples as a burn-in:
print("Running burn-in")
p0, lnp, _ = sampler.run_mcmc(p0, 1000)
sampler.reset()
# take the highest likelihood point from the burn-in as a
# starting point and now begin your production run:
print("Running production")
p = p0[np.argmax(lnp)]
p0 = [p + 1e-4 * np.random.randn(ndim) for i in xrange(nwalkers)]
p0, _, _ = sampler.run_mcmc(p0, 8000)
print "Finished"
import acor
# calculate the convergence time of our
# MCMC chains:
samples = sampler.flatchain
s2 = np.ndarray.transpose(samples)
tau, mean, sigma = acor.acor(s2)
print "Convergence time from acor: ", tau
# get rid of the samples that were taken
# before convergence:
delta = int(20*tau)
samples = sampler.flatchain[delta:,:]
# extract the log likelihood for each sample:
lnl = sampler.flatlnprobability[delta:]
# find the point of maximum likelihood:
ml = samples[np.argmax(lnl),:]
# print out those values
# Note that we have unwrapped
print "ML parameters: ",
print "h: ", np.sqrt(np.exp(ml[0]))," ; lam: ",np.sqrt(np.exp(ml[1]))," ; sigma: ",np.sqrt(np.exp(ml[2]))
import corner
# Plot it.
figure = corner.corner(samples, labels=[r"$\ln(h^2)$", r"$\ln(\lambda^2)$", r"$\ln(\sigma^2)$"],
truths=ml,
quantiles=[0.16,0.5,0.84],
#levels=[0.39,0.86,0.99],
levels=[0.68,0.95,0.99],
title="Mauna Loa Data",
show_titles=True, title_args={"fontsize": 12})
q1 = corner.quantile(samples[:,0],
q=[0.16,0.5,0.84])
print "Parameter 1: ",q1[1],"(-",q1[1]-q1[0],", +",q1[2]-q1[1],")"
q2 = corner.quantile(samples[:,1],
q=[0.16,0.5,0.84])
print "Parameter 2: ",q2[1],"(-",q2[1]-q2[0],", +",q2[2]-q2[1],")"
q3 = corner.quantile(samples[:,2],
q=[0.16,0.5,0.84])
print "Parameter 3: ",q2[1],"(-",q2[1]-q2[0],", +",q2[2]-q2[1],")"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the data from *.csv file
Step2: Explore the correct data
Step3: Prepare the Data for CNN
Step4: As we can see, the number of training images for each class is different, that is why we decided to skip 3095 "happy" class images.<br>
Step5: Model 1 - Overfitting the data (TODO: the model does not actually overfit with the 35k-sample dataset)
Step6: Comments
Step7: Model 2 - 4 x Convolutional Layers, 1x Fully Connected
Step8: Comments
Step9: Comments
Step10: Class Accuracy
Step11: Feeding the CNN with some data (camera/file)
|
<ASSISTANT_TASK:>
Python Code:
import random
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import csv
import scipy.misc
import time
import collections
import os
import utils as ut
import importlib
import copy
importlib.reload(ut)
%matplotlib inline
plt.rcParams['figure.figsize'] = (20.0, 20.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Load the CSV data
emotions_dataset_dir = 'fer2013_full.csv'
#obtaining the number of line of the csv file
file = open(emotions_dataset_dir)
numline = len(file.readlines())
print ('Number of data in the dataset:',numline)
#Load the file in csv
ifile = open(emotions_dataset_dir, "rt")
reader = csv.reader(ifile)
hist_threshold = 350 # images above this threshold will be removed
hist_div = 100 #parameter of the histogram
print('Loading Images. It may take a while, depending on the database size.')
images, emotions, strange_im, num_strange, num_skipped = ut.load_dataset(reader, numline, hist_div, hist_threshold)
ifile.close()
print('Skipped', num_skipped, 'happy class images.')
print(str( len(images) ) + ' are left after \'strange images\' removal.')
print('Deleted ' + str( num_strange ) + ' strange images. Images are shown below')
# showing strange images
plt.rcParams['figure.figsize'] = (5.0, 5.0) # set default size of plots
idxs = np.random.choice(range(1, num_strange), 6, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i
plt.subplot(1, 6, plt_idx+1)
plt.imshow(strange_im[idx])
plt.axis('off')
if(i == 0):
plt.title('Some of the images removed from dataset (max(histogram) thresholded)')
plt.show()
classes = [0,1,2,3,4,5]
str_emotions = ['angry','scared','happy','sad','surprised','normal']
num_classes = len(classes)
samples_per_class = 6
plt.rcParams['figure.figsize'] = (10.0, 10.0) # set default size of plots
for y, cls in enumerate(classes):
idxs = np.flatnonzero(emotions == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(images[idx])
y_h, x_h = np.histogram( images[idx], hist_div );
plt.axis('off')
if(i == 0):
plt.title(str_emotions[y] )
plt.show()
print('number of clean data:' + str(images.shape[0]) + ' 48x48 pix , 0-255 greyscale images')
n_all = images.shape[0];
n_train = 64; # number of data for training and for batch
# dividing the input data
train_data_orig = images[0:n_all-n_train,:,:]
train_labels = emotions[0:n_all-n_train]
test_data_orig = images[n_all-n_train:n_all,:,:]
test_labels = emotions[n_all-n_train:n_all]
# Convert to float
train_data_orig = train_data_orig.astype('float32')
y_train = train_labels.astype('float32')
test_data_orig = test_data_orig.astype('float32')
y_test = test_labels.astype('float32')
print('orig train data ' + str(train_data_orig.shape))
print('orig train labels ' + str(train_labels.shape) + 'from ' + str(train_labels.min()) + ' to ' + str(train_labels.max()) )
print('orig test data ' + str(test_data_orig.shape))
print('orig test labels ' + str(test_labels.shape)+ 'from ' + str(test_labels.min()) + ' to ' + str(test_labels.max()) )
for i in range (0, 5):
print('TRAIN: number of' , i, 'labels',len(train_labels[train_labels == i]))
for i in range (0, 5):
print('TEST: number of', i, 'labels',len(test_labels[test_labels == i]))
# Data pre-processing
n = train_data_orig.shape[0];
train_data = np.zeros([n,48**2])
for i in range(n):
xx = train_data_orig[i,:,:]
xx -= np.mean(xx)
xx /= np.linalg.norm(xx)
train_data[i,:] = xx.reshape(2304); #np.reshape(xx,[-1])
n = test_data_orig.shape[0]
test_data = np.zeros([n,48**2])
for i in range(n):
xx = test_data_orig[i,:,:]
xx -= np.mean(xx)
xx /= np.linalg.norm(xx)
test_data[i] = np.reshape(xx,[-1])
#print(train_data.shape)
#print(test_data.shape)
#print(train_data_orig[0][2][2])
#print(test_data[0][2])
plt.rcParams['figure.figsize'] = (2.0, 2.0) # set default size of plots
plt.imshow(train_data[4].reshape([48,48]));
plt.title('example image after processing');
# Convert label values to one_hot vector
train_labels = ut.convert_to_one_hot(train_labels,num_classes)
test_labels = ut.convert_to_one_hot(test_labels,num_classes)
print('train labels shape',train_labels.shape)
print('test labels shape',test_labels.shape)
# Define computational graph (CG)
batch_size = n_train # batch size
d = train_data.shape[1] # data dimensionality
nc = 6 # number of classes
# Inputs
xin = tf.placeholder(tf.float32,[batch_size,d]);
y_label = tf.placeholder(tf.float32,[batch_size,nc]);
#Size and number of filters
K0 = 8 # size of the patch
F0 = 64 # number of filters
ncl0 = K0*K0*F0
Wcl0 = tf.Variable(tf.truncated_normal([K0,K0,1,F0], stddev=tf.sqrt(2./tf.to_float(ncl0)) )); print('Wcl=',Wcl0.get_shape())
bcl0 = bias_variable([F0]); print('bcl0=',bcl0.get_shape()) #in ReLu case, small positive bias added to prevent killing of gradient when input is negative.
#Reshaping the input to size 48x48
x_2d0 = tf.reshape(xin, [-1,48,48,1]); print('x_2d=',x_2d0.get_shape())
# Convolutional layer
x = tf.nn.conv2d(x_2d0, Wcl0, strides=[1, 1, 1, 1], padding='SAME')
x += bcl0; print('x2=',x.get_shape())
# ReLU activation
x = tf.nn.relu(x)
# Fully Connected layer
nfc = 48*48*F0
x = tf.reshape(x, [batch_size,-1]); print('x3=',x.get_shape())
Wfc = tf.Variable(tf.truncated_normal([nfc,nc], stddev=tf.sqrt(2./tf.to_float(nfc+nc)) )); print('Wfc=',Wfc.get_shape())
bfc = tf.Variable(tf.zeros([nc])); print('bfc=',bfc.get_shape())
y = tf.matmul(x, Wfc); print('y1=',y.get_shape())
y += bfc; print('y2=',y.get_shape())
# Softmax
y = tf.nn.softmax(y); print('y3(SOFTMAX)=',y.get_shape())
# Loss
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_label * tf.log(y), 1))
total_loss = cross_entropy
# Optimization scheme
train_step = tf.train.AdamOptimizer(0.004).minimize(total_loss)
# Accuracy
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_label,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Run Computational Graph
n = train_data.shape[0]
indices = collections.deque()
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(1001):
# Batch extraction
if len(indices) < batch_size:
indices.extend(np.random.permutation(n))
idx = [indices.popleft() for i in range(batch_size)]
batch_x, batch_y = train_data[idx,:], train_labels[idx]
#print(batch_x.shape,batch_y.shape)
    # Run the computational graph on the current training batch
_,acc_train,total_loss_o = sess.run([train_step,accuracy,total_loss], feed_dict={xin: batch_x, y_label: batch_y})
# Run CG for test set
if not i%100:
print('\nIteration i=',i,', train accuracy=',acc_train,', loss=',total_loss_o)
acc_test = sess.run(accuracy, feed_dict={xin: test_data, y_label: test_labels})
print('test accuracy=',acc_test)
# Definitions of the helper functions used by the CNN models below
d = train_data.shape[1]
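# Helper builders used by the models below: conv2d/conv2dstride2 wrap
# tf.nn.conv2d with stride 1/2, max_pool_2x2 halves the spatial resolution,
# and bias_variable starts at a small positive constant so ReLU units stay
# active early in training.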
def weight_variable2(shape, nc10):
    # He-style initialisation scaled by the supplied fan-in nc10
    initial2 = tf.random_normal(shape, stddev=tf.sqrt(2./tf.to_float(nc10)) )
    return tf.Variable(initial2)
def conv2dstride2(x,W):
return tf.nn.conv2d(x,W,strides=[1, 2, 2, 1], padding='SAME')
def conv2d(x,W):
return tf.nn.conv2d(x,W,strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=1/np.sqrt(d/2) )
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.01,shape=shape)
return tf.Variable(initial)
tf.reset_default_graph()
# Define computational graph (CG)
batch_size = n_train # batch size
d = train_data.shape[1] # data dimensionality
nc = 6 # number of classes
# Inputs
xin = tf.placeholder(tf.float32,[batch_size,d])
y_label = tf.placeholder(tf.float32,[batch_size,nc])
# Convolutional layer
K0 = 7 # size of the patch
F0 = 16 # number of filters
ncl0 = K0*K0*F0
K1 = 5 # size of the patch
F1 = 16 # number of filters
ncl0 = K1*K1*F1
K2 = 3 # size of the patch
F2 = 2 # number of filters
ncl0 = K2*K2*F2
nfc = int(48*48*F0/4)
nfc1 = int(48*48*F1/4)
nfc2 = int(48*48*F2/4)
keep_prob_input=tf.placeholder(tf.float32)
#First set of conv followed by conv stride 2 operation and dropout 0.5
W_conv1=weight_variable([K0,K0,1,F0]); print('W_conv1=',W_conv1.get_shape())
b_conv1=bias_variable([F0]); print('b_conv1=',b_conv1.get_shape())
x_2d0 = tf.reshape(xin, [-1,48,48,1]); print('x_2d0=',x_2d0.get_shape())
h_conv1=tf.nn.relu(conv2d(x_2d0,W_conv1)+b_conv1); print('h_conv1=',h_conv1.get_shape())
h_conv1= tf.nn.dropout(h_conv1,keep_prob_input);
# 2nd convolutional layer
W_conv2=weight_variable([K0,K0,F0,F0]); print('W_conv2=',W_conv2.get_shape())
b_conv2=bias_variable([F0]); print('b_conv2=',b_conv2.get_shape())
h_conv2 = tf.nn.relu(conv2d(h_conv1,W_conv2)+b_conv2); print('h_conv2=',h_conv2.get_shape())
h_conv2_pooled = max_pool_2x2(h_conv2); print('h_conv2_pooled=',h_conv2_pooled.get_shape())
# reshaping for fully connected
h_conv2_pooled_rs = tf.reshape(h_conv2_pooled, [batch_size,-1]); print('x_rs',h_conv2_pooled_rs.get_shape());
W_norm3 = weight_variable([nfc1, nfc]); print('W_norm3=',W_norm3.get_shape())
b_conv3 = bias_variable([nfc1]); print('b_conv3=',b_conv3.get_shape())
# fully connected layer
h_full3 = tf.matmul( W_norm3, tf.transpose(h_conv2_pooled_rs) ); print('h_full3=',h_full3.get_shape())
h_full3 = tf.transpose(h_full3); print('h_full3=',h_full3.get_shape())
h_full3 += b_conv3; print('h_full3=',h_full3.get_shape())
h_full3=tf.nn.relu(h_full3); print('h_full3=',h_full3.get_shape())
h_full3=tf.nn.dropout(h_full3,keep_prob_input); print('h_full3_dropout=',h_full3.get_shape())
#reshaping back to conv
h_full3_rs = tf.reshape(h_full3, [batch_size, 24,24,-1]); print('h_full3_rs=',h_full3_rs.get_shape())
#Second set of conv followed by conv stride 2 operation
W_conv4=weight_variable([K1,K1,F1,F1]); print('W_conv4=',W_conv4.get_shape())
b_conv4=bias_variable([F1]); print('b_conv4=',b_conv4.get_shape())
h_conv4=tf.nn.relu(conv2d(h_full3_rs,W_conv4)+b_conv4); print('h_conv4=',h_conv4.get_shape())
h_conv4 = max_pool_2x2(h_conv4); print('h_conv4_pooled=',h_conv4.get_shape())
# reshaping for fully connected
h_conv4_pooled_rs = tf.reshape(h_conv4, [batch_size,-1]); print('x2_rs',h_conv4_pooled_rs.get_shape());
W_norm4 = weight_variable([ 2304, nc]); print('W_norm4=',W_norm4.get_shape())
b_conv4 = tf.Variable(tf.zeros([nc])); print('b_conv4=',b_conv4.get_shape())
# fully connected layer
h_full4 = tf.matmul( h_conv4_pooled_rs, W_norm4 ); print('h_full4=',h_full4.get_shape())
h_full4 += b_conv4; print('h_full4=',h_full4.get_shape())
y = h_full4;
## Softmax
y = tf.nn.softmax(y); print('y(SOFTMAX)=',y.get_shape())
# Loss
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_label * tf.log(y), 1))
total_loss = cross_entropy
# Optimization scheme
train_step = tf.train.AdamOptimizer(0.001).minimize(total_loss)
# Accuracy
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_label,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Run Computational Graph
n = train_data.shape[0]
indices = collections.deque()
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(15001):
# Batch extraction
if len(indices) < batch_size:
indices.extend(np.random.permutation(n))
idx = [indices.popleft() for i in range(batch_size)]
batch_x, batch_y = train_data[idx,:], train_labels[idx]
#print(batch_x.shape,batch_y.shape)
    # Run the computational graph on the current training batch
_,acc_train,total_loss_o = sess.run([train_step,accuracy,total_loss], feed_dict={xin: batch_x, y_label: batch_y, keep_prob_input: 0.2})
# Run CG for test set
if not i%50:
print('\nIteration i=',i,', train accuracy=',acc_train,', loss=',total_loss_o)
acc_test = sess.run(accuracy, feed_dict = {xin: test_data, y_label: test_labels, keep_prob_input: 1.0})
print('test accuracy=',acc_test)
tf.reset_default_graph()
# implementation of Conv-Relu-COVN-RELU - pool
# based on : http://cs231n.github.io/convolutional-networks/
# Define computational graph (CG)
batch_size = n_train # batch size
d = train_data.shape[1] # data dimensionality
nc = 6 # number of classes
# Inputs
xin = tf.placeholder(tf.float32,[batch_size,d]); #print('xin=',xin,xin.get_shape())
y_label = tf.placeholder(tf.float32,[batch_size,nc]); #print('y_label=',y_label,y_label.get_shape())
#for the first conc-conv
# Convolutional layer
K0 = 8 # size of the patch
F0 = 22 # number of filters
ncl0 = K0*K0*F0
#for the second conc-conv
K1 = 4 # size of the patch
F1 = F0 # number of filters
ncl1 = K1*K1*F1
#drouput probability
keep_prob_input=tf.placeholder(tf.float32)
#1st set of conv followed by conv2d operation and dropout 0.5
W_conv1=weight_variable([K0,K0,1,F0]); print('W_conv1=',W_conv1.get_shape())
b_conv1=bias_variable([F0]); print('b_conv1=',b_conv1.get_shape())
x_2d1 = tf.reshape(xin, [-1,48,48,1]); print('x_2d1=',x_2d1.get_shape())
#conv2d
h_conv1=tf.nn.relu(conv2d(x_2d1, W_conv1) + b_conv1); print('h_conv1=',h_conv1.get_shape())
#h_conv1= tf.nn.dropout(h_conv1,keep_prob_input);
# 2nd convolutional layer + max pooling
W_conv2=weight_variable([K0,K0,F0,F0]); print('W_conv2=',W_conv2.get_shape())
b_conv2=bias_variable([F0]); print('b_conv2=',b_conv2.get_shape())
# conv2d + max pool
h_conv2 = tf.nn.relu(conv2d(h_conv1,W_conv2)+b_conv2); print('h_conv2=',h_conv2.get_shape())
h_conv2_pooled = max_pool_2x2(h_conv2); print('h_conv2_pooled=',h_conv2_pooled.get_shape())
#3rd set of conv
W_conv3=weight_variable([K0,K0,F0,F0]); print('W_conv3=',W_conv3.get_shape())
b_conv3=bias_variable([F1]); print('b_conv3=',b_conv3.get_shape())
x_2d3 = tf.reshape(h_conv2_pooled, [-1,24,24,F0]); print('x_2d3=',x_2d3.get_shape())
#conv2d
h_conv3=tf.nn.relu(conv2d(x_2d3, W_conv3) + b_conv3); print('h_conv3=',h_conv3.get_shape())
# 4th convolutional layer
W_conv4=weight_variable([K1,K1,F1,F1]); print('W_conv4=',W_conv4.get_shape())
b_conv4=bias_variable([F1]); print('b_conv4=',b_conv4.get_shape())
#conv2d + max pool 4x4
h_conv4 = tf.nn.relu(conv2d(h_conv3,W_conv4)+b_conv4); print('h_conv4=',h_conv4.get_shape())
h_conv4_pooled = max_pool_2x2(h_conv4); print('h_conv4_pooled=',h_conv4_pooled.get_shape())
h_conv4_pooled = max_pool_2x2(h_conv4_pooled); print('h_conv4_pooled=',h_conv4_pooled.get_shape())
#5th set of conv
W_conv5=weight_variable([K1,K1,F1,F1]); print('W_conv5=',W_conv5.get_shape())
b_conv5=bias_variable([F1]); print('b_conv5=',b_conv5.get_shape())
x_2d5 = tf.reshape(h_conv4_pooled, [-1,6,6,F1]); print('x_2d5=',x_2d5.get_shape())
#conv2d
h_conv5=tf.nn.relu(conv2d(x_2d5, W_conv5) + b_conv5); print('h_conv5=',h_conv5.get_shape())
# 6th convolutional layer
W_conv6=weight_variable([K1,K1,F1,F1]); print('W_conv6=',W_conv6.get_shape())
b_conv6=bias_variable([F1]); print('b_conv6=',b_conv6.get_shape())
b_conv6= tf.nn.dropout(b_conv6,keep_prob_input);
#conv2d + max pool 4x4
h_conv6 = tf.nn.relu(conv2d(h_conv5,W_conv6)+b_conv6); print('h_conv6=',h_conv6.get_shape())
h_conv6_pooled = max_pool_2x2(h_conv6); print('h_conv6_pooled=',h_conv6_pooled.get_shape())
# reshaping for fully connected
h_conv6_pooled_rs = tf.reshape(h_conv6, [batch_size,-1]); print('x2_rs',h_conv6_pooled_rs.get_shape());
W_norm6 = weight_variable([ 6*6*F1, nc]); print('W_norm6=',W_norm6.get_shape())
b_norm6 = bias_variable([nc]); print('b_conv6=',b_norm6.get_shape())
# fully connected layer
h_full6 = tf.matmul( h_conv6_pooled_rs, W_norm6 ); print('h_full6=',h_full6.get_shape())
h_full6 += b_norm6; print('h_full6=',h_full6.get_shape())
y = h_full6;
## Softmax
y = tf.nn.softmax(y); print('y3(SOFTMAX)=',y.get_shape())
# Loss
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_label * tf.log(y), 1))
total_loss = cross_entropy
# Optimization scheme
train_step = tf.train.AdamOptimizer(0.001).minimize(total_loss)
# Accuracy
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_label,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Run Computational Graph
n = train_data.shape[0]
indices = collections.deque()
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(20001):
# Batch extraction
if len(indices) < batch_size:
indices.extend(np.random.permutation(n))
idx = [indices.popleft() for i in range(batch_size)]
batch_x, batch_y = train_data[idx,:], train_labels[idx]
#print(batch_x.shape,batch_y.shape)
    # Run the computational graph on the current training batch
_,acc_train,total_loss_o = sess.run([train_step,accuracy,total_loss], feed_dict={xin: batch_x, y_label: batch_y, keep_prob_input: 0.5})
# Run CG for test set
if not i%100:
print('\nIteration i=',i,', train accuracy=',acc_train,', loss=',total_loss_o)
acc_test = sess.run(accuracy, feed_dict = {xin: test_data, y_label: test_labels, keep_prob_input: 1.0})
print('test accuracy=',acc_test)
# Add ops to save and restore all the variables.
saver = tf.train.Saver()
# Save the variables to disk.
save_path = saver.save(sess, "model_6layers.ckpt")
print("Model saved in file: %s" % save_path)
# calculating accuracy for each class separately for the test set
result_cnn = sess.run([y], feed_dict = {xin: test_data, keep_prob_input: 1.0})
#result = sess.run(y, feed_dict={xin: test_data, keep_prob_input: 1.0})
tset = test_labels.argmax(1);
result = np.asarray(result_cnn[:][0]).argmax(1);
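# Per-class accuracy: compare predicted vs. true class indices separately for
# each of the six emotions.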
for i in range (0,nc):
print('accuracy',str_emotions[i]+str(' '), '\t',ut.calc_partial_accuracy(tset, result, i))
#faces, marked_img = ut.get_faces_from_img('big_bang.png');
faces, marked_img = ut.get_faces_from_img('camera');
# if some face was found in the image
if(len(faces)):
#creating the blank test vector
data_orig = np.zeros([n_train, 48,48])
#putting face data into the vector (only first few)
for i in range(0, len(faces)):
data_orig[i,:,:] = ut.contrast_stretch(faces[i,:,:]);
#preparing image and putting it into the batch
n = data_orig.shape[0];
data = np.zeros([n,48**2])
for i in range(n):
xx = data_orig[i,:,:]
xx -= np.mean(xx)
xx /= np.linalg.norm(xx)
data[i,:] = xx.reshape(2304); #np.reshape(xx,[-1])
result = sess.run([y], feed_dict={xin: data, keep_prob_input: 1.0})
plt.rcParams['figure.figsize'] = (10.0, 10.0) # set default size of plots
for i in range(0, len(faces)):
emotion_nr = np.argmax(result[0][i]);
plt_idx = (2*i)+1;
        plt.subplot( 5, 2*len(faces)//5 + 1, plt_idx)
plt.imshow(np.reshape(data[i,:], (48,48)))
plt.axis('off')
plt.title(str_emotions[emotion_nr])
        ax = plt.subplot(5, 2*len(faces)//5 + 1, plt_idx + 1)
ax.bar(np.arange(nc) , result[0][i])
ax.set_xticklabels(str_emotions, rotation=45, rotation_mode="anchor")
ax.set_yticks([])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model using RF
Step2: Using LSTM
Step3: Create vocabulary
Step4: Create embeeding
Step5: Create the network
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
data = pd.read_csv('https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/datasets/phishing.csv')
data.head()
data.tail()
keywords = ['https', 'login', '.php', '.html', '@', 'sign']
for keyword in keywords:
data['keyword_' + keyword] = data.url.str.contains(keyword).astype(int)
data['lenght'] = data.url.str.len() - 2
domain = data.url.str.split('/', expand=True).iloc[:, 2]
data['lenght_domain'] = domain.str.len()
data['isIP'] = (domain.str.replace('.', '') * 1).str.isnumeric().astype(int)
data['count_com'] = data.url.str.count('com')
X = data.drop(['url', 'phishing'], axis=1)
y = data.phishing
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
clf = RandomForestClassifier(n_jobs=-1, n_estimators=100)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print((y_pred == y_test).mean())
import tensorflow as tf
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
import keras
from keras import backend as K
print(K.tensorflow_backend._get_available_gpus())
from keras.models import Sequential
from keras.layers.recurrent import LSTM
from keras.layers.core import Dense, Dropout
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
!pip install livelossplot
from livelossplot import PlotLossesKeras
%matplotlib inline
X = data['url'].tolist()
# Build the vocabulary from the set of characters observed in the URLs, to avoid out-of-vocabulary issues from data collection
voc = set(''.join(X))
vocabulary = {x: idx + 1 for idx, x in enumerate(set(voc))}
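# Index 0 is reserved for padding (pad_sequences pads with zeros), so
# characters are mapped to 1..len(voc).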
# Max len
max_url_len = 150
X = [x[:max_url_len] for x in X]
# Convert characters to int and pad
X = [[vocabulary[x1] for x1 in x if x1 in vocabulary.keys()] for x in X]
len(X)
X_pad = sequence.pad_sequences(X, maxlen=max_url_len)
X_pad
X_train, X_test, y_train, y_test = train_test_split(X_pad, y, test_size=0.33, random_state=42)
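# Character-level classifier: embed each URL character, run an LSTM over the
# sequence, and score phishing vs. benign with a single sigmoid unit.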
model = Sequential()
model.add(Embedding(len(vocabulary) + 1, 128, input_length=max_url_len))
model.add(LSTM(32))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.summary()
model.fit(X_train, y_train, validation_data=[X_test, y_test],
batch_size=128, epochs=10, verbose=1,
callbacks=[PlotLossesKeras()])
y_pred = model.predict_classes(X_test)[:,0]
print((y_pred == y_test).mean())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Mmr
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccma', 'sandbox-2', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Add your custom code to read_csv_lines for processing your datafile
Step2: Code to connect to BigInsights on Cloud via WebHDFS - don't change this
|
<ASSISTANT_TASK:>
Python Code:
# Cluster number, e.g. 100000
cluster = ''
# Cluster username
username = ''
# Cluster password
password = ''
# file path in HDFS
webhdfs_filepath = 'yourpath/yourfile.txt'
# where to save the file in the spark service file system
local_filepath = 'yourfile.txt'
host = 'ehaasp-{0}-mastermanager.bi.services.bluemix.net'.format(cluster)
import requests
import sys
import datetime
print('READ FILE START: {0}'.format(datetime.datetime.now()))
chunk_size = 200000000 # Read in 200 MB chunks
url = "https://{0}:8443/gateway/default/webhdfs/v1/{1}?op=OPEN".format(host, webhdfs_filepath)
# note SSL verification has been disabled
r = requests.get(url,
auth=(username, password),
verify=False,
allow_redirects=True,
stream=True)
chunk_num = 1
with open(local_filepath, 'wb') as f:
for chunk in r.iter_content(chunk_size):
if chunk: # filter out keep-alive new chunks
print('{0} writing chunk {1}'.format(datetime.datetime.now(), chunk_num))
f.write(chunk)
chunk_num = chunk_num + 1
print('READ FILE END: {0}'.format(datetime.datetime.now()))
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
df = sqlContext.read.format('com.databricks.spark.csv') \
.options(header='false', inferschema='true', delimiter='|') \
.load(local_filepath)
df.cache()
df.show()
df.count()
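# Step 1 of the task mentions a custom `read_csv_lines` hook for processing the
# datafile, which never appears above. A minimal sketch (assumption: the hook
# receives one raw line of the pipe-delimited file and returns its fields):
def read_csv_lines(line):
    # customize this parsing logic for your datafile
    return line.split('|')
# Example usage on the locally saved copy (prints just the first record):
with open(local_filepath) as f:
    print(read_csv_lines(f.readline().rstrip('\n')))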
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Flux Correction
Step7: 3. Key Properties --> Genealogy
Step8: 3.2. CMIP3 Parent
Step9: 3.3. CMIP5 Parent
Step10: 3.4. Previous Name
Step11: 4. Key Properties --> Software Properties
Step12: 4.2. Code Version
Step13: 4.3. Code Languages
Step14: 4.4. Components Structure
Step15: 4.5. Coupler
Step16: 5. Key Properties --> Coupling
Step17: 5.2. Atmosphere Double Flux
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Step19: 5.4. Atmosphere Relative Winds
Step20: 6. Key Properties --> Tuning Applied
Step21: 6.2. Global Mean Metrics Used
Step22: 6.3. Regional Metrics Used
Step23: 6.4. Trend Metrics Used
Step24: 6.5. Energy Balance
Step25: 6.6. Fresh Water Balance
Step26: 7. Key Properties --> Conservation --> Heat
Step27: 7.2. Atmos Ocean Interface
Step28: 7.3. Atmos Land Interface
Step29: 7.4. Atmos Sea-ice Interface
Step30: 7.5. Ocean Seaice Interface
Step31: 7.6. Land Ocean Interface
Step32: 8. Key Properties --> Conservation --> Fresh Water
Step33: 8.2. Atmos Ocean Interface
Step34: 8.3. Atmos Land Interface
Step35: 8.4. Atmos Sea-ice Interface
Step36: 8.5. Ocean Seaice Interface
Step37: 8.6. Runoff
Step38: 8.7. Iceberg Calving
Step39: 8.8. Endoreic Basins
Step40: 8.9. Snow Accumulation
Step41: 9. Key Properties --> Conservation --> Salt
Step42: 10. Key Properties --> Conservation --> Momentum
Step43: 11. Radiative Forcings
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Step45: 12.2. Additional Information
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Step47: 13.2. Additional Information
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Step49: 14.2. Additional Information
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Step51: 15.2. Additional Information
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Step53: 16.2. Additional Information
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Step55: 17.2. Equivalence Concentration
Step56: 17.3. Additional Information
Step57: 18. Radiative Forcings --> Aerosols --> SO4
Step58: 18.2. Additional Information
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Step60: 19.2. Additional Information
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Step62: 20.2. Additional Information
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Step64: 21.2. Additional Information
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Step66: 22.2. Aerosol Effect On Ice Clouds
Step67: 22.3. Additional Information
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Step69: 23.2. Aerosol Effect On Ice Clouds
Step70: 23.3. RFaci From Sulfate Only
Step71: 23.4. Additional Information
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Step73: 24.2. Additional Information
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Step77: 25.4. Additional Information
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Step81: 26.4. Additional Information
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Step83: 27.2. Additional Information
Step84: 28. Radiative Forcings --> Other --> Land Use
Step85: 28.2. Crop Change Only
Step86: 28.3. Additional Information
Step87: 29. Radiative Forcings --> Other --> Solar
Step88: 29.2. Additional Information
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'dwd', 'mpi-esm-1-2-hr', 'toplevel')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Given two tensors x and y of the same shape, compute element-wise the value with the larger absolute value, preserving its sign (a signed maximum by magnitude).
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import torch
# `load_data` is assumed to be provided elsewhere and to return two torch
# tensors of equal shape.
x, y = load_data()
maxs = torch.max(torch.abs(x), torch.abs(y))
# Mark which input supplied each maximum and record its sign (-1, 0 or 1).
xSigns = (maxs == torch.abs(x)) * torch.sign(x)
ySigns = (maxs == torch.abs(y)) * torch.sign(y)
# Bitwise-or merges the sign masks; on ties with opposite signs, -1 wins.
finalSigns = xSigns.int() | ySigns.int()
# Recombine magnitude and sign to get the signed element-wise maximum.
signed_max = maxs * finalSigns
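# Quick self-check with made-up tensors (illustrative only; `demo_x`/`demo_y`
# are not part of the original task):
demo_x = torch.tensor([1.0, -4.0, 3.0])
demo_y = torch.tensor([-2.0, 3.0, -3.0])
demo_maxs = torch.max(torch.abs(demo_x), torch.abs(demo_y))
demo_signs = (((demo_maxs == torch.abs(demo_x)) * torch.sign(demo_x)).int()
              | ((demo_maxs == torch.abs(demo_y)) * torch.sign(demo_y)).int())
print(demo_maxs * demo_signs)  # expected: tensor([-2., -4., -3.])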
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What is Numba?
Step2: Now, test the function, first with two scalar integers
Step3: 1b) With Numpy, we can use our function to add not just scalars, but vectors as well. Using your favorite array creation routine, create two integer arrays with ten elements each, called a and b, and use your add function to add them.
Step4: Okay, so our function can add things. Now, let's use Numba's jit function to create a Numba version of our addition function
Step5: More commonly, you will use jit as a decorator, by adding @jit to the line above your function definition, but the above version shows you that at heart, @jit is just a python function that takes other functions as its argument!
Step6: Profiling
Step7: What's going on here? %timeit is running our function many times, and then reporting the average time it takes to run. This is generally a better approach than timing a single function execution, because it accounts for random events that may cause any given run to perform poorly.
Step8: So, scalars are faster than arrays (makes sense), and python's addition function is better than ours (seems reasonable). Now, let's see how fast our pre-compiled Numba addition function is.
Step9: Hold on - our new pre-compiled function is running even slower than the original python version! What's going on here?
Step10: 2b) Use the %timeit function to determine whether the Numba version of the hypotenuse function is better than the original Python implementation.
Step11: 2c) Numba functions can call other functions, provided they are also Numba functions. Below is a function that loops through two numpy arrays and puts their sum into an output array. Modify the following function to calculate the hypotenuse instead.
Step13: Problem 3 - Fun with Fractals
Step14: Want to see what you made? Run the following code to plot the image. Feel free to pick your favorite matplotlib color map
Step16: 3b) There is more than one type of fractal in the world, however! Below is a function that determines membership in the Mandelbrot set. Modify the function to take advantage of Numba, then modify the code above to produce a new pretty picture.
Step17: Problem 4 - Typing
Step18: This tells us the array contains integers, and each integer has been assigned 64bits in memory (or equivalently, 8 bytes). Most python functions are defined to work on arbitrary types, so that if you use the + operator, for example, you can add integers, floats, complex numbers, or even strings! However, this flexibility comes at a cost, performance-wise. Numba, on the other hand, compiles each function based on the types of its arguments, and infers the type of the result. You can see this if you run the inspect_types function on a numba function
Step19: 4a) Numba has inferred the types for this function based on how we've used it. Try out your numba_add function with two floating point numbers, then re-inspect the types of the Numba function. Are they the same?
Step20: So far we have been using what Numba refers to as "lazy" (or "call-time") decoration. Basically, we've been letting Numba do the work of figuring out how we're using the function and inferring the types for us. Alternatively, if we know how we are going to use a given function, we can use "eager" (or "compile-time") decoration. To do this, we make use of the vectorize decorator. For example, if we want to make an integer-only version of our addition function, we could write
Step21: Numpy Universal Functions
Step22: 4b) Try your ufunc out with a new target, 'parallel'. How does the speed compare? What if the array size is much larger?
Step25: Problem 5 - Direct Summation
Step27: Next, we define a function to calculate the particle interaction via direct summation
Step28: All that's left is to create a list of random particles with assigned masses
Step29: 5a) Run the direct summation code and determine how long it takes with 10, 100, 1000 particles. Is there a relationship?
Step30: Because this is an $\mathcal{O} \left(n^2 \right)$ function, every 10x increase in the number of particles costs 100x more in computational time. Given how expensive it is, let's see if we can improve things a bit with Numba. (This also happens to be a highly parallelizable problem, but we'll get to that tomorrow.)
Step31: You can access an individual "attribute" like this
Step32: 5b) Write a jit function create_n_random_particles that takes the arguments n (number of particles), m (mass of every particle) and a domain within which to generate a random number (as in the class above).
Step33: Now we'll create our array of particles using the new function.
Step34: We don't have a distance method anymore, so we need to write a function to take care of that.
Step35: 5d) Modify the direct_sum function above to instead work on a NumPy array of particles. Loop over each element in the array and calculate its total potential. Time the result and compare it to your previous version of this function.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pylab as plt
import numpy as np
import math
from numba import jit, njit, vectorize
def add(x, y):
return x + y # add code here
add(1, 2) # add code here
a = np.arange(0,10) # add code here
b = np.arange(1,11) # add code here
add(a, b) # add code here
numba_add = jit(add)
print(add(a,b)) #add code here
print(numba_add(a,b)) #add code here
print(numba_add.py_func(a,b)) #add code here
%timeit add(1,2)
%timeit add(1,2) # add code here
%timeit add(a, b) # add code here
%timeit a + b # add code here
%timeit numba_add(a,b) # add code here
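# Why did the jitted version look slow above? The first call to a @jit function
# triggers compilation for the argument types it sees; later calls reuse the
# cached machine code. A rough illustration (timings are machine-dependent):
import time
fresh_add = jit(add)  # a fresh dispatcher that has not compiled anything yet
t0 = time.time(); fresh_add(a, b); t1 = time.time(); fresh_add(a, b); t2 = time.time()
print('first call (includes compile): %.6f s' % (t1 - t0))
print('second call (already compiled): %.6f s' % (t2 - t1))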
@jit # add code here
def hypotenuse(x, y):
x = abs(x);
y = abs(y);
t = min(x, y);
x = max(x, y);
t = t / x;
return x * math.sqrt(1+t*t)
%timeit hypotenuse(3,4) # add code here
%timeit hypotenuse.py_func(3,4) # add code here
@njit # this is an alias for @jit(nopython=True)
def ex_func(x, y, out):
for i in range(x.shape[0]):
out[i] = hypotenuse(x[i], y[i]) # change this line
in1 = np.arange(10, dtype=np.float64)
in2 = 2 * in1 + 1
out = np.empty_like(in1)
print('in1:', in1)
print('in2:', in2)
ex_func(in1, in2, out)
print('out:', out)
# This test will fail until you fix the ex1 function
np.testing.assert_almost_equal(out, np.hypot(in1, in2))
@njit # add code here
def julia(x, y, max_iters):
    """Given the real and imaginary parts of a complex number,
    determine if it is a candidate for membership in the Julia
    set given a fixed number of iterations.
    """
i = 0
c = complex(-0.8, 0.156)
a = complex(x,y)
for i in range(max_iters):
a = a*a + c
if (a.real*a.real + a.imag*a.imag) > 1000:
return 0
return 255
@njit # add code here
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
for x in range(width):
real = min_x + x * pixel_size_x
for y in range(height):
imag = min_y + y * pixel_size_y
color = julia(real, imag, iters)
image[y, x] = color
return image
image = np.zeros((500, 750), dtype=np.uint8)
%timeit create_fractal(-2.0, 2.0, -1.0, 1.0, image, 200)
plt.imshow(image)
plt.viridis()
plt.show()
@njit #add code here
def mandel(x, y, max_iters):
    """Given the real and imaginary parts of a complex number,
    determine if it is a candidate for membership in the Mandelbrot
    set given a fixed number of iterations.
    """
i = 0
c = complex(x,y)
z = 0.0j
for i in range(max_iters):
z = z*z + c
if (z.real*z.real + z.imag*z.imag) >= 4:
return i
return 255
# add code here
@njit
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
for x in range(width):
real = min_x + x * pixel_size_x
for y in range(height):
imag = min_y + y * pixel_size_y
color = mandel(real, imag, iters)
image[y, x] = color
return image
image = np.zeros((500, 750), dtype=np.uint8)
create_fractal(-2.0, 2.0, -1.0, 1.0, image, 200)
plt.imshow(image)
plt.viridis()
plt.show()
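# Because the jitted create_fractal is fast, zooming in is cheap. The window
# below is an arbitrary region chosen for illustration; try your own bounds:
zoom = np.zeros((500, 750), dtype=np.uint8)
create_fractal(-0.74877, -0.74872, 0.065053, 0.065103, zoom, 200)
plt.imshow(zoom)
plt.viridis()
plt.show()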
a.dtype
numba_add.inspect_types()
#Add code here
numba_add(3.,4.)
numba_add.inspect_types()
@vectorize(['int64(int64, int64)'], target='cpu')
def add_ufunc(x, y):
return x + y
%timeit add_ufunc(a,b)
big_array = np.arange(0,1000000) # add code here
%timeit add_ufunc(big_array, big_array) # add code here
@vectorize(['int64(int64, int64)'], target='parallel')
def add_ufunc(x, y):
return x + y
%timeit add_ufunc(big_array,big_array)
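# A ufunc can be compiled eagerly for several signatures at once; dispatch then
# picks the matching loop (a sketch following the same pattern as above):
@vectorize(['int64(int64, int64)', 'float64(float64, float64)'], target='cpu')
def add_ufunc_multi(x, y):
    return x + y
print(add_ufunc_multi(a, b))       # uses the int64 loop
print(add_ufunc_multi(1.5, 2.25))  # uses the float64 loop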
class Point():
    """Arguments:
        domain: the domain of random generated coordinates x,y,z,
                default=1.0
    Attributes:
        x, y, z: coordinates of the point
    """
def __init__(self, domain=1.0):
self.x = domain * np.random.random()
self.y = domain * np.random.random()
self.z = domain * np.random.random()
def distance(self, other):
return ((self.x - other.x)**2 +
(self.y - other.y)**2 +
(self.z - other.z)**2)**.5
class Particle(Point):
    """Attributes:
        m: mass of the particle
        phi: the potential of the particle
    """
def __init__(self, domain=1.0, m=1.0):
Point.__init__(self, domain)
self.m = m
self.phi = 0.
def direct_sum(particles):
    """Calculate the potential at each particle
    using direct summation method.
    Arguments:
        particles: the list of particles
    """
for i, target in enumerate(particles):
for source in (particles[:i] + particles[i+1:]):
r = target.distance(source)
target.phi += source.m / r
n = 1000
particles = [Particle(m = 1 / n) for i in range(n)]
%timeit direct_sum(particles)
particle_dtype = np.dtype({'names':['x','y','z','m','phi'],
'formats':[np.double,
np.double,
np.double,
np.double,
np.double]})
myarray = np.ones(3, dtype=particle_dtype)
myarray
myarray[0]['x'] = 2.0
@njit
def create_n_random_particles(n, m, domain=1):
'''
Creates `n` particles with mass `m` with random coordinates
between 0 and `domain`
'''
parts = np.zeros((n), dtype=particle_dtype)
#attribute access only in @jitted function
for p in parts:
p.x = np.random.random() * domain
p.y = np.random.random() * domain
p.z = np.random.random() * domain
p.m = m
p.phi = 0
return parts
particles = create_n_random_particles(1000, .001, 1)
particles[:3]
@njit
def distance(part1, part2):
'''calculate the distance between two particles'''
return ((part1.x - part2.x)**2 +
(part1.y - part2.y)**2 +
(part1.z - part2.z)**2)**.5
distance(particles[0], particles[1])
%%timeit
distance(particles[0], particles[1])
@njit
def direct_sum(particles):
for i, target in enumerate(particles):
for j, source in enumerate(particles):
if i != j:
r = distance(target, source)
target.phi += source.m / r
return particles
%timeit direct_sum(particles)
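# Optional sanity check with just two particles: the potential at each one
# should be exactly m / r (uses only the helpers defined above):
pair = create_n_random_particles(2, .001, 1)
pair = direct_sum(pair)
r = distance(pair[0], pair[1])
np.testing.assert_allclose(pair['phi'], [pair['m'][1] / r, pair['m'][0] / r])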
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Init
Step2: Experimental design
Step3: Pre-fractionation communities
Step4: Note
Step6: Simulating fragments
Step7: Simulation
Step8: Plotting fragments
Step9: Note
Step10: Note
Step11: Plotting fragment distribution w/ and w/out diffusion
Step12: Plotting
Step13: Adding diffusive boundary layer (DBL) effects
Step14: Adding isotope incorporation
Step15: Note
Step16: Making an OTU table
Step17: Plotting fragment count distributions
Step18: Notes
Step19: Adding effects of PCR
Step20: Notes
Step21: Notes
Step22: Notes
Step23: SIP metadata
Step24: Other SIPSim commands
Step25: Other SIPSim-associated commands
|
<ASSISTANT_TASK:>
Python Code:
workDir = '/home/nick/t/SIPSim/'
nprocs = 3
import os
import glob
%load_ext rpy2.ipython
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
if not os.path.isdir(workDir):
os.makedirs(workDir)
%cd $workDir
genomeDir = os.path.join(workDir, 'genomes_rn')
# this file
!SIPSim incorpConfigExample \
--percTaxa 34 \
--percIncorpUnif 50 \
--n_reps 1 \
> incorp.config
!cat incorp.config
!SIPSim communities \
--config incorp.config \
./genomes_rn/genome_index.txt \
> comm.txt
!cat comm.txt
!SIPSim gradient_fractions \
--BD_min 1.67323 \
--BD_max 1.7744 \
comm.txt \
> fracs.txt
!head -n 6 fracs.txt
primers = """>515F
GTGCCAGCMGCCGCGGTAA
>806R
GGACTACHVGGGTWTCTAAT
"""
F = os.path.join(workDir, '515F-806R.fna')
with open(F, 'wb') as oFH:
oFH.write(primers)
print 'File written: {}'.format(F)
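# Quick check that the primer file was written as expected:
!cat $F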
# skewed-normal
!SIPSim fragments \
$genomeDir/genome_index.txt \
--fp $genomeDir \
--fr 515F-806R.fna \
--fld skewed-normal,9000,2500,-5 \
--flr None,None \
--nf 10000 \
--np $nprocs \
--tbl \
> ampFrags.txt
!head -n 5 ampFrags.txt
%%R -w 700 -h 350
df = read.delim('ampFrags.txt')
ggplot(df, aes(fragGC, fragLength, color=taxon_name)) +
geom_density2d() +
scale_color_discrete('Taxon') +
labs(x='Fragment G+C', y='Fragment length (bp)') +
theme_bw() +
theme(
text = element_text(size=16)
)
!SIPSim fragment_KDE \
ampFrags.txt \
> ampFrags_kde.pkl
!SIPSim diffusion \
ampFrags_kde.pkl \
--np $nprocs \
> ampFrags_kde_dif.pkl
n = 100000
!SIPSim KDE_sample -n $n ampFrags_kde.pkl > ampFrags_kde.txt
!SIPSim KDE_sample -n $n ampFrags_kde_dif.pkl > ampFrags_kde_dif.txt
!ls -thlc ampFrags_kde*.txt
%%R
df1 = read.delim('ampFrags_kde.txt', sep='\t')
df2 = read.delim('ampFrags_kde_dif.txt', sep='\t')
df1$data = 'no diffusion'
df2$data = 'diffusion'
df = rbind(df1, df2) %>%
gather(Taxon, BD, Clostridium_ljungdahlii_DSM_13528,
Escherichia_coli_1303, Streptomyces_pratensis_ATCC_33331) %>%
mutate(Taxon = gsub('_(ATCC|DSM)', '\n\\1', Taxon))
df %>% head(n=3)
%%R -w 800 -h 300
ggplot(df, aes(BD, fill=data)) +
geom_density(alpha=0.25) +
facet_wrap( ~ Taxon) +
scale_fill_discrete('') +
theme_bw() +
theme(
text=element_text(size=16),
axis.title.y = element_text(vjust=1),
axis.text.x = element_text(angle=50, hjust=1)
)
!SIPSim DBL \
ampFrags_kde_dif.pkl \
--np $nprocs \
> ampFrags_kde_dif_DBL.pkl
# viewing DBL logs
!ls -thlc *pkl
!SIPSim isotope_incorp \
--comm comm.txt \
--np $nprocs \
ampFrags_kde_dif_DBL.pkl \
incorp.config \
> ampFrags_KDE_dif_DBL_inc.pkl
!ls -thlc *.pkl
%%R
df = read.delim('BD-shift_stats.txt', sep='\t')
df
!SIPSim OTU_table \
--abs 1e7 \
--np $nprocs \
ampFrags_KDE_dif_DBL_inc.pkl \
comm.txt \
fracs.txt \
> OTU.txt
!head -n 7 OTU.txt
%%R -h 350 -w 750
df = read.delim('OTU.txt', sep='\t')
p = ggplot(df, aes(BD_mid, count, fill=taxon)) +
geom_area(stat='identity', position='dodge', alpha=0.5) +
scale_x_continuous(expand=c(0,0)) +
labs(x='Buoyant density') +
labs(y='Amplicon fragment counts') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank()
)
p
%%R -h 350 -w 750
p = ggplot(df, aes(BD_mid, count, fill=taxon)) +
geom_area(stat='identity', position='fill') +
scale_x_continuous(expand=c(0,0)) +
scale_y_continuous(expand=c(0,0)) +
labs(x='Buoyant density') +
labs(y='Amplicon fragment counts') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank()
)
p
!SIPSim OTU_PCR OTU.txt > OTU_PCR.txt
!head -n 5 OTU_PCR.txt
!SIPSim OTU_subsample OTU_PCR.txt > OTU_PCR_sub.txt
!head -n 5 OTU_PCR_sub.txt
%%R -h 350 -w 750
df = read.delim('OTU_PCR_sub.txt', sep='\t')
p = ggplot(df, aes(BD_mid, rel_abund, fill=taxon)) +
geom_area(stat='identity', position='fill') +
scale_x_continuous(expand=c(0,0)) +
scale_y_continuous(expand=c(0,0)) +
labs(x='Buoyant density') +
labs(y='Taxon relative abundances') +
facet_grid(library ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank()
)
p
!SIPSim OTU_wideLong -w \
OTU_PCR_sub.txt \
> OTU_PCR_sub_wide.txt
!head -n 4 OTU_PCR_sub_wide.txt
!SIPSim OTU_sampleData \
OTU_PCR_sub.txt \
> OTU_PCR_sub_meta.txt
!head OTU_PCR_sub_meta.txt
!SIPSim -l
!SIPSimR -l
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's start by creating the grid that we'll do the first part of this exercise with, and putting some data into its fields. Note that you need to create the fields that a component takes as inputs before instantiating a component - though you can put values into the arrays later if you need to (as illustrated below). For more info on working with fields, see the fields tutorial.
Step2: How did we know this was a field we needed as an input? Well, firstly because we read the component documentation (always do this!), but secondly we can get a reminder using the Landlab Component Standard Interface
Step3: Note we didn't have to instantiate the component to be able to do this! Other standard properties are output_var_names and optional_var_names; pass an input or output name to var_loc, var_type, var_units, and var_definition to get the centering ('node', 'link', etc.), array dtype (float, int), units (meters, etc.), and a descriptive string, respectively. var_help will give you a lot of this information at once
Step4: It's also a good idea to set the grid boundary conditions before component instantiation. Let's have fixed value top and bottom and closed left and right (see the boundary conditions tutorial)
Step5: You will find that all components within landlab share a similar interface. We'll examine how it looks first on the diffusion component.
Step6: We'll see some other ways of initializing (e.g., from an input file) below.
Step7: Note that we're using z to input the uplift here, which we already bound to the Landlab field mg.at_node['topographic__elevation'] when we instantiated that field. This works great, but always be careful to update the values inside the array, not to reset the variable as equal to something else, i.e., to put new values in the field do z[:] = new_values, not z = new_values.
Step8: Now, let's repeat this exercise, but illustrating the way we can input fields as some parameters for components. We're going to make the diffusivity spatially variable, falling by a factor of ten as we move across the grid.
Step9: Now we re-initialize the component instance to bind the k_diff field to the component
Step10: ...and run just as before. Note this will be slower than before; the internal timestep is shorter because we've modified the diffusivities.
Step11: Running two or more components
Step12: More components means more input parameters. So this time, we're going to make our lives easier by instantiating our components from an input file. Note also that we've now switched length units to km from m.
Step13: Now instantiate the grid, set the initial conditions, and set the boundary conditions
Step14: So far, so familiar.
Step15: What's happening here is that the component is looking inside the dictionary for any keys that match its keywords, and using them. Values in the dictionary will override component defaults, but note that you cannot provide a keyword manually that is also defined in a supplied dictionary, i.e., this would result in a TypeError
Step16: You'll need to give the above code a few seconds to run.
Step17: And now let's reset the grid elevations and do everything again, but this time, with the diffusion turned on
Step18: Beautiful! We've smoothed away the fine-scale channel roughness, as expected, and produced some lovely convex-up hillslopes in its place. Note that even though the initial conditions were identical in both cases, including the roughness, the channel positions have been moved significantly by the hillslope diffusion into the channel.
Step19: Instantiate the storm generator. This time, we're going to mix an input file for some components with manual definition of others (that we already defined above).
Step20: Now run
Step21: Note that the "wobbles" in the long profile here are being created by the stochastic storm sequence. We could reduce their impact by reducing the storm-interstorm timescales, or allowing diffusion while it's not raining, but we've chosen not to here to show that the storms are having an effect.
Step22: We can also plot the location of the channels in map view.
Step23: Next we make a slope area plot.
|
<ASSISTANT_TASK:>
Python Code:
from landlab.components import LinearDiffuser
from landlab.plot import imshow_grid
from landlab import RasterModelGrid
import matplotlib as mpl
import matplotlib.cm as cm
from matplotlib.pyplot import figure, show, plot, xlabel, ylabel, title
import numpy as np
mg = RasterModelGrid((80, 80), xy_spacing=5.)
z = mg.add_zeros('topographic__elevation', at="node")
LinearDiffuser.input_var_names
LinearDiffuser.var_help('topographic__elevation')
for edge in (mg.nodes_at_left_edge, mg.nodes_at_right_edge):
mg.status_at_node[edge] = mg.BC_NODE_IS_CLOSED
for edge in (mg.nodes_at_top_edge, mg.nodes_at_bottom_edge):
mg.status_at_node[edge] = mg.BC_NODE_IS_FIXED_VALUE
lin_diffuse = LinearDiffuser(mg, linear_diffusivity=0.2)
total_t = 200000.
dt = 1000.
uplift_rate = 0.001
nt = int(total_t // dt)
# ^note if we didn't know a priori that there are a round number of steps dt in the
# total time, we'd have to take care to account for the "extra" time (see example below)
for i in range(nt):
lin_diffuse.run_one_step(dt)
z[mg.core_nodes] += uplift_rate * dt # add the uplift
# add some output to let us see we aren't hanging:
if i % 50 == 0:
print(i*dt)
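# If total_t were not an exact multiple of dt, one common pattern is to mop up
# the remainder with a single short extra step (sketch; here extra_time is 0):
extra_time = total_t - nt * dt
if extra_time > 0.:
    lin_diffuse.run_one_step(extra_time)
    z[mg.core_nodes] += uplift_rate * extra_time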
# the following line makes figures show up correctly in this document (only needed for Jupyter notebook)
%matplotlib inline
# Create a figure and plot the elevations
figure(1)
im = imshow_grid(mg, 'topographic__elevation', grid_units = ['m','m'],
var_name='Elevation (m)')
figure(2)
elev_rast = mg.node_vector_to_raster(z)
ycoord_rast = mg.node_vector_to_raster(mg.node_y)
ncols = mg.number_of_node_columns
im = plot(ycoord_rast[:, int(ncols // 2)], elev_rast[:, int(ncols // 2)])
xlabel('horizontal distance (m)')
ylabel('vertical distance (m)')
title('topographic__elevation cross section')
z[:] = 0. # reset the elevations to zero
k_diff = mg.zeros('node', dtype=float)
k_diff.fill(1.)
k_diff *= (mg.node_x.max() - 0.9 * mg.x_of_node) / mg.x_of_node.max()
k_field = mg.add_field('linear_diffusivity', k_diff, at='node', clobber=True)
imshow_grid(mg, k_diff, var_name='k_diff', cmap='winter') # check it looks good
lin_diffuse = LinearDiffuser(mg, linear_diffusivity='linear_diffusivity')
# we could also have passed in `k_diff` in place of the string
for i in range(nt):
lin_diffuse.run_one_step(dt)
z[mg.core_nodes] += uplift_rate * dt # add the uplift
# add some output to let us see we aren't hanging:
if i % 50 == 0:
print(i*dt)
figure(3)
im = imshow_grid(mg, 'topographic__elevation', grid_units = ['m','m'],
var_name='Elevation (m)')
from landlab.components import FlowAccumulator, FastscapeEroder
from landlab import load_params
input_file = './coupled_params.txt'
inputs = load_params(input_file) # load the data into a dictionary
nrows = inputs['nrows']
ncols = inputs['ncols']
dx = inputs['dx']
uplift_rate = inputs['uplift_rate']
total_t = inputs['total_time']
dt = inputs['dt']
nt = int(total_t // dt) #this is how many loops we'll need
uplift_per_step = uplift_rate * dt
# illustrate what the MPD looks like:
print(inputs)
mg = RasterModelGrid((nrows, ncols), dx)
z = mg.add_zeros('topographic__elevation', at='node')
# add some roughness, as this lets "natural" channel planforms arise
initial_roughness = np.random.rand(z.size) / 100000.
z += initial_roughness
for edge in (mg.nodes_at_left_edge, mg.nodes_at_right_edge):
mg.status_at_node[edge] = mg.BC_NODE_IS_CLOSED
for edge in (mg.nodes_at_top_edge, mg.nodes_at_bottom_edge):
mg.status_at_node[edge] = mg.BC_NODE_IS_FIXED_VALUE
fr = FlowAccumulator(mg)
sp = FastscapeEroder(mg, **inputs['stream_power'])
lin_diffuse = LinearDiffuser(mg, **inputs['linear_diffuser'])
for i in range(nt):
# lin_diffuse.run_one_step(dt) no diffusion this time
fr.run_one_step() # run_one_step isn't time sensitive, so it doesn't take dt as input
sp.run_one_step(dt)
mg.at_node['topographic__elevation'][mg.core_nodes] += uplift_per_step # add the uplift
if i % 20 == 0:
print ('Completed loop %d' % i)
figure('topo without diffusion')
imshow_grid(mg, 'topographic__elevation', grid_units=['km','km'], var_name='Elevation (km)')
z[:] = initial_roughness
for i in range(nt):
lin_diffuse.run_one_step(dt) # no diffusion this time
fr.run_one_step() # run_one_step isn't time sensitive, so it doesn't take dt as input
sp.run_one_step(dt)
mg.at_node['topographic__elevation'][mg.core_nodes] += uplift_per_step # add the uplift
if i % 20 == 0:
print ('Completed loop %d' % i)
figure('topo with diffusion')
imshow_grid(mg, 'topographic__elevation', grid_units=['km','km'],
var_name='Elevation (km)')
from landlab.components import ChannelProfiler, PrecipitationDistribution
from matplotlib.pyplot import loglog
z[:] = initial_roughness
dt = 0.1
total_t = 250.
storm_inputs = load_params('./coupled_params_storms.txt')
precip = PrecipitationDistribution(total_t=total_t, delta_t=dt, **storm_inputs)
print(storm_inputs)
# make a color mapping appropriate for our time duration
norm = mpl.colors.Normalize(vmin=0, vmax=total_t)
map_color = cm.ScalarMappable(norm=norm, cmap="viridis")
out_interval = 20.
last_trunc = total_t # we use this to trigger taking an output plot
for (interval_duration, rainfall_rate) in precip.yield_storm_interstorm_duration_intensity():
if rainfall_rate > 0.:
# note diffusion also only happens when it's raining...
fr.run_one_step()
sp.run_one_step(interval_duration)
lin_diffuse.run_one_step(interval_duration)
z[mg.core_nodes] += uplift_rate * interval_duration
this_trunc = precip.elapsed_time // out_interval
if this_trunc != last_trunc: # time to plot a new profile!
print('made it to time %d' % (out_interval * this_trunc))
last_trunc = this_trunc
figure("long_profiles")
# get and plot the longest profile
cp = ChannelProfiler(mg)
cp.run_one_step()
cp.plot_profiles(color = map_color.to_rgba(precip.elapsed_time))
# no need to track elapsed time, as the generator will stop automatically
# make the figure look nicer:
figure("long_profiles")
xlabel('Distance upstream (km)')
ylabel('Elevation (km)')
title('Long profiles evolving through time')
mpl.pyplot.colorbar(map_color)
figure('topo with diffusion and storms')
imshow_grid(mg, 'topographic__elevation', grid_units=['km','km'], var_name='Elevation (km)')
cp = ChannelProfiler(mg, number_of_watersheds=7, minimum_channel_threshold=0.01, main_channel_only=False)
cp.run_one_step()
cp.plot_profiles_in_map_view()
figure('final slope-area plot')
loglog(mg.at_node['drainage_area'], mg.at_node['topographic__steepest_slope'],'.')
xlabel('Drainage area (km**2)')
ylabel('Local slope')
title('Slope-Area plot for whole landscape')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can load a dataset using the following function
Step2: Some datasets provide a description in the DESCR field
Step3: We can see the content of the datasets by printing them out
Step4: The data of the digits dataset is stored in the data member. This data represents the features of the digit image.
Step5: The target member contains the real target labels or values of the feature sets, that is the numbers that the feature sets represent.
Step6: In case of the digits dataset the 2D shapes of the images are mapped on a 8x8 matrix. You can print them out using the images member
Step7: The digits dataset is a set of images of digits that can be used to train a classifier and test the classification on unseen images. To use a Support Vector Classifier we import the svm module
Step8: We create a classifier instance with manually set parameters. The parameters can be automatically set using various methods.
Step9: The classifier instance has to be trained on the data. The fit method of the instance requires two parameters, the features and the array with the corresponding classes or labels. The features are stored in the data member. The labels are stored in the target member. We use all but the last data and target element for training or fitting.
Step10: We can use the predict method to request a guess about the last element in the data member
Step11: Storing Models
Step12: To store the model in a file, we can use the pickle module
Step13: We can serialize the classifier to a variable that we can process or save to disk
Step14: We will save the model to a file irisModel.dat.
Step15: The model can be read back into memory using the following code
Step16: We can use this unpickled classifier2 in the same way as shown above
Step17: Nearest Neighbor Classification
Step18: We can print out the unique list (or array) of classes (or targets) from the iris dataset using the following code
Step19: We can split the iris dataset in a training and testing dataset using random permutations.
Step20: Clustering
Step21: Classification
Step22: Polynomial kernel
Step23: RBF kernel (Radial Basis Function)
Step27: Logistic Regression
|
<ASSISTANT_TASK:>
Python Code:
from sklearn import datasets
diabetes = datasets.load_diabetes()
iris = datasets.load_iris()
print(iris.DESCR)
digits = datasets.load_digits()
print(digits)
print(digits.data)
print(digits.target)
print(digits.DESCR)
print(0, '\n', digits.images[0])
print()
print(1, '\n', digits.images[1])
from sklearn import svm
classifier = svm.SVC(gamma=0.001, C=100.)
classifier.fit(digits.data[:-1], digits.target[:-1])
print("Prediction:", classifier.predict(digits.data[-1:]))
print("Image:\n", digits.images[-1])
print("Label:", digits.target[-1])
classifier.fit(iris.data, iris.target)
import pickle
s = pickle.dumps(classifier)
ofp = open("irisModel.dat", mode='bw')
ofp.write(s)
ofp.close()
ifp = open("irisModel.dat", mode='br')
model = ifp.read()
ifp.close()
classifier2 = pickle.loads(model)
print("Prediction:", classifier2.predict(iris.data[0:1]))
print("Target:", iris.target[0])
import numpy
print(iris.target)
print(numpy.unique(iris.target))
numpy.random.seed(0)
indices = numpy.random.permutation(len(iris.data))
print(indices)
indices = numpy.random.permutation(len(iris.data))
print(indices)
text = "Hello"
for i in range(len(text)):
print(i, ':', text[i])
irisTrain_data = iris.data[indices[:-10]]
irisTrain_target = iris.target[indices[:-10]]
irisTest_data = iris.data[indices[-10:]]
irisTest_target = iris.target[indices[-10:]]
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(irisTrain_data, irisTrain_target)
knn.predict(irisTest_data)
irisTest_target
from sklearn import cluster
k_means = cluster.KMeans(n_clusters=3)
k_means.fit(iris.data)
print(k_means.labels_[::10])
print(iris.target[::10])
svc = svm.SVC(kernel='linear', gamma=0.001, C=100.)
svc.fit(digits.data[:-1], digits.target[:-1])
print(svc.predict(digits.data[-1:]))
print(digits.target[-1:])
svc = svm.SVC(kernel='poly', degree=3, gamma=0.001, C=100.)
svc.fit(digits.data[:-1], digits.target[:-1])
print(svc.predict(digits.data[-1:]))
print(digits.target[-1:])
svc = svm.SVC(kernel='rbf', gamma=0.001, C=100.)
svc.fit(digits.data[:-1], digits.target[:-1])
print(svc.predict(digits.data[-1:]))
print(digits.target[-1:])
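# Optional comparison (a sketch, not part of the original tutorial): score each
# kernel on a train/test split instead of a single held-out sample. The module
# path matches the old scikit-learn used elsewhere in this notebook; in
# scikit-learn >= 0.18 use sklearn.model_selection instead.
from sklearn.cross_validation import train_test_split
Xtr, Xte, ytr, yte = train_test_split(digits.data, digits.target, test_size=0.25, random_state=0)
for kern in ('linear', 'poly', 'rbf'):
    clf = svm.SVC(kernel=kern, gamma=0.001, C=100.)
    print(kern, clf.fit(Xtr, ytr).score(Xte, yte))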
from sklearn import linear_model
logistic = linear_model.LogisticRegression(C=1e5)
logistic.fit(irisTrain_data, irisTrain_target)
logistic.predict(irisTest_data)
irisTest_target
from sklearn import ensemble
rfc = ensemble.RandomForestClassifier()
rfc.fit(irisTrain_data, irisTrain_target)
rfc.predict(irisTest_data)
irisTest_target
text_s1 = """
User (computing)
A user is a person who uses a computer or network service. Users generally use a system or a software product[1] without the technical expertise required to fully understand it.[1] Power users use advanced features of programs, though they are not necessarily capable of computer programming and system administration.[2][3]
A user often has a user account and is identified to the system by a username (or user name). Other terms for username include login name, screenname (or screen name), nickname (or nick) and handle, which is derived from the identical Citizen's Band radio term.
Some software products provide services to other systems and have no direct end users.
End user
See also: End user
End users are the ultimate human users (also referred to as operators) of a software product. The term is used to abstract and distinguish those who only use the software from the developers of the system, who enhance the software for end users.[4] In user-centered design, it also distinguishes the software operator from the client who pays for its development and other stakeholders who may not directly use the software, but help establish its requirements.[5][6] This abstraction is primarily useful in designing the user interface, and refers to a relevant subset of characteristics that most expected users would have in common.
In user-centered design, personas are created to represent the types of users. It is sometimes specified for each persona which types of user interfaces it is comfortable with (due to previous experience or the interface's inherent simplicity), and what technical expertise and degree of knowledge it has in specific fields or disciplines. When few constraints are imposed on the end-user category, especially when designing programs for use by the general public, it is common practice to expect minimal technical expertise or previous training in end users.[7] In this context, graphical user interfaces (GUIs) are usually preferred to command-line interfaces (CLIs) for the sake of usability.[8]
The end-user development discipline blurs the typical distinction between users and developers. It designates activities or techniques in which people who are not professional developers create automated behavior and complex data objects without significant knowledge of a programming language.
Systems whose actor is another system or a software agent have no direct end users.
User account
A user's account allows a user to authenticate to a system and potentially to receive authorization to access resources provided by or connected to that system; however, authentication does not imply authorization. To log in to an account, a user is typically required to authenticate oneself with a password or other credentials for the purposes of accounting, security, logging, and resource management.
Once the user has logged on, the operating system will often use an identifier such as an integer to refer to them, rather than their username, through a process known as identity correlation. In Unix systems, the username is correlated with a user identifier or user id.
Computer systems operate in one of two types based on what kind of users they have:
Single-user systems do not have a concept of several user accounts.
Multi-user systems have such a concept, and require users to identify themselves before using the system.
Each user account on a multi-user system typically has a home directory, in which to store files pertaining exclusively to that user's activities, which is protected from access by other users (though a system administrator may have access). User accounts often contain a public user profile, which contains basic information provided by the account's owner. The files stored in the home directory (and all other directories in the system) have file system permissions which are inspected by the operating system to determine which users are granted access to read or execute a file, or to store a new file in that directory.
While systems expect most user accounts to be used by only a single person, many systems have a special account intended to allow anyone to use the system, such as the username "anonymous" for anonymous FTP and the username "guest" for a guest account.
Usernames
Various computer operating-systems and applications expect/enforce different rules for the formats of user names.
In Microsoft Windows environments, for example, note the potential use of:[9]
User Principal Name (UPN) format - for example: UserName@orgName.com
Down-Level Logon Name format - for example: DOMAIN\accountName
Some online communities use usernames as nicknames for the account holders. In some cases, a user may be better known by their username than by their real name, such as CmdrTaco (Rob Malda), founder of the website Slashdot.
Terminology
Some usability professionals have expressed their dislike of the term "user", proposing it to be changed.[10] Don Norman stated that "One of the horrible words we use is 'users'. I am on a crusade to get rid of the word 'users'. I would prefer to call them 'people'."[11]
See also
Information technology portal iconSoftware portal
1% rule (Internet culture)
Anonymous post
Pseudonym
End-user computing, systems in which non-programmers can create working applications.
End-user database, a collection of data developed by individual end-users.
End-user development, a technique that allows people who are not professional developers to perform programming tasks, i.e. to create or modify software.
End-User License Agreement (EULA), a contract between a supplier of software and its purchaser, granting the right to use it.
User error
User agent
User experience
User space
"""
text_s2 = """
Personal account
A personal account is an account for use by an individual for that person's own needs. It is a relative term to differentiate them from those accounts for corporate or business use. The term "personal account" may be used generically for financial accounts at banks and for service accounts such as accounts with the phone company, or even for e-mail accounts.
Banking
In banking "personal account" refers to one's account at the bank that is used for non-business purposes. Most likely, the service at the bank consists of one of two kinds of accounts or sometimes both: a savings account and a current account.
Banks differentiate their services for personal accounts from business accounts by setting lower minimum balance requirements, lower fees, free checks, free ATM usage, free debit card (Check card) usage, etc. The term does not apply to any one service or limit the banks from providing the same services to non-individuals. Personal account can be classified into three categories: 1. Persons of Nature, 2. Persons of Artificial Relationship, 3. Persons of Representation.
At the turn of the 21st century, many banks started offering free checking, a checking account with no minimum balance, a free check book, and no hidden fees. This encouraged Americans who would otherwise live from check to check to open their "personal" account at financial institutions. For businesses that issue corporate checks to employees, this enables reduction in the amount of paperwork.
Finance
In the financial industry, 'personal account' (usually "PA") refers to trading or investing for yourself, rather than the company one is working for. There are often restrictions on what may be done with a PA, to avoid conflict of interest.
"""
test_text = """
A user account is a location on a network server used to store a computer username, password, and other information. A user account allows or does not allow a user to connect to a network, another computer, or other share. Any network that has multiple users requires user accounts.
"""
from nltk import word_tokenize, sent_tokenize
sentences_s1 = sent_tokenize(text_s1)
#print(sentences_s1)
toksentences_s1 = [ word_tokenize(sentence) for sentence in sentences_s1 ]
#print(toksentences_s1)
tokens_s1 = set(word_tokenize(text_s1))
tokens_s2 = set(word_tokenize(text_s2))
#print(set.intersection(tokens_s1, tokens_s2))
unique_s1 = tokens_s1 - tokens_s2
unique_s2 = tokens_s2 - tokens_s1
#print(unique_s1)
#print(unique_s2)
testTokens = set(word_tokenize(test_text))
print(len(set.intersection(testTokens, unique_s1)))
print(len(set.intersection(testTokens, unique_s2)))
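# A small extension (a sketch): normalize each overlap count by the size of the
# corresponding unique-token set, so the two scores are directly comparable
# even when the source documents have very different lengths.
print(len(set.intersection(testTokens, unique_s1)) / float(len(unique_s1)))
print(len(set.intersection(testTokens, unique_s2)) / float(len(unique_s2)))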
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Preprocess class requires many input parameters to function. Below is a list of examples.
Step2: The following processing examples are in order from Bensen et al. (2007). The final example has all of them combined.
Step3: Next, downsample the example_trace. The output downsampled trace is dictated by the variable PERIOD_RESAMPLE. The new sample rate is 1/PERIOD_RESAMPLE
Step4: Normalise the trace, using either the time-normalisation or the one-bit normalisation procedure.
Step5: Finally spectrally whiten the trace.
|
<ASSISTANT_TASK:>
Python Code:
from pysismo.pspreprocess import Preprocess
from obspy import read
from obspy.core import Stream
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
# list of example variables for Preprocess class
FREQMAX = 1./1 # bandpass parameters
FREQMIN = 1/20.0
CORNERS = 2
ZEROPHASE = True
ONEBIT_NORM = False # one-bit normalization
PERIOD_RESAMPLE = 0.02 # resample period to decimate traces, after band-pass
FREQMIN_EARTHQUAKE = 1/75.0 # earthquakes periods band
FREQMAX_EARTHQUAKE = 1/25.0
WINDOW_TIME = 0.5 * 1./FREQMAX_EARTHQUAKE # time window to calculate time-normalisation weights
WINDOW_FREQ = 0.0002 # freq window (Hz) to smooth ampl spectrum
# here is a list of all of the functions and variables that the Preprocess class contains
help(Preprocess)
# set the path to the desired waveform, the example HOLS.mseed is provided.
example_path = 'tools/examples/HOLS.mseed'
# import a trace from the example waveform
example_trace = read(example_path)[0]
# initialise the Preprocess class
PREPROCESS = Preprocess(FREQMIN, FREQMAX, FREQMIN_EARTHQUAKE,
FREQMAX_EARTHQUAKE, CORNERS, ZEROPHASE,
PERIOD_RESAMPLE, WINDOW_TIME, WINDOW_FREQ,
ONEBIT_NORM)
# process the band-pass filtered trace
# the bands are set from the above freqmax and freqmin parameters entered when the class is initialised.
example_trace = PREPROCESS.bandpass_filt(example_trace)
st = Stream(traces=[example_trace])
st.plot()
# Previous trace sample rate:
print('Initial trace sample rate:', example_trace.stats.sampling_rate)
# Downsample trace
example_trace = PREPROCESS.trace_downsample(example_trace)
print('Downsampled trace sample rate:', example_trace.stats.sampling_rate)
example_trace_copy = example_trace
# one-bit normalization
example_trace.data = np.sign(example_trace.data)
st = Stream(traces=[example_trace])
# plot the one-bit normalised trace
st.plot()
# copy the trace for time normalisation
example_trace = example_trace_copy
# process for time normalisation
example_trace = PREPROCESS.time_norm(example_trace, example_trace_copy)
st = Stream(traces=[example_trace])
# plot the time normalised trace
st.plot()
# process the whitened spectrum for the trace
example_trace = PREPROCESS.spectral_whitening(example_trace)
st = Stream(traces=[example_trace])
# plot the time normalised trace
st.plot()
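# Combined example (a minimal sketch): apply all of the Bensen et al. (2007)
# steps in order on a fresh copy of the raw trace. Passing trace.copy() to
# time_norm mirrors the earlier cell, which handed in a copy of the trace.
trace = read(example_path)[0]
trace = PREPROCESS.bandpass_filt(trace)
trace = PREPROCESS.trace_downsample(trace)
trace = PREPROCESS.time_norm(trace, trace.copy())
trace = PREPROCESS.spectral_whitening(trace)
Stream(traces=[trace]).plot()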
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We'll work in our test directory, where ActivitySim has saved the estimation data bundles.
Step2: Load data and prep model for estimation
Step3: Review data loaded from the EDB
Step4: Utility specification
Step5: Chooser data
Step6: Estimate
Step7: Estimated coefficients
Step8: Output Estimation Results
Step9: Write the model estimation report, including coefficient t-statistic and log likelihood
Step10: Next Steps
|
<ASSISTANT_TASK:>
Python Code:
import os
import larch # !conda install larch -c conda-forge # for estimation
import pandas as pd
os.chdir('test')
modelname = "trip_mode_choice"
from activitysim.estimation.larch import component_model
model, data = component_model(modelname, return_data=True)
data.coefficients
data.spec
data.chooser_data
model.load_data()
model.doctor(repair_ch_av='-')
model.loglike()
model.maximize_loglike(method='SLSQP', options={"maxiter": 1000})
model.parameter_summary()
from activitysim.estimation.larch import update_coefficients
result_dir = data.edb_directory/"estimated"
update_coefficients(
model, data, result_dir,
output_file=f"{modelname}_coefficients_revised.csv",
);
for m in model:
m.to_xlsx(
result_dir/f"{m.title}_{modelname}_model_estimation.xlsx",
data_statistics=False,
)
pd.read_csv(result_dir/f"{modelname}_coefficients_revised.csv")
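# Optional check (a sketch, assuming the standard coefficient_name/value
# columns that ActivitySim coefficient files use): compare the estimated
# coefficients against the starting values loaded from the EDB.
revised = pd.read_csv(result_dir/f"{modelname}_coefficients_revised.csv")
merged = data.coefficients.merge(revised, on='coefficient_name', suffixes=('_orig', '_est'))
print((merged['value_est'] - merged['value_orig']).describe())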
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Importing a data set
Step2: We can view the first few rows of the file we just imported.
Step3: Plotting the data
Step4: Edit and re-plot
|
<ASSISTANT_TASK:>
Python Code:
# import the software packages needed
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
inline_rc = dict(mpl.rcParams)
# Combined land and ocean temperature averages (LOTI: Land Ocean Temperature Index)
data1 = pd.read_csv('http://github.com/adamlamee/CODINGinK12-data/raw/master/LOTI.csv', header=1).replace(to_replace="***", value=np.NaN)
data_LOTI = data1.apply(lambda x: pd.to_numeric(x, errors='ignore'))
# Only land temperature averages
data2 = pd.read_csv('http://github.com/adamlamee/CODINGinK12-data/raw/master/LAND.csv', header=1).replace(to_replace="***", value=np.NaN)
data_LAND = data2.apply(lambda x: pd.to_numeric(x, errors='ignore'))
# The .head(n) command displays the first n rows of the file.
data_LAND.head(5)
x1 = data_LOTI.Year
y1 = data_LOTI.JanDec
# plt.plot() makes a line graph, by default
fig = plt.figure(figsize=(10, 5))
plt.plot(x1, y1)
plt.title('Average land and ocean temperature readings')
plt.xlabel('Year')
plt.ylabel('Percent temp change')
x2 = data_LAND.Year
y2 = data_LAND.JanDec
# plt.plot() makes a line graph, by default
fig = plt.figure(figsize=(10, 5))
plt.plot(x2, y2)
plt.title('Land temperature readings')
plt.xlabel('Year')
plt.ylabel('Percent temp change')
# Wow, this needs a title and axis labels! Let's add them this time.
fig = plt.figure(figsize=(10, 5))
plt.plot(x1, y1, label="Land and Ocean")
plt.plot(x2, y2, label="Land only")
plt.title('Temperature readings: land vs. land and ocean')
plt.xlabel('Year')
plt.ylabel('Percent temp change')
plt.legend()
plt.show()
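# A simple extension (a sketch): overlay a 10-year rolling mean on the combined
# land-and-ocean series to make the long-term trend easier to see.
fig = plt.figure(figsize=(10, 5))
plt.plot(x1, y1, alpha=0.3, label="Annual (Land and Ocean)")
plt.plot(x1, y1.rolling(10, center=True).mean(), label="10-yr rolling mean")
plt.xlabel('Year')
plt.ylabel('Percent temp change')
plt.legend()
plt.show()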
plt.xkcd()
fig = plt.figure(figsize=(10, 5))
plt.plot(x1, y1)
# to make normal plots again
mpl.rcParams.update(inline_rc)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 0. General note
Step2: 3. Compare
Step3: <img src='./tables/Jamieson_Pt_1.png'>
|
<ASSISTANT_TASK:>
Python Code:
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import numpy as np
from uncertainties import unumpy as unp
import pytheos as eos
eta = np.linspace(0., 0.18, 37)
jamieson_pt = eos.platinum.Jamieson1982()
jamieson_pt.print_equations()
jamieson_pt.print_parameters()
v0 = 60.421757141035926
jamieson_pt.three_r
v = v0 * (1.-eta)
temp = 1500.
p = jamieson_pt.cal_p(v, temp * np.ones_like(v))
print('for T = ', temp)
for eta_i, p_i in zip(eta, p):
print("{0: .3f} {1: .2f}".format(eta_i, p_i))
v = jamieson_pt.cal_v(p, temp * np.ones_like(p), min_strain=0.6)
print(1.-(v/v0))
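# A quick visual check (a sketch): pressure vs. compression at a few
# temperatures. unp.nominal_values strips the uncertainty part (if any) from
# the cal_p output so the curves can be plotted directly.
for T in (300., 1500., 3000.):
    p_T = jamieson_pt.cal_p(v0 * (1. - eta), T * np.ones_like(eta))
    plt.plot(eta, unp.nominal_values(p_T), label='%g K' % T)
plt.xlabel(r'$\eta = 1 - V/V_0$')
plt.ylabel('P (GPa)')
plt.legend()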
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note that we may want to use the "funny tanh" function instead, because (a) $f(\pm 1) = \pm 1$, (b) the second derivative is a maximum at $x=1$, and (c) the effective gain (change in the variance) is close to 1. (LeCun, 2012). Here we see that the function indeed keeps the effective gain closer to 1, compared to a regular tanh
Step3: Using the vector notation, we can calculate all values of the hidden nodes at once. Given a $1 \times (I+1)$ row vector of input nodes (for a particular sample) and an $(I+1) \times J$ weight matrix $W^{(h)}$, the hidden activations are ${\bf z} = \sigma({\bf x} W^{(h)})$.
Step6: 2. Backward propagation
Step8: 3. Weight update phase
Step10: Putting them all together
Step12: And we will also initialize the weights using the standard Gaussian.
Step17: We also want to keep track of the network training. To do this, we evaluate both the per-sample average of the squared loss ($E$) and the classification error. To calculate the latter, we first take the sign of the network outputs - these are the predictions of the network - and compare them to the target vector ${\bf t}$. For each sample, the binary error is simply the average of the number of outputs with the incorrect signs.
Step19: Training Example
Step25: Note that sometimes (often?) the network gets stuck in a local minimum. Also, the training squared error doesn't necessarily decrease smoothly; this can be alleviated by using batch updates (in the next section).
Step26: After training until satisfaction, plot how the network makes predictions
Step29: Example
Step30: Let's see where the network made incorrect classifications.
|
<ASSISTANT_TASK:>
Python Code:
# Pull in the numpy/matplotlib names used throughout this notebook, plus the
# IPython/ipywidgets helpers used by the interactive training cells below.
%pylab inline
from IPython.display import display, clear_output
from ipywidgets import interact, Button, IntSlider
xs=linspace(-3,3,100)
plt.plot(xs,tanh(xs)); plt.grid()
def funny_tanh(x):
return 1.7159 * tanh(x*2/3) + 0.001 * x
xs=linspace(-3,3,100)
plt.plot(xs,funny_tanh(xs)); plt.grid()
from numpy.random import standard_normal
X=standard_normal(10000)
print("X.std() is %f" % X.std()) # should be close to 1
print("tanh(X).std() is %f" % tanh(X).std())
print("funny_tanh(X).std() is %f" % funny_tanh(X).std())
def with_bias(x):
return c_[ones((x.shape[0], 1)), x]
def activations_for(x, Wh, Wo, activation_function=tanh):
Calculates the activations for the hidden and output nodes. (Feed-forward phase)
inputs:
x: N x I matrix, where each row is a particular observation.
Wh: I+1 x J matrix of hidden weights.
Wo: J+1 x K matrix of output weights.
activation_function: the activation function to use.
returns:
z: N x J matrix of hidden activations.
y: N x K matrix of output activations.
z_raw: N x J matrix of "raw" hidden activations.
y_raw: N x K matrix of "raw" output activations.
z_raw = dot(with_bias(x), Wh)
z = activation_function(z_raw)
y_raw = dot(with_bias(z), Wo)
y = activation_function(y_raw)
return z, y, z_raw, y_raw
def approximate_derivative(f, x):
Return the (first-order) approximation of the derivative of f at x.
Of course, we can just use the symbolic derivatives for the activation_function too.
epsilon=1e-8
return (f(x+epsilon) - f(x)) / epsilon
def deltas_for(t, z, y, z_raw, y_raw, Wh, Wo, activation_function=tanh):
Calculates the delta values for the output and hidden nodes.
inputs:
t: N x K matrix of target values.
For other parameters, see "activations_for" function.
returns:
dh: N x J matrix of deltas for the hidden nodes.
do: N x K matrix of deltas for the output nodes.
sigma_prime = lambda x: approximate_derivative(activation_function, x) # derivative of the activation
do = multiply(y - t, sigma_prime(y_raw))
dh = multiply(dot(do, Wo[1:].T), sigma_prime(z_raw))
return dh, do
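# Note on the hidden deltas above: Wo[1:] drops the bias row of the output
# weights, since the bias unit has no incoming connection and therefore
# receives no error signal from the layer above.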
def gradient_descent(do, dh, x, z, Wh, Wo, eta = 0.1):
Updates the hidden and output weights using gradient descent and return them.
inputs:
do: N x K matrix of deltas for the output nodes.
dh: N x J matrix of deltas for the hidden nodes.
x, z, Wo: see "activations_for".
eta: the learning rate to use.
returns:
updated Wh and Wo.
N = x.shape[0]
Wh -= eta / N * dot(with_bias(x).T, dh)
Wo -= eta / N * dot(with_bias(z).T, do)
return Wh, Wo
def update_weights(t, x, Wh, Wo, activation_function=tanh, eta=0.1):
Updates Wh, Wo and returns them from the traning set x and targets t.
inputs:
x, Wh, Wo and t.
returns:
updated Wh and Wo.
z, y, z_raw, y_raw = activations_for(x, Wh, Wo, activation_function=activation_function)
dh, do = deltas_for(t, z, y, z_raw, y_raw, Wh, Wo, activation_function=activation_function)
return gradient_descent(do, dh, x, z, Wh, Wo, eta = eta)
from numpy.random import standard_normal
def initialize_weights(I, J, K):
Generates initial hidden and output weights.
inputs:
I: number of input units.
J: number of hidden units.
K: number of output units.
returns:
Wh: (I+1) x J matrix
Wo: (J+1) x K matrix
Wh = standard_normal((I + 1, J))
Wo = standard_normal((J + 1, K))
return Wh, Wo
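# Quick sanity check (illustrative values, not part of the original notebook):
# a 2-3-1 network should get a (3, 3) hidden and a (4, 1) output weight matrix,
# the extra row in each accounting for the bias unit.
Whd, Wod = initialize_weights(2, 3, 1)
print(Whd.shape, Wod.shape)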
def errors_for(t, y):
Make predictions using the network weights Wh, Wo and
compute the average squared loss and the binary loss.
inputs:
t: N x I matrix of target values.
y: N x I matrix of output activations.
returns:
mse: N x 1 vector of mean squared errors
mbe: N x 1 vector of mean binary errors
mse = ((y-t)**2).mean()
mbe = (sign(y) != sign(t)).mean()
return mse, mbe
Some plotting support functions.
def plot_errors(ax, epoch, epochs, errors, ylabel='', ylim=1.0):
Plots the error graph.
ax.plot(arange(epoch), errors[:epoch])
ax.set_xlim([0, epochs])
ax.set_ylim([0, ylim])
ax.set_xlabel("Training epoch")
ax.set_ylabel(ylabel)
ax.set_title(ylabel)
ax.grid()
def plot_network(ax, x, z, y, Wh, Wo):
Plots the graph representation of the network.
cmap = plt.get_cmap('RdBu')
for l, z in enumerate([x, z, y]):
for i, zi in enumerate(with_bias(z).T):
if l < 2 or i > 0: # Don't show bias node for the output layer
ax.add_artist(plt.Circle((1 + l*2, -i-1), 0.25, ec='k', fc=cmap(zi[0]/2+.5)))
for l, W in enumerate([Wh, Wo]):
for i, wi in enumerate(W):
for j, wij in enumerate(wi):
ax.arrow(1.3 + l*2, -i-1, 1.3, (i-j-1)*.9, head_width=0.05, head_length=0.1, ec=cmap(wij/2+.5), lw=abs(wij)*5.0)
m = max([x.shape[1], z.shape[1], y.shape[1]])+1
ax.set_xlim([0, 2*3])
ax.set_ylim([-m-1,0])
ax.set_aspect('equal')
ax.axis('off')
# Example:
fig0, ax = plt.subplots()
plot_network(ax, array([[-1,-0.5]]), ones((1,2)), zeros((1,2)), -ones((3,2))*0.5, ones((3, 2)))
def plot_training(epoch, epochs, errors, x, z, y, t, Wh, Wo):
if epoch is None:
return
fig, axs = plt.subplots(1,3,figsize=(15,5))
aerrors=array(errors).T
plot_network(axs[0], x, z, y, Wh, Wo)
plot_errors(axs[1], epoch, epochs, aerrors[0], ylabel='RMSE', ylim=3.0)
plot_errors(axs[2], epoch, epochs, aerrors[1], ylabel='RMBE', ylim=1.0)
clear_output(wait=True)
# Print what's going on
print('epoch %d / %d' % (epoch + 1, epochs))
print('x: %s' % str(x))
print('t: %s' % str(t))
from numpy.random import randint
def train(X, T, J, epochs = 1000, activation_function = tanh, eta = 0.1):
Perform online training (one sample at a time).
Wh, Wo = initialize_weights(X.shape[1], J, T.shape[1])
errors = []
for epoch in range(epochs):
n = randint(X.shape[0])
t = T[n, newaxis]
x = X[n, newaxis]
Wh, Wo = update_weights(t, x, Wh, Wo, activation_function=activation_function, eta = eta)
# Evaluate on all targets
Z, Y, _, _ = activations_for(X, Wh, Wo, activation_function=activation_function)
z = Z[n, newaxis]
y = Y[n, newaxis]
errors.append(errors_for(T, Y))
# Yield to outside
yield epoch, epochs, errors, x, z, y, t, Wh, Wo
# The XOR pattern
XOR_X=array([
[ 1,-1],
[-1, 1],
[ 1, 1],
[-1,-1]], dtype=float)
XOR_T=array([
[ 1],
[ 1],
[-1],
[-1]], dtype=float)
training = train(XOR_X, XOR_T, J = 2, epochs = 1000) # 1000 epochs is usually more than enough (unless it gets stuck)
step1_button = Button(description="Train for one iteration") # Button to perform one iteration
step1_button.on_click(lambda b: plot_training(*next(training, (None,)*8)))
display(step1_button)
step100_button = Button(description="Train for 100 iterations") # Button to perform 100 iterations
step100_button.on_click(lambda b: [next(training, True) for i in range(1,100)] and plot_training(*next(training, (None,)*8)))
display(step100_button)
class NeuralNetwork(object):
def __init__(self, num_nodes=[2,2,1], activation_function=tanh, eta = 0.1):
self.num_nodes = num_nodes
self.activation_function = activation_function
# The derivative
self.sigma_prime = lambda x: approximate_derivative(self.activation_function, x)
self.eta = eta
self.initialize_weights()
def initialize_weights(self):
self.Ws = [standard_normal((n + 1, m)) for n, m in zip(self.num_nodes[:-1], self.num_nodes[1:])]
def activations_for(self, x):
Feed-forward calculations.
z_raws = [x]
zs = [x]
for W in self.Ws:
# list[-1] gives the last element on the list.
z_raws.append(dot(with_bias(zs[-1]), W))
zs.append(self.activation_function(z_raws[-1]))
return zs, z_raws
def deltas_for(self, t, zs, z_raws):
Calculates the delta values for the output and hidden nodes.
deltas = [multiply(zs[-1] - t, self.sigma_prime(z_raws[-1]))]
for W, z_raw in reversed(list(zip(self.Ws, z_raws))[1:]):  # list() needed in Python 3
deltas.append(multiply(dot(deltas[-1], W[1:].T), self.sigma_prime(z_raw)))
deltas = list(reversed(deltas)) # Keep the lower layer to higher layer order
return deltas, zs
def gradient_descent(self, deltas, zs):
N = zs[0].shape[0]
return [W - self.eta * dot(with_bias(z).T, delta) / N for W, z, delta in zip(self.Ws, zs[:-1], deltas)]
def update_weights(self, t, x):
self.Ws = self.gradient_descent(*self.deltas_for(t, *self.activations_for(x)))
def predictions_for(self, x):
zs, _ = self.activations_for(x)
return zs[-1]
def errors_for(self, t, x):
y = self.predictions_for(x)
mse = ((y-t)**2).mean()
mbe = (sign(y) != sign(t)).mean()
return mse, mbe
def train(self, X, T, epochs = 1000):
Full-batch training (all samples are used in every epoch).
self.initialize_weights()
errors = []
for epoch in range(epochs):
# Use all data to update weights
self.update_weights(T, X)
# Evaluate on all targets
errors.append(self.errors_for(T, X))
# Yield to outside
yield epoch, epochs, errors
def plot_network(self, ax, x):
Plots the graph representation of the network.
cmap = plt.get_cmap('RdBu')
zs, _ = self.activations_for(x)
for l, z in enumerate(zs):
for i, zi in enumerate(with_bias(z).T):
if l < len(zs) - 1 or i > 0: # Don't show bias node for the output layer
ax.add_artist(plt.Circle((1 + l*2, -i-1), 0.25, ec='k', fc=cmap(zi[0]/2+.5)))
for l, W in enumerate(self.Ws):
for i, wi in enumerate(W):
for j, wij in enumerate(wi):
ax.arrow(1.3 + l*2, -i-1, 1.3, (i-j-1)*.9, head_width=0.05, head_length=0.1, ec=cmap(wij/2+.5), lw=abs(wij)*5.0)
m = max(self.num_nodes)+1
ax.set_xlim([0, 2*len(self.num_nodes)])
ax.set_ylim([-m-1,0])
ax.set_aspect('equal')
ax.axis('off')
def _plot_errors(self, ax, epoch, epochs, errors, ylabel='', ylim=1.0):
Plots the error graph.
ax.plot(arange(epoch), errors[:epoch])
ax.set_xlim([0, epochs])
ax.set_ylim([0, ylim])
ax.set_xlabel("Training epoch")
ax.set_ylabel(ylabel)
ax.set_title(ylabel)
ax.grid()
def plot_training(self, epoch, epochs, errors):
if epoch is None:
return
fig, axs = plt.subplots(1,2,figsize=(10,5))
aerrors=array(errors).T
self._plot_errors(axs[0], epoch, epochs, aerrors[0], ylabel='RMSE', ylim=3.0)
self._plot_errors(axs[1], epoch, epochs, aerrors[1], ylabel='RMBE', ylim=1.0)
clear_output(wait=True)
net = NeuralNetwork(num_nodes=[XOR_X.shape[1],2,XOR_T.shape[1]])
net_training = net.train(XOR_X, XOR_T, epochs = 1000)
net_train_button = Button(description="Train for 100 iterations") # Button to perform 100 iterations
net_train_button.on_click(lambda b: [next(net_training, True) for i in range(1,100)] and net.plot_training(*next(net_training, (None,)*3)))
display(net_train_button)
fig1, ax = plt.subplots(figsize=(5,5))
interact(lambda n: net.plot_network(ax, XOR_X[n-1,newaxis]) or display(fig1),
n=IntSlider(description='Make predictions for example ',min=1,max=XOR_X.shape[0])); plt.close()
from sklearn.decomposition import PCA
from sklearn.cross_validation import train_test_split, ShuffleSplit
from sklearn.preprocessing import OneHotEncoder
class ClassifierNetwork(NeuralNetwork):
def train(self, X, T, epochs = 1000, test_size = 0.2):
Mini-batch training.
# Prepare the output encoder
self.encoder = OneHotEncoder(sparse=False).fit(T[:,newaxis])
T1 = self.encoder.transform(T[:,newaxis])*2-1 # Convert to {-1, 1}
# Split into training and test
X_train, X_test, T1_train, T1_test = train_test_split(X, T1, test_size=test_size)
# Preprocess the data using PCA
self.pca = PCA(n_components = self.num_nodes[0], whiten=True, copy=True)
Xw_train = self.pca.fit_transform(X_train)
Xw_test = self.pca.transform(X_test)
# Then do the training
self.initialize_weights()
train_errors = []
test_errors = []
# Use 10% of the training data for each iteration (mini-batch)
for epoch, (batch_index, _) in enumerate(ShuffleSplit(Xw_train.shape[0], n_iter=epochs, train_size=0.1)):
self.update_weights(T1_train[batch_index], Xw_train[batch_index])
# Evaluate for training and test sets
train_errors.append(self.errors_for(T1_train, Xw_train))
test_errors.append(self.errors_for(T1_test, Xw_test))
# Yield to outside
yield epoch, epochs, train_errors, test_errors
def decode(self, Y):
Decodes the output node values back to the target vector.
return Y.argmax(axis=1) # Pick the one with the maximum activation
def predictions_for(self, x):
xw = self.pca.transform(x)
y = super(ClassifierNetwork, self).predictions_for(xw)
return self.decode(y)
def errors_for(self, t, x):
y = super(ClassifierNetwork, self).predictions_for(x)
mse = multiply(y-t,y-t).mean()
mce = (self.decode(y) != self.decode(t)).mean()
return mse, mce
def plot_training(self, fig, axs, epoch, epochs, train_errors, test_errors):
if epoch is None:
return
atrain_errors=array(train_errors).T
atest_errors=array(test_errors).T
self._plot_errors(axs[0], epoch, epochs, c_[atrain_errors[0],atest_errors[0]], ylabel='RMSE', ylim=3.0)
axs[0].legend(['Training', 'Test'], loc="best")
self._plot_errors(axs[1], epoch, epochs, c_[atrain_errors[1],atest_errors[1]], ylabel='Classification Error', ylim=1.0)
axs[1].legend(['Training', 'Test'], loc="best")
clear_output(wait=True)
display(fig)
[ax.cla() for ax in axs]
from sklearn.datasets import load_digits
digits = load_digits()
digits_net = ClassifierNetwork(num_nodes=[20,20,10])
fig2, axs = plt.subplots(1,2,figsize=(10,5))
for epoch, epochs, train_errors, test_errors in digits_net.train(digits.data, digits.target, epochs=2000):
if mod(epoch+1, 100) == 0:
digits_net.plot_training(fig2, axs, epoch, epochs, train_errors, test_errors)
plt.close()
fig3, ax = plt.subplots(figsize=(1,1))
Y = digits_net.predictions_for(digits.data)
incorrect = flatnonzero(Y != digits.target)
@interact
def plot_digits_predictions(n=(0, len(incorrect)-1)):
image_id = incorrect[n]
ax.imshow(digits.images[image_id], cmap='gray', interpolation='none')
ax.axis('off')
clear_output(wait=True)
display(fig3)
print('Target = %d, Prediction = %d' % (digits.target[image_id], Y[image_id]))
plt.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create the client
Step2: Apply for an API key
Step3: The call returns at most topk summary sentences together with their weights; weights lie in $[0, 1]$. Because of the trigram-blocking trick, the actual number of returned sentences may be smaller than topk.
Step4: Traditional Chinese
|
<ASSISTANT_TASK:>
Python Code:
!pip install hanlp_restful -U
from hanlp_restful import HanLPClient
HanLP = HanLPClient('https://www.hanlp.com/api', auth=None, language='zh') # auth=None means anonymous access; language: 'zh' for Chinese, 'mul' for multilingual
text = '''
据DigiTimes报道,在上海疫情趋缓,防疫管控开始放松后,苹果供应商广达正在逐步恢复其中国工厂的MacBook产品生产。
据供应链消息人士称,生产厂的订单拉动情况正在慢慢转强,这会提高MacBook Pro机型的供应量,并缩短苹果客户在过去几周所经历的延长交货时间。
仍有许多苹果笔记本用户在等待3月和4月订购的MacBook Pro机型到货,由于苹果的供应问题,他们的发货时间被大大推迟了。
据分析师郭明錤表示,广达是高端MacBook Pro的唯一供应商,自防疫封控依赖,MacBook Pro大部分型号交货时间增加了三到五周,
一些高端定制型号的MacBook Pro配置要到6月底到7月初才能交货。
尽管MacBook Pro的生产逐渐恢复,但供应问题预计依然影响2022年第三季度的产品销售。
苹果上周表示,防疫措施和元部件短缺将继续使其难以生产足够的产品来满足消费者的强劲需求,这最终将影响苹果6月份的收入。
'''
HanLP.extractive_summarization(text, topk=3)
def highlight(text, scores):
for k, v in scores.items():
text = text.replace(k, f'<span style="background-color:rgba(255, 255, 0, {v});">{k}</span>')
from IPython.display import display, HTML
display(HTML(text))
scores = HanLP.extractive_summarization(text, topk=100)
highlight(text, scores)
text = '''
華爾街日報周二(3日)報導,根據知情人透露,日前已宣布將以440億美元買下推特(Twitter)並下市的馬斯克,曾經跟一些潛在投資人說,他可以在短短幾年後,再將這家社群媒體公司重新上市。
消息來源說,特斯拉創辦人兼執行長馬斯克表示,他計劃在買下推特後最短三年內,就展開推特的首次公開發行股票。
馬斯克買推特的交易案預期在今年稍後走完程序,包括獲得股東同意以及監管機關核准等步驟。
根據之前華爾街日報的報導,馬斯克為購買推特籌現金時,與私募股權公司等投資人討論出資事宜,Apollo Global Management有興趣參與。
私募股權公司通常都先買下公司將之私有化,把公司移出眾人注目的焦點之外以後,整頓公司,接著再把公司上市,時間常是五年左右。
華爾街日報指出,馬斯克暗示他對推特有類似的規劃的話,有助說服潛在投資人,他會很快行動,改善推特的營運和獲利。
'''
scores = HanLP.extractive_summarization(text)
scores
highlight(text, scores)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Harris Catalog
Step2: GW Galaxy catalog with $d<30Mpc$ -> VGG
Step3: VGG
Step4: Compute Ngc for VGG
Step5: Estimate GC age based on 55 MWGCs
Step6: Multiple realization
Step7: bhlib
Step8: GClib
Step9: Build BHBdata and write to file
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import numpy as np
from datetime import datetime
import random
import pandas as pd
import os
from scipy.interpolate import interp1d
import statsmodels.api as sm
## load Harris catalog
GCpG=pd.read_csv("/Users/domi/Dropbox/Research/Local_universe/data/GCpG.csv") # read csv version data
# GCpG[GCpG.duplicated()]
GCpG.drop_duplicates(['Name'],inplace=True)
## calculate Sn
Sn=GCpG.Ngc*10**(0.4*(GCpG.VMag+15.0))
sn_fit=GCpG[['VMag']]
sn_fit['Sn']=pd.Series(Sn,index=sn_fit.index)
## LOWESS model to predict value
# introduce some floats in our x-values
x = sn_fit.VMag
y = sn_fit.Sn
# lowess will return our "smoothed" data with a y value for at every x-value
lowess = sm.nonparametric.lowess(y, x, frac=0.36)
# unpack the lowess smoothed points to their values
lowess_x = list(zip(*lowess))[0]
lowess_y = list(zip(*lowess))[1]
# run scipy's interpolation. There is also extrapolation I believe
f = interp1d(lowess_x, lowess_y, bounds_error=False)
## load Galaxy catalog
Galaxy=pd.read_csv('/Users/domi/Dropbox/Research/Local_universe/data/GWGCCatalog_IV.csv',
delim_whitespace=True)
## Convert data with ~
i=0
for column in Galaxy:
i+=1
if (i>4 and i<23):
Galaxy[column] = pd.to_numeric(Galaxy[column], errors='coerce') # convert_objects is long deprecated
VGG=Galaxy[Galaxy.Dist<30]
## add and convert new VMag for VGG
# VGG['VMag']=pd.Series(VGG.Abs_Mag_B + 4.83 - 5.48,index=VGG.index)
# extra=VGG.Abs_Mag_B.isnull()&VGG.Abs_Mag_I.notnull()
# VGG.VMag[extra]=VGG.Abs_Mag_I[extra] + 4.83 - 4.08
import seaborn as sns
# ## use cross checked galaxy for magnitude convertion
# ind_GCpG=GCpG.Name.isin(VGG.Name)
# ind_VGG=VGG.Name.isin(GCpG.Name)
# f_joint, (ax1, ax2, ax3) = subplots(3, sharex=True, sharey=True, figsize=(8,8))
# xlim([-25,-10])
# p_v=sns.kdeplot(GCpG.VMag[ind_GCpG].get_values(),shade=True,ax=ax1,label='VMag in Harris')
# x,y = p_v.get_lines()[0].get_data()
# v_max=x[y.argmax()]
# ax1.vlines(v_max, 0, y.max())
# p_b=sns.kdeplot(VGG.Abs_Mag_B[ind_VGG].get_values(),shade=True,ax=ax2,label='BMag in White')
# x,y = p_b.get_lines()[0].get_data()
# b_max=x[y.argmax()]
# ax2.vlines(b_max, 0, y.max())
# p_I=sns.kdeplot(VGG.Abs_Mag_I[ind_VGG].get_values(),shade=True,ax=ax3,label='IMag in White')
# x,y = p_I.get_lines()[0].get_data()
# I_max=x[y.argmax()]
# ax3.vlines(I_max, 0, y.max())
## add and convert new VMag for VGG: VMag = V_max + B/I - B/I_max
VGG['VMag']=pd.Series(VGG.Abs_Mag_B -20.515828946699045 + 19.627615047946385,index=VGG.index)
extra=VGG.Abs_Mag_B.isnull()&VGG.Abs_Mag_I.notnull()
VGG.VMag[extra]=VGG.Abs_Mag_I[extra] -20.515828946699045 + 21.008287606264027
## calculate Ngc based on Sn from f(VMag)
VGG['Ngc']=pd.Series((map(lambda x: 0 if isnan(f(x)) else int(f(x)/10**(0.4*(x+15))),VGG.VMag)),index=VGG.index)
# sum(VGG.Ngc)
GCage=pd.read_csv("/Users/domi/Dropbox/Research/Local_universe/data/55GC_age.csv")
def generate_rand_from_pdf(pdf, x_grid):
cdf = np.cumsum(pdf) # pdf from kde plot x128
cdf = cdf / cdf[-1] # normalization
values = np.random.rand(sum(VGG.Ngc)) # sample size Ngc
value_bins = np.searchsorted(cdf, values) # group Ngc into nearest 128 bins
random_from_cdf = x_grid[value_bins]
return random_from_cdf # return Ngc
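# The function above is inverse-transform sampling: draw u ~ U(0,1) and map it
# through the (normalized) empirical CDF of the KDE, so the samples follow the
# KDE's density.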
age_kde=sns.kdeplot(GCage.Age,kernel='gau',bw='silverman')
#x_grid = np.linspace(min(GCage.Age)-1, max(GCage.Age)+1, len(age_kde.get_lines()[0].get_data()[1]))
age_curve=age_kde.get_lines()[0].get_data()
f2=interp1d(age_curve[0],age_curve[1],kind='cubic')
age_grid=np.linspace(min(age_curve[0]), max(age_curve[0]), 10000)
## original method, age not well spread.
# x_grid = np.linspace(min(GCage.Age)-1, max(GCage.Age)+1, len(age_kde.get_lines()[0].get_data()[1]))
# # define how many GC need to estimate the age
# GCage_from_kde = generate_rand_from_pdf(age_kde.get_lines()[0].get_data()[1], x_grid)
# sns.distplot(GCage_from_kde)
## Age better spread
GCage_from_kde = generate_rand_from_pdf(f2(age_grid), age_grid)
# sns.distplot(GCage_from_kde)
# cut off at 13.5 Gyrs
GCage_from_kde[GCage_from_kde>=13.5]=np.random.choice(GCage_from_kde[GCage_from_kde<13.5],sum(GCage_from_kde>=13.5))
# numpy.savetxt("/Users/domi/Dropbox/Research/Local_universe/data/GCage.dat", GCage_from_kde, delimiter=",")
# GCage_from_kde=numpy.loadtxt("/Users/domi/Dropbox/Research/Local_universe/data/GCage.dat",delimiter=",")
## Load BHLIB
bhlib=pd.DataFrame()
###################################
for i in range(1,325): # model
for j in range(1,11): # model id
###################################
bhe=pd.read_csv('/Users/domi/Dropbox/Research/Local_universe/data/BHsystem/%d-%d-bhe.dat' %(i,j),
usecols=[0, 2, 3, 4, 6, 8, 10, 20], names=['T_eject','Type1','Type2','M1','M2','Seperation','Ecc','Model'],
header=None, delim_whitespace=True)
bhe.Model='%d-%d' %(i,j)
bhlib=pd.concat([bhlib,bhe],ignore_index=False)
# BHB binary
bhblib=bhlib[bhlib.Type1==14*(bhlib.Type2==14)].copy(deep=True)
bhsys=bhlib[-(bhlib.Type1==14*(bhlib.Type2==14))].copy(deep=True)
bhblib=bhblib.drop(bhblib.columns[[1, 2]], axis=1)
bhblib.to_csv('/Users/domi/Dropbox/Research/Local_universe/data/bhblib.dat',
index=True, sep=' ', header=True, float_format='%2.6f')
# bhblib=pd.read_csv('/Users/domi/Dropbox/Research/Local_universe/data/bhblib.dat',
# sep=' ', index_col=0)
print(shape(bhsys),shape(bhblib),shape(bhlib))
# # index of the Galaxy with GC
# ind_GC=VGG.index[VGG.Ngc>0]
# # initialize the GClib
# GClib=pd.DataFrame()
# for row in ind_GC:
# GClib=GClib.append([VGG.ix[[row],['RA','Dec','Dist','VMag']]]*VGG.Ngc[row]) # creat GCs in each Galaxy
# GClib=GClib.reset_index()
# GClib=GClib.rename(columns = {'index':'Galaxy'})
# # write to file
# GClib.to_csv('/Users/domi/Dropbox/Research/Local_universe/data/GClib.dat',
# index=False,sep=' ', header=False, float_format='%2.6f')
GClib=pd.DataFrame()
GClib=pd.read_csv('/Users/domi/Dropbox/Research/Local_universe/data/GClib.dat',
sep=' ',header=None, names=['Galaxy','RA','Dec','Dist','VMag','Model','Age'])
# assign model to GClib
GClib['Model']=GClib['RA'].apply(lambda x: str(randint(1,325))+'-'+str(randint(1,11)))
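# Each GC draws a random model id 'i-j' on the same 1..324 x 1..10 grid that
# the bhlib files were generated on (numpy.random.randint excludes the upper
# bound, matching range(1,325) and range(1,11) in the loader above).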
# assign Age to GClib
GClib['Age']=np.random.choice(GCage_from_kde,size(GCage_from_kde))
# check that pd.merge assigns the right BH info from bhlib to GClib based on Model
display(GClib.ix[[133],GClib.columns],pd.merge(GClib.ix[[133],GClib.columns],bhblib,on='Model'),bhblib[bhblib.Model=='64-9'])
# with pd.option_context('display.max_columns', None):
# display(bhlib.sample(5),VGG.sample(5))
# ## Slow
# BHBdata=pd.merge(GClib,bhblib,on='Model')
# BHBdata.to_csv('/Users/domi/Dropbox/Research/Local_universe/data/BHBdata.dat.gz',
# index=False,sep=' ', header=False, float_format='%2.6f',compression='gzip')
from importlib import reload # needed on Python 3, where reload is no longer a builtin
from gcpg import build
reload(build)
build.run(1)
from astropy.table import Table
help(Table.read)
from astropy.time import Time
Time('2015-11-2').gps
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1. Environment setup
Step2: You may need to restart the kernel at this point.
Step3: Modify the PATH environment variable so that skaffold is available
Step4: Step 2. Copy the predefined template to your project directory.
Step5: TFX includes the taxi template with the TFX python package.
Step6: Step 3. Browse your copied source files
Step7: Step 4. Create the artifact store bucket
|
<ASSISTANT_TASK:>
Python Code:
import os
ENDPOINT = '' # Enter your ENDPOINT here.
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GOOGLE_CLOUD_PROJECT=shell_output[0]
%env GOOGLE_CLOUD_PROJECT={GOOGLE_CLOUD_PROJECT}
# Docker image name for the pipeline image.
CUSTOM_TFX_IMAGE = 'gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline'
CUSTOM_TFX_IMAGE
%%bash
LOCAL_BIN="/home/jupyter/.local/bin"
SKAFFOLD_URI="https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64"
test -d $LOCAL_BIN || mkdir -p $LOCAL_BIN
which skaffold || (
curl -Lo skaffold $SKAFFOLD_URI &&
chmod +x skaffold &&
mv skaffold $LOCAL_BIN
)
!which skaffold
PIPELINE_NAME = 'guided_project_3' # Your pipeline name
PROJECT_DIR = os.path.join(os.path.expanduser("."), PIPELINE_NAME)
PROJECT_DIR
!tfx template copy \
--pipeline-name={PIPELINE_NAME} \
--destination-path={PROJECT_DIR} \
--model=taxi
%cd {PROJECT_DIR}
!python -m models.features_test
!python -m models.keras.model_test
GCS_BUCKET_NAME = GOOGLE_CLOUD_PROJECT + '-kubeflowpipelines-default'
GCS_BUCKET_NAME
!gsutil ls gs://{GCS_BUCKET_NAME} | grep {GCS_BUCKET_NAME} || gsutil mb gs://{GCS_BUCKET_NAME}
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoder Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step40: Batch and pad the source and target sequences
Step43: Train
Step45: Save Parameters
Step47: Checkpoint
Step50: Sentence to Sequence
Step52: Translate
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
eos = target_vocab_to_int['<EOS>']
source_sentences = [s for s in source_text.split('\n')]
target_sentences = [s for s in target_text.split('\n')]
#print(eos)
#print(source_sentences)
#print(target_sentences)
source_id_text = [[source_vocab_to_int[w] for w in s.split()] for s in source_sentences]
target_id_text = [[target_vocab_to_int[w] for w in s.split()] + [eos] for s in target_sentences]
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
input_data = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='target')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_probability = tf.placeholder(tf.float32, None, name='keep_prob')
target_squence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length')
max_target_sequence_length = tf.reduce_max(target_squence_length, name='max_target_len')
source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length')
return (input_data,
targets,
learning_rate,
keep_probability,
target_squence_length,
max_target_sequence_length,
source_sequence_length)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
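# Net effect: every target sequence is shifted right by one step -- the last
# token is dropped by the strided slice and a '<GO>' token is prepended as the
# first decoder input.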
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
def make_stacked_lstm_rnn_cell(rnn_size, num_layers, keep_prob):
def rnn_cell(rnn_size):
# Construct a stacked tf.contrib.rnn.LSTMCell
# __init__(num_units, use_peepholes=False, cell_clip=None, initializer=None,
# num_proj=None, proj_clip=None, num_unit_shards=None, num_proj_shards=None,
# forget_bias=1.0, state_is_tuple=True, activation=None, reuse=None)
lstm_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
# wrapped in a tf.contrib.rnn.DropoutWrapper
# __init__(cell, input_keep_prob=1.0, output_keep_prob=1.0, state_keep_prob=1.0,
# variational_recurrent=False, input_size=None, dtype=None, seed=None)
return tf.contrib.rnn.DropoutWrapper(lstm_cell, keep_prob, keep_prob, keep_prob)
# stacked_lstm = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(number_of_layers)])
stacked_lstm_cell = tf.contrib.rnn.MultiRNNCell([rnn_cell(rnn_size) for _ in range(num_layers)])
return stacked_lstm_cell
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
# Encoder embedding - Embed the encoder input using tf.contrib.layers.embed_sequence
# embed_sequence(ids, vocab_size=None, embed_dim=None, unique=False, initializer=None, regularizer=None, trainable=True, scope=None, reuse=None)
# Returns: Tensor of [batch_size, doc_length, embed_dim] with embedded sequences.'''
enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
# Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
stacked_cell = make_stacked_lstm_rnn_cell(rnn_size, num_layers, keep_prob)
# Pass cell and embedded input to tf.nn.dynamic_rnn()
# dynamic_rnn(cell, inputs, sequence_length=None, initial_state=None, dtype=None, parallel_iterations=None, swap_memory=False, time_major=False, scope=None)
# Returns: A pair (outputs, state)
return tf.nn.dynamic_rnn(stacked_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32)
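# dynamic_rnn returns (outputs, final_state); the final LSTM state summarizes
# the whole source sentence and is what the decoder is initialized with.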
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
# Create a tf.contrib.seq2seq.TrainingHelper
# __init__(inputs, sequence_length, time_major=False, name=None)
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
sequence_length=target_sequence_length,
time_major=False)
# Create a tf.contrib.seq2seq.BasicDecoder
# __init__(cell, helper, initial_state, output_layer=None)
training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer)
# Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
# dynamic_decode(decoder, output_time_major=False, impute_finished=False, maximum_iterations=None, parallel_iterations=32, swap_memory=False, scope=None)
# Returns: (final_outputs, final_state, final_sequence_lengths)
training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True,
maximum_iterations=max_summary_length)
return training_decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
# Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
# __init__(embedding, start_tokens, end_token)
# embedding: A callable that takes a vector tensor of ids (argmax ids), or the params argument for embedding_lookup. The returned tensor will be passed to the decoder input.
# start_tokens: int32 vector shaped [batch_size], the start tokens.
# end_token: int32 scalar, the token that marks end of decoding.
start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens')
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id)
# Create a tf.contrib.seq2seq.BasicDecoder
# __init__(cell, helper, initial_state, output_layer=None)
# cell: An RNNCell instance.
# helper: A Helper instance.
# initial_state: A (possibly nested tuple of...) tensors and TensorArrays. The initial state of the RNNCell.
# output_layer: (Optional) An instance of tf.layers.Layer, i.e., tf.layers.Dense. Optional layer to apply to the RNN output prior to storing the result or sampling.
inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer)
# Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
# dynamic_decode(decoder, output_time_major=False, impute_finished=False, maximum_iterations=None, parallel_iterations=32, swap_memory=False, scope=None)
# Returns: (final_outputs, final_state, final_sequence_lengths)
decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
impute_finished=True,
maximum_iterations=max_target_sequence_length)
return decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
# Construct the decoder LSTM cell (just like you constructed the encoder cell above)
def decode_stacked_lstm_rnn_cell(rnn_size, num_layers, keep_prob=1.0):  # NOTE: keep_prob is currently unused; the dropout wrapper below is commented out
def rnn_cell(rnn_size):
# Construct a stacked tf.contrib.rnn.LSTMCell
# __init__(num_units, use_peepholes=False, cell_clip=None, initializer=None,
# num_proj=None, proj_clip=None, num_unit_shards=None, num_proj_shards=None,
# forget_bias=1.0, state_is_tuple=True, activation=None, reuse=None)
lstm_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
# wrapped in a tf.contrib.rnn.DropoutWrapper
# __init__(cell, input_keep_prob=1.0, output_keep_prob=1.0, state_keep_prob=1.0,
# variational_recurrent=False, input_size=None, dtype=None, seed=None)
# return tf.contrib.rnn.DropoutWrapper(lstm_cell, keep_prob, keep_prob, keep_prob)
return lstm_cell
# stacked_lstm = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(number_of_layers)])
decoded_lstm_cell = tf.contrib.rnn.MultiRNNCell([rnn_cell(rnn_size) for _ in range(num_layers)])
return decoded_lstm_cell
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# Embed the target sequences
decoder_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
decoder_embeddings_input = tf.nn.embedding_lookup(decoder_embeddings, dec_input)
# Construct the decoder LSTM cell (just like you constructed the encoder cell above)
decoded_cell = decode_stacked_lstm_rnn_cell(rnn_size, num_layers)
# Create an output layer to map the outputs of the decoder to the elements of our vocabulary
output_layer = Dense(target_vocab_size,
kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))
# Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length,
# max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
with tf.variable_scope("decoding") as training_scope:
training_logits = decoding_layer_train(encoder_state, decoded_cell, decoder_embeddings_input,
target_sequence_length, max_target_sequence_length,
output_layer, keep_prob)
# Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
# end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)
# function to get the inference logits.
start_of_sequence_id = target_vocab_to_int['<GO>']
end_of_sequence_id = target_vocab_to_int['<EOS>']
with tf.variable_scope("decoding", reuse=True) as inference_scope:
inference_logits = decoding_layer_infer(encoder_state, decoded_cell, decoder_embeddings,
start_of_sequence_id, end_of_sequence_id,
max_target_sequence_length, target_vocab_size,
output_layer, batch_size, keep_prob)
return training_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
# source_sequence_length, source_vocab_size, encoding_embedding_size).
# Returns: A pair (outputs, state)
enc_output, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size, enc_embedding_size)
# Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
# Returns: Preprocessed target data
# ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
# return = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
# Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length,
# max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size,
# batch_size, keep_prob, dec_embedding_size) function.
# Returns: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
training_output, inference_output = decoding_layer(dec_input, enc_state, target_sequence_length,
max_target_sentence_length, rnn_size, num_layers,
target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, dec_embedding_size)
return training_output, inference_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 134
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 269
decoding_embedding_size = 269
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.8
display_step = 25
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
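# A quick, hand-checkable example of the padding logic above (hypothetical
# toy arrays, not project data): the shorter array is zero-padded (the
# <PAD> id) before the element-wise comparison, so 2 of 3 positions match.
_t = np.array([[1, 2, 3]])
_l = np.array([[1, 2]])
print(get_accuracy(_t, _l))  # -> 0.666...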
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
loss_list = []
valid_acc_list = []
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
loss_list.append(loss)
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
valid_acc_list.append(valid_acc)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
# Visualize the loss and accuracy
import matplotlib.pyplot as plt
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(18, 6))
ax1.plot(loss_list, color='red')
ax1.set_title('Training Loss')
ax1.set_ylabel('Loss value')
ax2.plot(valid_acc_list)
ax2.set_xlabel('Iterations')
ax2.set_ylabel('Accuracy')
ax2.set_title('Validation Accuracy')
plt.show()
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# Convert the sentence to lowercase
slower = sentence.lower()
# Convert words into ids using vocab_to_int
word_ids = []
for s in slower.split():
# Convert words not in the vocabulary, to the <UNK> word id.
if s not in vocab_to_int:
s = '<UNK>'
word_ids.append(vocab_to_int[s])
return word_ids
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
#translate_sentence = 'he saw a old yellow truck .' # il a vu un vieux camion jaune = He saw an old yellow truck
#translate_sentence = 'what a beautiful day' # californie est beau au mois de hiver = California is beautiful in winter
#translate_sentence = 'what time is it' # chine est sec en septembre , et il = China is dry in September, and
#translate_sentence = 'lets go for a ride' # elle est au volant d' une petite voiture rouge = She is driving a small red car
#translate_sentence = 'lets watch a movie' # elle aime une une voiture rouge = She likes a red car
translate_sentence = 'have a great day' # elle est généralement beau en californie = She is generally beautiful in california
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This example script loops indefinitely, pausing for a few seconds each time. When a signal comes in, the sleep() call is interrupted and the signal handler receive_signal prints the signal number. After the signal handler returns, the loop continues.
Step2: Retrieving Registered Handlers
Step3: Sending Signals
Step4: Ignoring Signals
Step5: The signal handlers were all registered in the main thread because this is a requirement of the signal module implementation for Python, regardless of underlying platform support for mixing threads and signals. Although the receiver thread calls signal.pause(), it does not receive the signal. The signal.alarm(2) call near the end of the example prevents an infinite block, since the receiver thread will never exit.
|
<ASSISTANT_TASK:>
Python Code:
# %load signal_signal.py
import signal
import os
import time
def receive_signal(signum, stack):
print('Received:', signum)
# Register signal handlers
signal.signal(signal.SIGUSR1, receive_signal)
signal.signal(signal.SIGUSR2, receive_signal)
# Print the process ID so it can be used with 'kill'
# to send this program signals.
print('My PID is:', os.getpid())
while True:
print('Waiting...')
time.sleep(3)
!python signal_signal.py
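# To exercise the handlers from another shell, send signals to the PID the
# script prints (the PID value is whatever your system assigns):
#   kill -USR1 <pid>
#   kill -USR2 <pid>
# Each delivery interrupts time.sleep() and prints 'Received: <signum>'.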
import signal
def alarm_received(n, stack):
return
signal.signal(signal.SIGALRM, alarm_received)
signals_to_names = {
getattr(signal, n): n
for n in dir(signal)
if n.startswith('SIG') and '_' not in n
}
for s, name in sorted(signals_to_names.items()):
handler = signal.getsignal(s)
if handler is signal.SIG_DFL:
handler = 'SIG_DFL'
elif handler is signal.SIG_IGN:
handler = 'SIG_IGN'
print('{:<10} ({:2d}):'.format(name, s), handler)
import signal
import time
def receive_alarm(signum, stack):
print('Alarm :', time.ctime())
# Call receive_alarm in 2 seconds
signal.signal(signal.SIGALRM, receive_alarm)
signal.alarm(2)
print('Before:', time.ctime())
time.sleep(4)
print('After :', time.ctime())
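# A minimal sketch for the "Ignoring Signals" step, based on the standard
# SIG_IGN pattern: SIGINT (Ctrl-C) is ignored, so the only way to stop the
# process is SIGUSR1, whose handler raises SystemExit. Run this as a
# script; signal.pause() blocks until a signal arrives.
import signal
import os

def do_exit(sig, stack):
    raise SystemExit('Exiting')

signal.signal(signal.SIGINT, signal.SIG_IGN)
signal.signal(signal.SIGUSR1, do_exit)

print('My PID is:', os.getpid())
signal.pause()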
import signal
import threading
import os
import time
def signal_handler(num, stack):
print('Received signal {} in {}'.format(
num, threading.currentThread().name))
signal.signal(signal.SIGUSR1, signal_handler)
def wait_for_signal():
print('Waiting for signal in',
threading.currentThread().name)
signal.pause()
print('Done waiting')
# Start a thread that will not receive the signal
receiver = threading.Thread(
target=wait_for_signal,
name='receiver',
)
receiver.start()
time.sleep(0.1)
def send_signal():
print('Sending signal in', threading.currentThread().name)
os.kill(os.getpid(), signal.SIGUSR1)
sender = threading.Thread(target=send_signal, name='sender')
sender.start()
sender.join()
# Wait for the thread to see the signal (not going to happen!)
print('Waiting for', receiver.name)
signal.alarm(2)
receiver.join()
import signal
import time
import threading
def signal_handler(num, stack):
print(time.ctime(), 'Alarm in',
threading.currentThread().name)
signal.signal(signal.SIGALRM, signal_handler)
def use_alarm():
t_name = threading.currentThread().name
print(time.ctime(), 'Setting alarm in', t_name)
signal.alarm(1)
print(time.ctime(), 'Sleeping in', t_name)
time.sleep(3)
print(time.ctime(), 'Done with sleep in', t_name)
# Start a thread that will not receive the signal
alarm_thread = threading.Thread(
target=use_alarm,
name='alarm_thread',
)
alarm_thread.start()
time.sleep(0.1)
# Wait for the thread to see the signal (not going to happen!)
print(time.ctime(), 'Waiting for', alarm_thread.name)
alarm_thread.join()
print(time.ctime(), 'Exiting normally')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: $z = (x + 2)^2 * 3$
|
<ASSISTANT_TASK:>
Python Code:
import torch
from torch.autograd import Variable  # old-style autograd API used below

x = Variable(torch.ones(2,2), requires_grad=True) # requires_grad: calculate gradients
print(x)
print(x.data)
print(x.grad)
y = x + 2
print(y)
z = y * y * 3
out = z.sum()
print(z, out)
out.backward() # backpropagation
print(x.grad)
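# Hand-check of the gradient above: out = sum(3 * (x + 2)^2), so
# d(out)/dx = 6 * (x + 2), i.e. 18 for every entry when x is all ones.
print(6 * (x.data + 2))  # should match x.grad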
Q = torch.eye(3)
Q
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: The Euro problem
Step3: We can make a uniform prior and update it with 140 heads and 110 tails
Step4: And here's what the posterior looks like.
Step5: We can summarize the posterior several ways, including the mean
Step6: Median
Step7: The peak of the posterior, known as the Maximum A Posteriori Probability (MAP)
Step8: And a 90% credible interval
Step9: We can look up a particular value in the posterior PMF, but the result doesn't mean much, because we could have divided the range (0-100) into as many pieces as we like, and the result would be different.
Step11: Different priors
Step13: And another that makes a triangular prior.
Step14: Here's what they look like
Step16: If we update them both with the same data
Step17: The results are almost identical; the remaining difference is unlikely to matter in practice.
Step20: The binomial likelihood function
Step21: I left out the binomial coefficient ${n}\choose{k}$ because it does not depend on p, so it's the same for all hypotheses.
Step22: Here's what the posterior looks like.
Step34: The Beta distribution
Step35: Here's how we use it.
Step36: And here's the posterior.
Step37: Amazing, no?
Step38: Exercise
Step39: Now, we will estimate the probability that Rhode is actually a better shooter than Wei. To do this, we will make a cumulative distribution for Rhode and Wei's beta distributions.
Step40: With a relatively small number of steps in the cdf (500) and small number of samples (1000), we see that Rhode is not that likely to be a better shooter than Wei. We assign a ~54% chance that Rhode is a better shooter than Wei.
Step41: Exercise Suppose that instead of observing coin tosses directly, you measure the outcome using an instrument that is not always correct. Specifically, suppose there is a probability y that an actual heads is reported as tails, or actual tails reported as heads.
Step42: Exercise This exercise is inspired by a question posted by a “redditor” named dominosci on Reddit’s statistics “subreddit” at http
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
import numpy as np
import thinkbayes2
from thinkbayes2 import Pmf, Cdf, Suite, Beta
import thinkplot
% matplotlib inline
class Euro(Suite):
def Likelihood(self, data, hypo):
Computes the likelihood of `data` given `hypo`.
data: string 'H' or 'T'
hypo: probability of heads, 0-100
returns: float
x = hypo
if data == 'H':
return x/100
else:
return 1 - x/100
suite = Euro(range(0, 101))
dataset = 'H' * 140 + 'T' * 110
for data in dataset:
suite.Update(data)
thinkplot.Pdf(suite)
suite.Mean()
suite.Percentile(50)
suite.MAP()
suite.CredibleInterval(90)
suite.Prob(50)
def UniformPrior(label='uniform'):
Makes a Suite with a uniform prior.
suite = Euro(range(0, 101), label=label)
return suite
def TrianglePrior(label='triangle'):
Makes a Suite with a triangle prior.
suite = Euro(label=label)
for x in range(0, 51):
suite[x] = x
for x in range(51, 101):
suite[x] = 100-x
suite.Normalize()
return suite
triangle = TrianglePrior()
uniform = UniformPrior()
suites = [triangle, uniform]
thinkplot.Pdfs(suites)
thinkplot.Config(xlabel='x', ylabel='Probability')
def RunUpdate(suite, heads=140, tails=110):
Updates the Suite with the given number of heads and tails.
suite: Suite object
heads: int
tails: int
dataset = 'H' * heads + 'T' * tails
for data in dataset:
suite.Update(data)
for suite in suites:
RunUpdate(suite)
thinkplot.Pdfs(suites)
thinkplot.Config(xlabel='x', ylabel='Probability')
class Euro2(thinkbayes2.Suite):
Represents hypotheses about the probability of heads.
def Likelihood(self, data, hypo):
Computes the likelihood of the data under the hypothesis.
hypo: integer value of x, the probability of heads (0-100)
data: tuple of (number of heads, number of tails)
x = hypo / 100.0
heads, tails = data
like = x**heads * (1-x)**tails
return like
suite = Euro2(range(0, 101))
dataset = 140, 110
suite.Update(dataset)
thinkplot.Pdf(suite)
from scipy import special
class Beta:
Represents a Beta distribution.
See http://en.wikipedia.org/wiki/Beta_distribution
def __init__(self, alpha=1, beta=1, label=None):
Initializes a Beta distribution.
self.alpha = alpha
self.beta = beta
self.label = label if label is not None else '_nolegend_'
def Update(self, data):
Updates a Beta distribution.
data: pair of int (heads, tails)
heads, tails = data
self.alpha += heads
self.beta += tails
def Mean(self):
Computes the mean of this distribution.
return self.alpha / (self.alpha + self.beta)
def MAP(self):
Computes the value with maximum a posteori probability.
a = self.alpha - 1
b = self.beta - 1
return a / (a + b)
def Random(self):
Generates a random variate from this distribution.
return random.betavariate(self.alpha, self.beta)
def Sample(self, n):
Generates a random sample from this distribution.
n: int sample size
size = n,
return np.random.beta(self.alpha, self.beta, size)
def EvalPdf(self, x):
Evaluates the PDF at x.
return x ** (self.alpha - 1) * (1 - x) ** (self.beta - 1)
def MakePmf(self, steps=101, label=None):
Returns a Pmf of this distribution.
Note: Normally, we just evaluate the PDF at a sequence
of points and treat the probability density as a probability
mass.
But if alpha or beta is less than one, we have to be
more careful because the PDF goes to infinity at x=0
and x=1. In that case we evaluate the CDF and compute
differences.
The result is a little funny, because the values at 0 and 1
are not symmetric. Nevertheless, it is a reasonable discrete
model of the continuous distribution, and behaves well as
the number of values increases.
if label is None and self.label is not None:
label = self.label
if self.alpha < 1 or self.beta < 1:
cdf = self.MakeCdf()
pmf = cdf.MakePmf()
return pmf
xs = [i / (steps - 1) for i in range(steps)]
probs = [self.EvalPdf(x) for x in xs]
pmf = Pmf(dict(zip(xs, probs)), label=label)
return pmf
def MakeCdf(self, steps=101):
Returns the CDF of this distribution.
xs = [i / (steps - 1) for i in range(steps)]
ps = special.betainc(self.alpha, self.beta, xs)
cdf = Cdf(xs, ps)
return cdf
def Percentile(self, ps):
Returns the given percentiles from this distribution.
ps: scalar, array, or list of [0-100]
ps = np.asarray(ps) / 100
xs = special.betaincinv(self.alpha, self.beta, ps)
return xs
beta = Beta()
beta.Update((140, 110))
beta.Mean()
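# Consistency check: a Beta(1, 1) prior updated with (140, 110) is
# Beta(141, 111), whose analytic mean is 141 / 252 -- this should agree
# with the grid-based suite.Mean() / 100 computed earlier (~0.56).
141 / 252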
thinkplot.Pdf(beta.MakePmf())
beta2 = Beta(200,200)
thinkplot.Pdf(beta2.MakePmf())
beta2.Update((140, 110))
thinkplot.Pdf(beta2.MakePmf())
beta2.Mean()
# Solution goes here
# Solution goes here
# Solution goes here
betaRhode = Beta(4,2)
betaWei = Beta(4,2)
thinkplot.Pdf(betaRhode.MakePmf())
betaRhode.Update((15+1+2+2, 10+1+0))
betaWei.Update((15+1+2+1, 10+1+1))
thinkplot.Pdf(betaRhode.MakePmf())
thinkplot.Pdf(betaWei.MakePmf())
rhodeSample = betaRhode.MakeCdf(500).Sample(1000)
weiSample = betaWei.MakeCdf(500).Sample(1000)
np.mean(rhodeSample > weiSample)
rhodeRematch = np.random.binomial(25, rhodeSample)
print(rhodeRematch)
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A typical call to the Gaussian Mixture Model algorithm looks like this
Step2: Let's start with the 1-D example given in Ivezic, Figure 6.8, which compares a Mixture Model to KDE.
Step3: Hmm, that doesn't look so great for the 5000 point distribution. Plot the BIC values and see if anything looks awry.
Step4: That said, I'd say that there are too many components here. So, I'd be inclined to explore this a bit further if it were my data.
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo("B36fzChfyGU")
# Execute this cell
import numpy as np
from sklearn.mixture import GMM
X = np.random.normal(size=(1000,2)) #1000 points in 2D
gmm = GMM(3) #three components
gmm.fit(X)
log_dens = gmm.score(X)
BIC = gmm.bic(X)
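# A minimal model-selection sketch with the same (old-style sklearn) GMM
# API, reusing X from above: fit a few candidate models and keep the one
# that minimizes the BIC.
n_candidates = np.arange(1, 6)
candidates = [GMM(n).fit(X) for n in n_candidates]
best = candidates[np.argmin([m.bic(X) for m in candidates])]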
# Execute this cell
# Ivezic, Figure 6.8
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from scipy import stats
from astroML.plotting import hist
from sklearn.mixture import GMM
from sklearn.neighbors import KernelDensity
#------------------------------------------------------------
# Generate our data: a mix of several Cauchy distributions
# this is the same data used in the Bayesian Blocks figure
np.random.seed(0)
N = 10000
mu_gamma_f = [(5, 1.0, 0.1),
(7, 0.5, 0.5),
(9, 0.1, 0.1),
(12, 0.5, 0.2),
(14, 1.0, 0.1)]
true_pdf = lambda x: sum([f * stats.cauchy(mu, gamma).pdf(x)
for (mu, gamma, f) in mu_gamma_f])
x = np.concatenate([stats.cauchy(mu, gamma).rvs(int(f * N))
for (mu, gamma, f) in mu_gamma_f])
np.random.shuffle(x)
x = x[x > -10]
x = x[x < 30]
#------------------------------------------------------------
# plot the results
fig = plt.figure(figsize=(10, 10))
fig.subplots_adjust(bottom=0.08, top=0.95, right=0.95, hspace=0.1)
N_values = (500, 5000)
subplots = (211, 212)
k_values = (10, 100)
for N, k, subplot in zip(N_values, k_values, subplots):
ax = fig.add_subplot(subplot)
xN = x[:N]
t = np.linspace(-10, 30, 1000)
kde = KernelDensity(0.1, kernel='gaussian')
kde.fit(xN[:, None])
dens_kde = np.exp(kde.score_samples(t[:, None]))
# Compute density via Gaussian Mixtures
# we'll try several numbers of clusters
n_components = np.arange(3, 16)
gmms = [GMM(n_components=n).fit(xN[:,None]) for n in n_components]
BICs = [gmm.bic(xN[:,None]) for gmm in gmms]
i_min = np.argmin(BICs)
t = np.linspace(-10, 30, 1000)
logprob, responsibilities = gmms[i_min].score_samples(t[:,None])
# plot the results
ax.plot(t, true_pdf(t), ':', color='black', zorder=3,
label="Generating Distribution")
ax.plot(xN, -0.005 * np.ones(len(xN)), '|k', lw=1.5)
ax.plot(t, np.exp(logprob), '-', color='gray',
label="Mixture Model\n(%i components)" % n_components[i_min])
ax.plot(t, dens_kde, '-', color='black', zorder=3,
label="Kernel Density $(h=0.1)$")
# label the plot
ax.text(0.02, 0.95, "%i points" % N, ha='left', va='top',
transform=ax.transAxes)
ax.set_ylabel('$p(x)$')
ax.legend(loc='upper right')
if subplot == 212:
ax.set_xlabel('$x$')
ax.set_xlim(0, 20)
ax.set_ylim(-0.01, 0.4001)
plt.show()
# Execute this cell
# Ivezic, Figure 6.6
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from scipy.stats import norm
from sklearn.mixture import GMM
from astroML.datasets import fetch_sdss_sspp
from astroML.decorators import pickle_results
from astroML.plotting.tools import draw_ellipse
#------------------------------------------------------------
# Get the Segue Stellar Parameters Pipeline data
data = fetch_sdss_sspp(cleaned=True)
# Note how X was created from two columns of data
X = np.vstack([data['FeH'], data['alphFe']]).T
# truncate dataset for speed
X = X[::5]
#------------------------------------------------------------
# Compute GMM models & AIC/BIC
N = np.arange(1, 14)
#@pickle_results("GMM_metallicity.pkl")
def compute_GMM(N, covariance_type='full', n_iter=1000):
models = [None for n in N]
for i in range(len(N)):
#print N[i]
models[i] = GMM(n_components=N[i], n_iter=n_iter, covariance_type=covariance_type)
models[i].fit(X)
return models
models = compute_GMM(N)
AIC = [m.aic(X) for m in models]
BIC = [m.bic(X) for m in models]
i_best = np.argmin(BIC)
gmm_best = models[i_best]
print "best fit converged:", gmm_best.converged_
print "BIC: n_components = %i" % N[i_best]
#------------------------------------------------------------
# compute 2D density
FeH_bins = 51
alphFe_bins = 51
H, FeH_bins, alphFe_bins = np.histogram2d(data['FeH'], data['alphFe'], (FeH_bins, alphFe_bins))
Xgrid = np.array(list(map(np.ravel,
                          np.meshgrid(0.5 * (FeH_bins[:-1] + FeH_bins[1:]),
                                      0.5 * (alphFe_bins[:-1] + alphFe_bins[1:]))))).T
log_dens = gmm_best.score(Xgrid).reshape((51, 51))
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(12, 5))
fig.subplots_adjust(wspace=0.45, bottom=0.25, top=0.9, left=0.1, right=0.97)
# plot data
ax = fig.add_subplot(141)
ax.scatter(data['FeH'][::10],data['alphFe'][::10],marker=".",color='k',edgecolors='None')
ax.set_xlabel(r'$\rm [Fe/H]$')
ax.set_ylabel(r'$\rm [\alpha/Fe]$')
ax.xaxis.set_major_locator(plt.MultipleLocator(0.3))
ax.set_xlim(-1.101, 0.101)
ax.text(0.93, 0.93, "Input",
va='top', ha='right', transform=ax.transAxes)
# plot density
ax = fig.add_subplot(142)
ax.imshow(H.T, origin='lower', interpolation='nearest', aspect='auto',
extent=[FeH_bins[0], FeH_bins[-1],
alphFe_bins[0], alphFe_bins[-1]],
cmap=plt.cm.binary)
ax.set_xlabel(r'$\rm [Fe/H]$')
ax.set_ylabel(r'$\rm [\alpha/Fe]$')
ax.xaxis.set_major_locator(plt.MultipleLocator(0.3))
ax.set_xlim(-1.101, 0.101)
ax.text(0.93, 0.93, "Density",
va='top', ha='right', transform=ax.transAxes)
# plot AIC/BIC
ax = fig.add_subplot(143)
ax.plot(N, AIC, '-k', label='AIC')
ax.plot(N, BIC, ':k', label='BIC')
ax.legend(loc=1)
ax.set_xlabel('N components')
plt.setp(ax.get_yticklabels(), fontsize=7)
# plot best configurations for AIC and BIC
ax = fig.add_subplot(144)
ax.imshow(np.exp(log_dens),
origin='lower', interpolation='nearest', aspect='auto',
extent=[FeH_bins[0], FeH_bins[-1],
alphFe_bins[0], alphFe_bins[-1]],
cmap=plt.cm.binary)
ax.scatter(gmm_best.means_[:, 0], gmm_best.means_[:, 1], c='w')
for mu, C, w in zip(gmm_best.means_, gmm_best.covars_, gmm_best.weights_):
draw_ellipse(mu, C, scales=[1], ax=ax, fc='none', ec='k')
ax.text(0.93, 0.93, "Converged",
va='top', ha='right', transform=ax.transAxes)
ax.set_xlim(-1.101, 0.101)
ax.set_ylim(alphFe_bins[0], alphFe_bins[-1])
ax.xaxis.set_major_locator(plt.MultipleLocator(0.3))
ax.set_xlabel(r'$\rm [Fe/H]$')
ax.set_ylabel(r'$\rm [\alpha/Fe]$')
plt.show()
# Execute this cell
# Ivezic, Figure 6.7
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
from sklearn.mixture import GMM
from astroML.datasets import fetch_great_wall
from astroML.decorators import pickle_results
#------------------------------------------------------------
# load great wall data
X = fetch_great_wall()
#------------------------------------------------------------
# Create a function which will save the results to a pickle file
# for large number of clusters, computation will take a long time!
#@pickle_results('great_wall_GMM.pkl')
def compute_GMM(n_clusters, n_iter=1000, min_covar=3, covariance_type='full'):
clf = GMM(n_clusters, covariance_type=covariance_type,
n_iter=n_iter, min_covar=min_covar)
clf.fit(X)
print "converged:", clf.converged_
return clf
#------------------------------------------------------------
# Compute a grid on which to evaluate the result
Nx = 100
Ny = 250
xmin, xmax = (-375, -175)
ymin, ymax = (-300, 200)
Xgrid = np.vstack(list(map(np.ravel,
                           np.meshgrid(np.linspace(xmin, xmax, Nx),
                                       np.linspace(ymin, ymax, Ny))))).T
#------------------------------------------------------------
# Compute the results
#
# we'll use 100 clusters. In practice, one should cross-validate
# with AIC and BIC to settle on the correct number of clusters.
clf = compute_GMM(n_clusters=100)
log_dens = clf.score(Xgrid).reshape(Ny, Nx)
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(10, 5))
fig.subplots_adjust(hspace=0, left=0.08, right=0.95, bottom=0.13, top=0.9)
ax = fig.add_subplot(211, aspect='equal')
ax.scatter(X[:, 1], X[:, 0], s=1, lw=0, c='k')
ax.set_xlim(ymin, ymax)
ax.set_ylim(xmin, xmax)
ax.xaxis.set_major_formatter(plt.NullFormatter())
plt.ylabel(r'$x\ {\rm (Mpc)}$')
ax = fig.add_subplot(212, aspect='equal')
ax.imshow(np.exp(log_dens.T), origin='lower', cmap=plt.cm.binary,
extent=[ymin, ymax, xmin, xmax])
ax.set_xlabel(r'$y\ {\rm (Mpc)}$')
ax.set_ylabel(r'$x\ {\rm (Mpc)}$')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sorted by total integration time across the three bands (3+6+7)
Step2: The correlation is seen in all plots; however, the last plot has better variance (I think).
Step3: See the details of each object in report_8_nonAlmacal.txt
Step4: 3C454.3 and 3C279 are among them
Step5: Make a report only on these objects
|
<ASSISTANT_TASK:>
Python Code:
resume_sorted3 = sorted(resume, key=lambda data: data[11])
resume_sorted6 = sorted(resume, key=lambda data: data[12])
resume_sorted7 = sorted(resume, key=lambda data: data[13])
res3 = np.array(resume_sorted3)
res6 = np.array(resume_sorted6)
res7 = np.array(resume_sorted7)
plt.figure(figsize=(15,4))
plt.subplot(131)
plt.plot(np.linspace(1,134,134), np.sqrt(res3[:,11].astype(float)), 'r-', np.linspace(1,134,134), np.sqrt(res3[:,12].astype(float)), 'g-', np.linspace(1,134,134), np.sqrt(res3[:,13].astype(float)), 'b-')
plt.ylabel(r"$\sqrt{t}$")
plt.title("Sorted on B3")
plt.subplot(132)
plt.plot(np.linspace(1,134,134), np.sqrt(res6[:,11].astype(float)), 'r-', np.linspace(1,134,134), np.sqrt(res6[:,12].astype(float)), 'g-', np.linspace(1,134,134), np.sqrt(res6[:,13].astype(float)), 'b-')
plt.ylabel(r"$\sqrt{t}$")
plt.title("Sorted on B6")
plt.subplot(133)
plt.plot(np.linspace(1,134,134), np.sqrt(res7[:,11].astype(float)), 'r-', np.linspace(1,134,134), np.sqrt(res7[:,12].astype(float)), 'g-', np.linspace(1,134,134), np.sqrt(res7[:,13].astype(float)), 'b-')
plt.ylabel(r"$\sqrt{t}$")
plt.title("Sorted on B7")
resume_sorted = sorted(resume, key=lambda data: data[11]+data[12]+data[13])
res = np.array(resume_sorted)
plt.figure(figsize=(10,5))
plt.plot(np.linspace(1,134,134), np.sqrt(res[:,11].astype(float)), 'r-', np.linspace(1,134,134), np.sqrt(res[:,12].astype(float)), 'g-', np.linspace(1,134,134), np.sqrt(res[:,13].astype(float)), 'b-')
plt.ylabel(r"$\sqrt{t}$")
plt.axvline(x=114, c='black', ls=':')
plt.axvline(x=104, c='magenta', ls=':')
print("List of primary sample: ")
primary_name = res[:,0][-20:]
print(primary_name)
print("List of secondary sample: ")
secondary_name = res[:,0][-30:-20]
print(secondary_name)
file_listcal = "alma_sourcecat_searchresults_20180419.csv"
listcal = q.read_calibratorlist(file_listcal, fluxrange=[0.1, 999999])
for i, obj in enumerate(primary_name):
for j, cal in enumerate(listcal):
if obj == cal[0]: # same name
print(cal[3])
for i, obj in enumerate(secondary_name):
for j, cal in enumerate(listcal):
if obj == cal[0]: # same name
print(cal[3])
# List of nonalmacal sample (primary and secondary)
nonalmacal_priority = ['J0541-0211', 'J1733-3722', 'J1610-3958', 'J1743-0350', 'J2253+1608',
'J1851+0035', 'J0541-0541', 'J0601-7036', 'J1130-1449', 'J1305-4928',
'J0336+3218', 'J0006-0623', 'J1717-3342', 'J1833-210B', 'J0237+2848',
'J0750+1231', 'J1751+0939', 'J0948+0022', 'J1107-4449', 'J1256-0547',
'J0747-3310', 'J1516+1932', 'J0438+3004', 'J2134-0153', 'J2226+0052',
'J1830+0619', 'J0426+2327', 'J1626-2951', 'J1225+1253', 'J1058-8003']
report, resume = q.make_report_from_sqldb("calibrators_brighterthan_0.1Jy_20180419.db", nonalmacal_priority, \
maxFreqRes=999999999, array='12m', \
excludeCycle0=True, \
selectPol=False, \
silent=True)
len(resume)
q.write_report(report, "report_8_nonALMACAL_priority.txt")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Derive the Convolution !!
Step2: Train a convolutional neural net
Step3: Applied to image
|
<ASSISTANT_TASK:>
Python Code:
# Which is easily implemented in Python:
def _convolve(x, w, type='valid'):
# x and w are np vectors
conv = []
for i in range(len(x) - len(w) + 1):  # 'valid' mode: stop before the kernel runs past the end
if type == 'valid':
conv.append((x[i: i+len(w)] * w).sum())
return np.array(conv)
def convolve(X, w):
# Convolves a batch X to w
w = np.array(w)
X = np.array(X)
conv = []
for i in range(len(X)):
conv.append(_convolve(X[i], w))
return np.array(conv)
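# Hand-check of the helpers above: a 'valid' convolution of a length-4
# signal with a length-2 kernel yields 3 values.
print(convolve([[1, 2, 3, 4]], [1, 1]))  # -> [[3 5 7]]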
from utils import *
import utils
reload(utils)
from utils import *
(x_train, y_train), (x_test, y_test) = load_up_down(50)
plt.plot(x_train.T)
plt.show()
# Rename y_silver to X and y_gold to Y
X, Y = [x_train, ], y_train
# Initialize the parameters
Ws = [0.5, 0.5]
alphas = (0.01, 0.01)
# Load Trainer
t = Trainer(X, Y, Ws, alphas)
# Define Prediction and Loss
t.pred = lambda X : convolve(X[0], (t.Ws[0], t.Ws[1])).mean(axis=1)
t.loss = lambda : (np.power((t.Y - t.pred(t.X)), 2) * 1 / 2.).mean()
print(t.pred(X))
t.acc = lambda X, Y : t.pred(X)
# Define the gradient functions
dl_dp = lambda : -(t.Y - t.pred(X))
dl_dw0 = lambda : (t.X[0][:-1]).mean()
dl_dw1 = lambda : (t.X[0][1:]).mean()
t.dWs = (dl_dw0, dl_dw1)
# Start training
anim = t.animated_train(is_notebook=True)
from IPython.display import HTML
HTML(anim.to_html5_video())
t.loss()
from scipy import signal
# Load MNIST
(x_train, y_train), (x_test, y_test) = load_MNIST()
img = x_train[2]
# Design the kernels
kernels = [[[-1, 2, -1],[-1, 2, -1],[-1, 2, -1]],
[[-1, -1, -1],[2, 2, 2],[-1, -1, -1]],
[[2, -1, -1],[-1, 2, -1],[-1, -1, 2]],
[[-1, -1, 2],[-1, 2, -1],[2, -1, -1]], ]
# Plot and convolve them to the image
for i, k in enumerate(kernels):
i = i*2+1
plt.subplot(3,4,i)
plt.imshow(k, cmap='gray')
plt.subplot(3,4,i+1)
conv = signal.convolve2d(img, k)
plt.imshow(conv > 1.5, cmap='gray')
plt.subplot(349)
plt.imshow(img, cmap='gray')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Migrate building-geometry files
Step2: The zone.shp file is the same! (I repeated this procedure with site.shp and surroundings.shp)
Step3: OK. So. This is going to be interesting.
Step4: Clearly, this information goes into the typology.dbf file - we just need to figure out how to map the uses.
Step5: Migrate inputs/building-properties/air_conditioning_systems.dbf
|
<ASSISTANT_TASK:>
Python Code:
import os
import cea
import geopandas
import pandas as pd
from packaging import version
import cea.inputlocator
from cea.utilities.dbf import dbf_to_dataframe, dataframe_to_dbf
# Constants
SCENARIO_TO_MIGRATE = r"c:\Users\darthoma\Documents\CityEnergyAnalyst\projects\2.29.0\kleinalbis"
EXAMPLE_SCENARIO = r"c:\Users\darthoma\Documents\CityEnergyAnalyst\projects\working-with-databases\kleinalbis"
# the (new) database to use for migration - user should prepare this in advance...
DATABASE = r"c:\Users\darthoma\Documents\GitHub\CityEnergyAnalyst\cea\databases\CH"
CONSTRUCTION_STANDARD_XLSX = os.path.join(DATABASE, "archetypes", "CONSTRUCTION_STANDARD.xlsx")
# we can't use the input locator for the v2.29.0 scenario, but we _can_ (and should) use it for v2.31.1.
locator = cea.inputlocator.InputLocator(scenario=EXAMPLE_SCENARIO)
# let's first compare the two files (and practice reading/writing shape files)
old_zone_shp = geopandas.GeoDataFrame.from_file(
os.path.join(SCENARIO_TO_MIGRATE, "inputs", "building-geometry", "zone.shp"))
old_zone_shp
new_zone_shp = geopandas.GeoDataFrame.from_file(locator.get_zone_geometry())
new_zone_shp
old_age_dbf = dbf_to_dataframe(os.path.join(SCENARIO_TO_MIGRATE, "inputs", "building-properties", "age.dbf"))
old_age_dbf
# i think that should be migrated to typology.dbf
new_typology_dbf = dbf_to_dataframe(locator.get_building_typology())
new_typology_dbf
old_occupancy_dbf = dbf_to_dataframe(
os.path.join(SCENARIO_TO_MIGRATE, "inputs", "building-properties", "occupancy.dbf"))
# old_occupancy_dbf
# NOTE: this stuff is good for the migration script
def lookup_standard(year, standards_df):
# find first standard that is similar to the year
standard = standards_df[(standards_df.YEAR_START < year) & (year < standards_df.YEAR_END)].iloc[0]
return standard.STANDARD
def convert_occupancy(name, occupancy_dbf):
# FIXME: do this smarter
row = occupancy_dbf[occupancy_dbf.Name == name].iloc[0]
uses = set(row.to_dict().keys()) - set(["Name", "REFERENCE"])
uses = sorted(uses, key=lambda use: float(row[use]), reverse=True)  # descending by occupancy ratio
result = {
"1ST_USE": uses[0],
"1ST_USE_R": float(row[uses[0]]),
"2ND_USE": uses[1],
"2ND_USE_R": float(row[uses[1]]),
"3RD_USE": uses[2],
"3RD_USE_R": float(row[uses[2]])}
if pd.np.isclose(result["2ND_USE_R"], 0.0):
result["1ST_USE_R"] = 1.0
result["2ND_USE_R"] = 0.0
result["3RD_USE_R"] = 0.0
result["2ND_USE"] = "NONE"
result["3RD_USE"] = "NONE"
elif pd.np.isclose(result["3RD_USE_R"], 0.0):
result["1ST_USE_R"] = 1.0 - result["2ND_USE_R"]
result["3RD_USE_R"] = 0.0
result["3RD_USE"] = "NONE"
result["1ST_USE_R"] = 1.0 - result["2ND_USE_R"] - result["3RD_USE_R"]
return result
def merge_age_and_occupancy_to_typology(age_dbf, occupancy_dbf, standards_df):
# merge age.dbf and occupancy.dbf to typology.dbf
typology_dbf_columns = ["Name", "YEAR", "STANDARD", "1ST_USE", "1ST_USE_R", "2ND_USE", "2ND_USE_R", "3RD_USE", "3RD_USE_R"]
typology_dbf = pd.DataFrame(columns=typology_dbf_columns)
for rindex, row in age_dbf.iterrows():
typology_row = {
"Name": row.Name,
"YEAR": row.built,
"STANDARD": lookup_standard(row.built, standards_df)}
typology_row.update(convert_occupancy(row.Name, occupancy_dbf))
typology_dbf = typology_dbf.append(typology_row, ignore_index=True)
return typology_dbf
standards_df = pd.read_excel(CONSTRUCTION_STANDARD_XLSX, "STANDARD_DEFINITION")
typology_dbf = merge_age_and_occupancy_to_typology(old_age_dbf, old_occupancy_dbf, standards_df)
typology_dbf
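# Once the mapping looks right, the migrated table could be written back
# with the dbf helper imported at the top (left commented out here, since
# it would overwrite the example scenario's typology.dbf):
# dataframe_to_dbf(typology_dbf, locator.get_building_typology())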
air_conditioning_systems_dbf = dbf_to_dataframe(
os.path.join(SCENARIO_TO_MIGRATE, "inputs", "building-properties", "air_conditioning_systems.dbf"))
air_conditioning_systems_dbf
dbf_to_dataframe(locator.get_building_air_conditioning())
locator.get_database_construction_standards()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Loading
|
<ASSISTANT_TASK:>
Python Code:
# Imports
import csv
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from itertools import groupby
from operator import itemgetter
# Load the series data
info = pd.read_csv('../data/bls/series.csv')
def series_info(blsid, info=info):
return info[info.blsid == blsid]
# Use this function to lookup specific BLS series info.
series_info("LNS14000025")
# Load each series, grouping by BLS ID
def load_series_records(path='../data/bls/records.csv'):
with open(path, 'r') as f:
reader = csv.DictReader(f)
for blsid, rows in groupby(reader, itemgetter('blsid')):
# Read all the data from the file and sort
rows = list(rows)
rows.sort(key=itemgetter('period'))
# Extract specific data from each row, namely:
# The period at the month granularity
# The value as a float
periods = [pd.Period(row['period']).asfreq('M') for row in rows]
values = [float(row['value']) for row in rows]
yield pd.Series(values, index=periods, name=blsid)
series = pd.concat(list(load_series_records()), axis=1)
series
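# With a monthly PeriodIndex, ordinary pandas operations apply directly;
# for example (a hedged illustration), annual means per series:
series.groupby(series.index.year).mean().head()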
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Stacking a set of parallel jobs
Step3: Now, we construct a sequence of tasks to compute, where the output of one
Step4: To fit the stack, we call the stack method on the manager, and since
Step5: If we instead want to append these errors as features, we can simply
Step6: The initialize method primarily allocates memory of input data and
Step7: The output now is a list of arrays, the first contains the same predictions
Step8: ..warning
|
<ASSISTANT_TASK:>
Python Code:
from mlens.parallel import ParallelProcessing, Job, Learner
from mlens.index import FoldIndex
from mlens.utils.dummy import OLS
import numpy as np
np.random.seed(2)
X = np.arange(20).reshape(10, 2)
y = np.random.rand(10)
indexer = FoldIndex(folds=2)
learner = Learner(estimator=OLS(),
indexer=indexer,
name='ols')
manager = ParallelProcessing(n_jobs=-1)
out = manager.map(learner, 'fit', X, y, return_preds=True)
print(out)
from mlens.parallel import Transformer, Pipeline
from mlens.utils.dummy import Scale
from sklearn.base import BaseEstimator, TransformerMixin
def error_scorer(p, y):
return np.abs(p - y)
class Error(BaseEstimator, TransformerMixin):
Transformer that computes the errors of a base learners
def __init__(self, scorer):
self.scorer = scorer
def fit(self, X, y):
return self
def transform(self, X, y):
return self.scorer(X, y), y
tasks = []
for i in range(3):
if i != 0:
pipeline = Pipeline([('err', Error(error_scorer))], return_y=True)
transformer = Transformer(
estimator=pipeline,
indexer=indexer,
name='sc-%i' % (i + 1)
)
tasks.append(transformer)
learner = Learner(
estimator=OLS(),
preprocess='sc-%i' % (i+1) if i != 0 else None,
indexer=indexer,
name='ols-%i' % (i + 1)
)
tasks.append(learner)
out = manager.stack(
tasks, 'fit', X, y, return_preds=True, split=False)
print(out)
out = manager.initialize(
'fit', X, y, None, return_preds=['ols-1', 'ols-3'], stack=True, split=False)
out = manager.process(tasks, out)
print(out)
manager.clear()
learner = Learner(estimator=OLS(), indexer=indexer)
with ParallelProcessing() as manager:
manager.stack(learner, 'fit', X, y, split=False)
out = manager.stack(learner, 'predict', X, split=False)
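# `out` holds the fold-wise predictions computed inside the context
# manager, which cleans up its temporary job data on exit.
print(out)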
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Markdown
Step2: Pandas
Step3: Matplotlib
|
<ASSISTANT_TASK:>
Python Code:
a = 1 + 2
a
a + 1
_
? sum
! ps -xa | grep python
import time
%time time.sleep(1)
import numpy as np
np.array([[1,2,3], [7,1,2]])
data = np.array([1,2,3,4,5])
data
data[1:-2]
data + 1
data * 2
data * data
np.sum(data * data)
data.dot(data)
data > 2
data[data > 2]
replaces = data.copy()  # note: data[:] is a *view* for NumPy arrays; .copy() avoids mutating data below
replaces[data > 2] = -1
replaces
matrix = np.random.rand(5, 4)
matrix
matrix[1]
matrix[:, 1]
matrix[(1, 2), :]
import random
vec_len = 1000
v1 = [random.random() for _ in range(vec_len)]
v2 = [random.random() for _ in range(vec_len)]
%timeit [a * b for a, b in zip(v1, v2)]
np_v1, np_v2 = np.array(v1), np.array(v2)
%timeit np_v1 * np_v2
np.arange(1, 5, 0.7)
np.linspace(1, 5, 13)
first = np.array([[1, 2], [3, 4]])
second = np.array([[10, 20], [30, 40]])
np.hstack((first, second))
np.vstack((first, second))
import pandas as pd
df = pd.read_csv('iris.csv')
df.head(3)
df[['Sepal.length', 'Petal.length']].head(5)
df[df['Sepal.length'] > 7.5]
df.describe()
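# Another common pattern with the same columns: sort by a column and take
# the top rows.
df.sort_values('Sepal.length', ascending=False).head(3)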
%pylab inline
df['Sepal.length'].plot(kind='hist');
import matplotlib.pyplot as plt
x = np.arange(1, 10, 0.2)
y = np.sin(x)
plt.plot(x, y);
plt.scatter(x, y);
plt.plot(x, y, marker='o');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Elbow Method
Step2: 8 Blobs Dataset
Step3: Datasets without Centers
Step4: Silhouette Score
Step6: Intercluster Distance Map
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append("../..")
import numpy as np
import yellowbrick as yb
import matplotlib.pyplot as plt
from functools import partial
from sklearn.datasets import make_blobs as sk_make_blobs
from sklearn.datasets import make_circles, make_moons
# Helpers for easy dataset creation
N_SAMPLES = 1000
N_FEATURES = 12
SHUFFLE = True
# Make blobs partial
make_blobs = partial(sk_make_blobs, n_samples=N_SAMPLES, n_features=N_FEATURES, shuffle=SHUFFLE)
from sklearn.cluster import KMeans
from yellowbrick.cluster import KElbowVisualizer
## Make 8 blobs dataset
X, y = make_blobs(centers=8)
visualizer = KElbowVisualizer(KMeans(), k=(4,12))
visualizer.fit(X)
visualizer.poof()
visualizer = KElbowVisualizer(KMeans(), k=(4,12), metric="silhouette")
visualizer.fit(X)
visualizer.poof()
visualizer = KElbowVisualizer(KMeans(), k=(4,12), metric="calinski_harabaz")
visualizer.fit(X)
visualizer.poof()
## Make cicles dataset
X, y = make_circles(n_samples=N_SAMPLES)
visualizer = KElbowVisualizer(KMeans(), k=(4,12))
visualizer.fit(X)
visualizer.poof()
## Make moons dataset
X, y = make_moons(n_samples=N_SAMPLES)
visualizer = KElbowVisualizer(KMeans(), k=(4,12))
visualizer.fit(X)
visualizer.poof()
from yellowbrick.cluster import SilhouetteVisualizer
## Make 8 blobs dataset
X, y = make_blobs(centers=8)
visualizer = SilhouetteVisualizer(KMeans(6))
visualizer.fit(X)
visualizer.poof()
def prop_to_size(prop, mi=0, ma=5, power=0.5):
Scale a property to be used as a size
prop = np.asarray(prop)
return mi + (ma - mi)*(((prop - prop.min()) / (prop.max() - prop.min()))**power)
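# Quick check of the scaler above: the smallest input maps to `mi`, the
# largest to `ma`, with square-root easing in between.
print(prop_to_size([1, 4, 9], mi=0, ma=5))  # -> [0.  ~3.06  5.]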
from sklearn.manifold import MDS
## Make 12 blobs dataset
X, y = make_blobs(centers=12)
## Fit KMeans model on dataset
model = KMeans(9).fit(X)
from matplotlib.lines import Line2D
def intercluster_distance(model, ax=None):
# Create the figure if an axes isn't passed in
if ax is None:
fig, ax = plt.subplots(figsize=(9,6))
else:
fig = plt.gcf()
## Get centers
## TODO: is this how sklearn stores centers in all models?
C = model.cluster_centers_
## Compute the sizes of the clusters
scores = np.bincount(model.predict(X))
size = prop_to_size(scores, 400, 25000)
## Use MDS to plot centers
Cm = MDS().fit_transform(C)
ax.scatter(Cm[:,0], Cm[:,1], s=size, c='#2e719344', edgecolor='#2e719399', linewidth=1)
## Annotate the clustes with their labels
for i, pt in enumerate(Cm):
ax.text(s=str(i), x=pt[0], y=pt[1], va="center", ha="center", fontweight='bold', size=13)
## Set the title
ax.set_title("Intercluster Distance Map (via Multidimensional Scaling)")
# Create origin grid
ax.set_xticks([0])
ax.set_yticks([0])
ax.set_xticklabels([])
ax.set_yticklabels([])
ax.set_xlabel("PC2")
ax.set_ylabel("PC1")
# Create a regular legend with target "size" descriptor
# handles = tuple([
# Line2D([0], [0], color="none", marker="o", markersize=i, markerfacecolor='none', markeredgecolor="#999999", markeredgewidth=1, markevery=i)
# for i in [3,9,18]
# ])
# ax.legend([handles], ['membership',], loc='best')
# Create the size legend on an inner axes
lax = fig.add_axes([.9, 0.25, 0.3333, 0.5], frameon=False, facecolor="none")
make_size_legend(scores, size, lax)
return ax
from matplotlib.patches import Circle
def make_size_legend(scores, areas, ax=None):
# Create the figure if an axes isn't passed in
if ax is None:
_, ax = plt.subplots()
## Compute the sizes of the clusters
radii = np.sqrt(areas / np.pi)
scaled = np.interp(radii, (radii.min(), radii.max()), (.1, 1))
# Compute the locations of the 25th, 50th, and 75th percentiles of the score
indices = np.array([
np.where(scores==np.percentile(scores, p, interpolation='nearest'))[0][0]
for p in (25, 50, 75)
])
# Draw circles with their various sizes
for idx in indices:
center = (-0.30, 1-scaled[idx])
c = Circle(center, scaled[idx], facecolor="none", edgecolor="#2e7193", linewidth=1.5, linestyle="--", label="bob")
ax.add_patch(c)
ax.annotate(
scores[idx], (-0.30, 1-(2*scaled[idx])), xytext=(1, 1-(2*scaled[idx])),
arrowprops=dict(arrowstyle="wedge", color="#2e7193"), va='center', ha='center',
)
# Draw size legend title
ax.text(s="membership", x=0, y=1.2, va='center', ha='center')
ax.set_xlim(-1.4,1.4)
ax.set_ylim(-1.4,1.4)
ax.set_xticks([])
ax.set_yticks([])
for name in ax.spines:
ax.spines[name].set_visible(False)
ax.grid(False)
return ax
intercluster_distance(model)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Understanding coordinate frames
Step2: Coordinate frame definitions
Step3: It is quite clear that the MRI surfaces (head, brain) are not well aligned
Step4: Defining the head↔MRI trans using the GUI
Step5: Alignment without MRI
|
<ASSISTANT_TASK:>
Python Code:
import os.path as op
import numpy as np
from mayavi import mlab
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
raw_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
trans_fname = op.join(data_path, 'MEG', 'sample',
'sample_audvis_raw-trans.fif')
raw = mne.io.read_raw_fif(raw_fname)
trans = mne.read_trans(trans_fname)
src = mne.read_source_spaces(op.join(subjects_dir, 'sample', 'bem',
'sample-oct-6-src.fif'))
mne.viz.plot_alignment(raw.info, trans=trans, subject='sample',
subjects_dir=subjects_dir, surfaces='head-dense',
show_axes=True, dig=True, eeg=[], meg='sensors',
coord_frame='meg')
mlab.view(45, 90, distance=0.6, focalpoint=(0., 0., 0.))
print('Distance from head origin to MEG origin: %0.1f mm'
% (1000 * np.linalg.norm(raw.info['dev_head_t']['trans'][:3, 3])))
print('Distance from head origin to MRI origin: %0.1f mm'
% (1000 * np.linalg.norm(trans['trans'][:3, 3])))
mne.viz.plot_alignment(raw.info, trans=None, subject='sample', src=src,
subjects_dir=subjects_dir, dig=True,
surfaces=['head-dense', 'white'], coord_frame='meg')
mne.viz.plot_alignment(raw.info, trans=trans, subject='sample',
src=src, subjects_dir=subjects_dir, dig=True,
surfaces=['head-dense', 'white'], coord_frame='meg')
# mne.gui.coregistration(subject='sample', subjects_dir=subjects_dir)
sphere = mne.make_sphere_model(info=raw.info, r0='auto', head_radius='auto')
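# Note: with r0='auto' and head_radius='auto' the sphere is fit to the
# digitized head points in raw.info, giving a quick head-geometry
# approximation when no subject MRI is available.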
src = mne.setup_volume_source_space(sphere=sphere, pos=10.)
mne.viz.plot_alignment(
raw.info, eeg='projected', bem=sphere, src=src, dig=True,
surfaces=['brain', 'outer_skin'], coord_frame='meg', show_axes=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'sandbox-1', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Todo
Step2: Suppose we are given a data set $(y_i, x_i)$ for $i=1\dots N$
|
<ASSISTANT_TASK:>
Python Code:
import scipy.linalg as la
LL = np.zeros(N)
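# Sketch of the idea in this cell: each pass rr treats observation rr as an
# outlier candidate by giving it the larger noise variance q (all others keep
# s), refits the weights by weighted least squares, and stores the resulting
# log-likelihood; a peak in LL flags the observation best explained as an outlier.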
for rr in range(N):
ss = s*np.ones(N)
ss[rr] = q
D_r = np.diag(1/ss)
V_r = np.dot(np.sqrt(D_r), W)
b = y/np.sqrt(ss)
    a_r, residues, rank, sv = la.lstsq(V_r, b)
e = (y-np.dot(W, a_r))/np.sqrt(ss)
LL[rr] = -0.5*np.dot(e.T, e)
print(LL[rr])
#plt.plot(x, y, 'o')
#plt.plot(x, np.dot(W, a_r),'-')
#plt.plot(e)
plt.plot(LL)
plt.show()
import numpy as np
import scipy as sc
import scipy.linalg as la
def cond_Gauss(Sigma, mu, idx1, idx2, x2):
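    # Standard Gaussian conditioning identities for a joint Gaussian (x1, x2):
    #   E[x1 | x2]   = mu1 + Sigma12 @ inv(Sigma22) @ (x2 - mu2)
    #   Cov[x1 | x2] = Sigma11 - Sigma12 @ inv(Sigma22) @ Sigma21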
    Sigma11 = Sigma[np.ix_(idx1, idx1)]
    Sigma12 = Sigma[np.ix_(idx1, idx2)]
    Sigma22 = Sigma[np.ix_(idx2, idx2)]
# print(Sigma11)
# print(Sigma12)
# print(Sigma22)
mu1 = mu[idx1]
mu2 = mu[idx2]
G = np.dot(Sigma12, la.inv(Sigma22))
cond_Sig_1 = Sigma11 - np.dot(G, Sigma12.T)
cond_mu_1 = mu1 + np.dot(G, (x2-mu2))
return cond_mu_1, cond_Sig_1
mu = np.array([0,0])
#P = np.array([2])
#A = np.array([1])
idx1 = [0]
idx2 = [1]
x2 = 5
P = np.array(3).reshape((len(idx1), len(idx1)))
A = np.array(-1).reshape((len(idx2), len(idx1)))
rho = np.array(0)
#Sigma = np.array([[P, A*P],[P*A, A*P*A + rho ]])
I = np.eye(len(idx2))
Sigma = np.concatenate((np.concatenate((P,np.dot(P, A.T)),axis=1), np.concatenate((np.dot(A, P),np.dot(np.dot(A, P), A.T ) + rho*I ),axis=1)))
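# The blocks above assemble the joint covariance of the linear-Gaussian pair
# x2 = A x1 + noise:  Sigma = [[P, P A^T], [A P, A P A^T + rho I]].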
print(Sigma)
#print(mu)
cond_mu_1, cond_Sig_1 = cond_Gauss(Sigma, mu, idx1, idx2, x2)
print('E[x_1|x_2 = {}] = '.format(x2) , cond_mu_1)
print(cond_Sig_1)
# Use this code to generate a dataset
N = 30
K = 4
s = 0.1
q = 10*s
x = 2*np.random.randn(N)
e = np.sqrt(s) * np.random.randn(N)
# Create the vandermonde matrix
A = x.reshape((N,1))**np.arange(K).reshape(1,K)
w = np.array([0,-1,0.5,0])
y = np.dot(A, w) + e
plt.plot(x, y, 'o')
#plt.plot(e)
plt.show()
# Sig = [P, A.T; A A*A.T+rho*I]
N1 = 3
N2 = 7
P = np.random.randn(N1,N1)
A = np.random.randn(N2,N1)
Sig11 = np.mat(P)
Sig12 = np.mat(A.T)
Sig21 = np.mat(A)
Sig22 = Sig21*Sig12
print(Sig11.shape)
print(Sig12.shape)
print(Sig21.shape)
print(Sig22.shape)
W = np.bmat([[Sig11, Sig12],[Sig21, Sig22]])
Sig22.shape
3500*1.18*12
%matplotlib inline
import matplotlib.pylab as plt
import numpy as np
x = np.array([3.7, 2.3, 6.9, 7.5])
N = len(x)
lam = np.arange(0.05,10,0.01)
ll = -N*np.log(lam) - np.sum(x)/lam
plt.plot(lam, np.exp(ll))
plt.plot(np.mean(x), 0, 'ok')
plt.show()
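# Closed form for reference: the exponential log-likelihood
# -N*log(lam) - sum(x)/lam is maximized at lam_hat = mean(x),
# which is the black dot marked on the lambda axis above.
print("MLE lambda:", np.mean(x))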
xx = np.arange(0, 10, 0.01)
lam = 1000
p = 1/lam*np.exp(-xx/lam)
plt.plot(xx, p)
plt.plot(x, np.zeros((N)), 'ok')
plt.ylim((0,1))
plt.show()
1-(5./6.)**4
1-18/37
import numpy as np
N = 7
A = np.diag(np.ones(7))
ep = 0.5
a = 1
idx = [1, 2, 3, 4, 5, 6, 0]
A = ep*A + (1-ep)*A[:,idx]
C = np.array([[a, 1-a, 1-a, a, a, 1-a, 1-a],[1-a, a, a, 1-a, 1-a, a, a]])
p = np.ones((1,N))/N
print(A)
y = [1, 1, 0, 0, 0]
print(p)
p = C[y[0] , :]*p
print(p/np.sum(p, axis=1))
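# A minimal sketch of continuing the filtering recursion over the remaining
# observations; A is column-stochastic as built above, so the prediction step
# is p @ A.T before the emission correction.
for t in range(1, len(y)):
    p = C[y[t], :] * np.dot(p, A.T)
    p = p / np.sum(p)
print(p)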
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Exploration
Step2: Data Munging
Step3: At present, TPOT requires all the data to be in numerical format. As we can see below, our data set has 5 categorical variables which contain non-numerical values
Step4: We then check the number of levels that each of the five categorical variables have.
Step5: As we can see, Sex and Embarked have few levels. Let's find out what they are.
Step6: We then code these levels manually into numerical values. For nan i.e. the missing values, we simply replace them with a placeholder value (-999). In fact, we perform this replacement for the entire data set.
Step7: Since Name and Ticket have so many levels, we drop them from our analysis for the sake of simplicity. For Cabin, we encode the levels as digits using Scikit-learn's MultiLabelBinarizer and treat them as new features.
Step8: Drop the unused features from the dataset.
Step9: We then add the encoded features to form the final dataset to be used with TPOT.
Step10: Keeping in mind that the final dataset is in the form of a numpy array, we can check the number of features in the final dataset as follows.
Step11: Finally we store the class labels, which we need to predict, in a separate variable.
Step12: Data Analysis using TPOT
Step13: After that, we proceed to calling the fit, score and export functions on our training dataset. To get a better idea of how these functions work, refer to the TPOT documentation here.
Step14: Let's have a look at the generated code. As we can see, the random forest classifier performed the best on the given dataset out of all the other models that TPOT currently evaluates on. If we ran TPOT for more generations, then the score should improve further.
Step15: Make predictions on the submission data
Step16: The most important step here is to check for new levels in the categorical variables of the submission dataset that are absent in the training set. We identify them and set them to our placeholder value of '-999', i.e., we treat them as missing values. This ensures training consistency, as otherwise the model does not know what to do with the new levels in the submission dataset.
Step17: We then carry out the data munging steps as done earlier for the training dataset.
Step18: While calling MultiLabelBinarizer for the submission data set, we first fit on the training set again to learn the levels and then transform the submission dataset values. This further ensures that only those levels that were present in the training dataset are transformed. If new levels are still found in the submission dataset then it will return an error and we need to go back and check our earlier step of replacing new levels with the placeholder value.
|
<ASSISTANT_TASK:>
Python Code:
# Import required libraries
from tpot import TPOT
from sklearn.cross_validation import train_test_split
import pandas as pd
import numpy as np
# Load the data
titanic = pd.read_csv('data/titanic_train.csv')
titanic.head(5)
titanic.groupby('Sex').Survived.value_counts()
titanic.groupby(['Pclass','Sex']).Survived.value_counts()
ct = pd.crosstab([titanic.Pclass, titanic.Sex], titanic.Survived.astype(float))
ct.div(ct.sum(1).astype(float), 0)
titanic.rename(columns={'Survived': 'class'}, inplace=True)
titanic.dtypes
for cat in ['Name', 'Sex', 'Ticket', 'Cabin', 'Embarked']:
print("Number of levels in category '{0}': \b {1:2.2f} ".format(cat, titanic[cat].unique().size))
for cat in ['Sex', 'Embarked']:
print("Levels for catgeory '{0}': {1}".format(cat, titanic[cat].unique()))
titanic['Sex'] = titanic['Sex'].map({'male':0,'female':1})
titanic['Embarked'] = titanic['Embarked'].map({'S':0,'C':1,'Q':2})
titanic = titanic.fillna(-999)
pd.isnull(titanic).any()
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
CabinTrans = mlb.fit_transform([{str(val)} for val in titanic['Cabin'].values])
CabinTrans
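# Quick sanity check: one indicator column per distinct Cabin level in the
# training data.
print(CabinTrans.shape)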
titanic_new = titanic.drop(['Name','Ticket','Cabin','class'], axis=1)
assert (len(titanic['Cabin'].unique()) == len(mlb.classes_)), "Not Equal" #check correct encoding done
titanic_new = np.hstack((titanic_new.values,CabinTrans))
np.isnan(titanic_new).any()
titanic_new[0].size
titanic_class = titanic['class'].values
training_indices, validation_indices = train_test_split(titanic.index, stratify=titanic_class, train_size=0.75, test_size=0.25)
training_indices.size, validation_indices.size
tpot = TPOT(generations=5, verbosity=2)
tpot.fit(titanic_new[training_indices], titanic_class[training_indices])
tpot.score(titanic_new[validation_indices], titanic.loc[validation_indices, 'class'].values)
tpot.export('tpot_titanic_pipeline.py')
# %load tpot_titanic_pipeline.py
import numpy as np
import pandas as pd
from sklearn.cross_validation import train_test_split
from sklearn.ensemble import RandomForestClassifier
# NOTE: Make sure that the class is labeled 'class' in the data file
tpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR')
training_indices, testing_indices = train_test_split(tpot_data.index, stratify = tpot_data['class'].values, train_size=0.75, test_size=0.25)
result1 = tpot_data.copy()
# Perform classification with a random forest classifier
rfc1 = RandomForestClassifier(n_estimators=500, max_features=min(49, len(result1.columns) - 1))
rfc1.fit(result1.loc[training_indices].drop('class', axis=1).values, result1.loc[training_indices, 'class'].values)
result1['rfc1-classification'] = rfc1.predict(result1.drop('class', axis=1).values)
# Read in the submission dataset
titanic_sub = pd.read_csv('data/titanic_test.csv')
titanic_sub.describe()
for var in ['Cabin']: #,'Name','Ticket']:
new = list(set(titanic_sub[var]) - set(titanic[var]))
    titanic_sub.loc[titanic_sub[var].isin(new), var] = -999
titanic_sub['Sex'] = titanic_sub['Sex'].map({'male':0,'female':1})
titanic_sub['Embarked'] = titanic_sub['Embarked'].map({'S':0,'C':1,'Q':2})
titanic_sub = titanic_sub.fillna(-999)
pd.isnull(titanic_sub).any()
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
SubCabinTrans = mlb.fit([{str(val)} for val in titanic['Cabin'].values]).transform([{str(val)} for val in titanic_sub['Cabin'].values])
titanic_sub = titanic_sub.drop(['Name','Ticket','Cabin'], axis=1)
# Form the new submission data set
titanic_sub_new = np.hstack((titanic_sub.values,SubCabinTrans))
np.any(np.isnan(titanic_sub_new))
# Ensure equal number of features in both the final training and submission dataset
assert (titanic_new.shape[1] == titanic_sub_new.shape[1]), "Not Equal"
# Generate the predictions
submission = tpot.predict(titanic_sub_new)
# Create the submission file
final = pd.DataFrame({'PassengerId': titanic_sub['PassengerId'], 'Survived': submission})
final.to_csv('data/submission.csv', index = False)
final.shape
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'sandbox-1', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lesson
Step2: Project 1
Step5: Transforming Text into Numbers
|
<ASSISTANT_TASK:>
Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
from collections import Counter
import numpy as np
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
positive_counts.most_common()
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))
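# Spot-check one strongly positive and one strongly negative word; the values
# are log-ratios, so scores near 0 indicate neutral words.
print("excellent:", pos_neg_ratios["excellent"])
print("terrible:", pos_neg_ratios["terrible"])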
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
vocab = set(total_counts.keys())
vocab_size = len(vocab)
print(vocab_size)
layer_0 = np.zeros((1, vocab_size))
layer_0
word2Index = {}
for i, word in enumerate(vocab):
word2Index[word] = i
word2Index
def update_input_layer(review):
    """
    Modify the global layer_0 to represent the vector form of review.

    The element at a given index of layer_0 should represent
    how many times the given word occurs in the review.

    Args:
        review(string) - the string of the review
    Returns:
        None
    """
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
## Your code here
for word in review.split(" "):
layer_0[0][word2Index[word]] += 1
update_input_layer(reviews[0])
layer_0
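# Sanity check: the counts in layer_0 sum to the review's token count, since
# every token of the training reviews is present in word2Index.
assert layer_0.sum() == len(reviews[0].split(" "))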
def get_target_for_label(label):
    """
    Convert a label to `0` or `1`.

    Args:
        label(string) - Either "POSITIVE" or "NEGATIVE".
    Returns:
        `0` or `1`.
    """
if label == 'POSITIVE':
return 1
else:
return 0
get_target_for_label(labels[0])
labels[1]
get_target_for_label(labels[1])
<END_TASK>
|