| Unnamed: 0 (int64, 0–15.9k) | cleaned_code (string, 67–124k chars, ⌀ = nullable) | cleaned_prompt (string, 168–30.3k chars, ⌀ = nullable) |
|---|---|---|
10,600
|
<ASSISTANT_TASK:>
Python Code:
import plotly.plotly as py
from data_prep import *
py.sign_in('charleyjoo', 'rx9n91vitl')
%matplotlib inline
plt.ioff()
set_directory()
conn = dbi.connect('crash_database')
get_tables(conn)
conn.close()
crash_data = get_data()
crash_data.head()
plot_x_vs_y()
fig = plot_scatter(x='Average AADT', y='Total Accident Count')
plt.xlim(xmin=0)
py.iplot_mpl(fig, strip_style = True, filename='ipython/mpl_example')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Explore the Database
Step2: Show the names of all tables in the database.
Step3: Get the crash data table as a dataframe and show the first five rows.
Step4: Interactive Plotting Function
Step5: Interactive Plotting Function Using the plotly Package
|
10,601
|
<ASSISTANT_TASK:>
Python Code:
# Author: Marijn van Vliet <w.m.vanvliet@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, apply_inverse
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
# Read evoked data
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
# Read inverse solution
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
inv = read_inverse_operator(fname_inv)
# Apply inverse solution, set pick_ori='vector' to obtain a
# :class:`mne.VectorSourceEstimate` object
snr = 3.0
lambda2 = 1.0 / snr ** 2
stc = apply_inverse(evoked, inv, lambda2, 'dSPM', pick_ori='vector')
# Use peak getter to move visualization to the time point of the peak magnitude
_, peak_time = stc.magnitude().get_peak(hemi='lh')
brain = stc.plot(
initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir)
# You can save a brain movie with:
# brain.save_movie(time_dilation=20, tmin=0.05, tmax=0.16, framerate=10,
# interpolation='linear', time_viewer=True)
stc_max, directions = stc.project('pca', src=inv['src'])
# These directions must by design be close to the normals because this
# inverse was computed with loose=0.2
print('Absolute cosine similarity between source normals and directions: '
f'{np.abs(np.sum(directions * inv["source_nn"][2::3], axis=-1)).mean()}')
brain_max = stc_max.plot(
initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir,
time_label='Max power')
brain_normal = stc.project('normal', inv['src'])[0].plot(
initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir,
time_label='Normal')
fname_inv_fixed = (
data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-fixed-inv.fif')
inv_fixed = read_inverse_operator(fname_inv_fixed)
stc_fixed = apply_inverse(
evoked, inv_fixed, lambda2, 'dSPM', pick_ori='vector')
brain_fixed = stc_fixed.plot(
initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plot the source estimate
Step2: Plot the activation in the direction of maximal power for this data
Step3: The normal is very similar
Step4: You can also do this with a fixed-orientation inverse. It looks a lot like
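As a hedged aside, the absolute-cosine-similarity check printed in the code above can be illustrated with toy unit vectors (the values below are made up for illustration, not MNE data):

```python
import numpy as np

# Toy stand-ins for PCA directions and source normals (hypothetical values).
# For unit vectors, the row-wise dot product is the cosine of the angle,
# and the absolute value treats parallel and anti-parallel as equivalent.
directions = np.array([[0.0, 0.0, 1.0],
                       [0.6, 0.8, 0.0]])
normals = np.array([[0.0, 0.0, -1.0],   # anti-parallel to the first direction
                    [0.6, 0.8, 0.0]])   # parallel to the second
cos_sim = np.abs(np.sum(directions * normals, axis=-1))
print(cos_sim.mean())  # 1.0 for perfectly (anti-)aligned pairs
```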
|
10,602
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
def fx(X):
X = np.atleast_2d(X)
# Return objective & gradient
return np.sum(np.square(X), axis=1)[:,None], 2*X
from gpflowopt.domain import ContinuousParameter
domain = ContinuousParameter('x1', -2, 2) + ContinuousParameter('x2', -1, 2)
domain
from gpflowopt.optim import SciPyOptimizer
optimizer = SciPyOptimizer(domain, method='SLSQP')
optimizer.set_initial([-1,-1])
optimizer.optimize(fx)
from gpflowopt.optim import MCOptimizer
optimizer = MCOptimizer(domain, 200)
optimizer.optimize(fx)
from gpflowopt.bo import BayesianOptimizer
from gpflowopt.design import FactorialDesign
from gpflowopt.acquisition import ExpectedImprovement
import gpflow
# The Bayesian Optimizer does not expect gradients to be returned
def fx(X):
X = np.atleast_2d(X)
    # Return the objective only (no gradient this time)
return np.sum(np.square(X), axis=1)[:,None]
X = FactorialDesign(2, domain).generate()
Y = fx(X)
# initializing a standard BO model, Gaussian Process Regression with
# Matern52 ARD Kernel
model = gpflow.gpr.GPR(X, Y, gpflow.kernels.Matern52(2, ARD=True))
alpha = ExpectedImprovement(model)
# Now we must specify an optimization algorithm to optimize the acquisition
# function, each iteration.
acqopt = SciPyOptimizer(domain)
# Now create the Bayesian Optimizer
optimizer = BayesianOptimizer(domain, alpha, optimizer=acqopt)
with optimizer.silent():
r = optimizer.optimize(fx, n_iter=15)
print(r)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Domain
Step2: Optimize
Step3: And here is how we optimize it Monte-Carlo. We can pass the same function as the gradients are automatically discarded.
Step4: Bayesian Optimization
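A minimal sketch of the Monte-Carlo idea behind MCOptimizer (an assumption about its behaviour, not the gpflowopt implementation): sample candidate points uniformly in the domain and keep the best objective value seen.

```python
import random

def mc_optimize(f, bounds, n_samples=200, seed=0):
    # Sample uniformly within each (lo, hi) bound and track the best point.
    rng = random.Random(seed)
    best_x, best_y = None, float('inf')
    for _ in range(n_samples):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        y = f(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

# Same quadratic objective and domain as above.
x, y = mc_optimize(lambda x: sum(v * v for v in x), [(-2, 2), (-1, 2)])
```

With 200 samples the minimum of the sum of squares lands close to the origin, which is why the notebook's MC run is a reasonable baseline before Bayesian optimization.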
|
10,603
|
<ASSISTANT_TASK:>
Python Code:
# import the dataset
from quantopian.interactive.data.eventvestor import shareholder_meetings
# or if you want to import the free dataset, use:
# from quantopian.data.eventvestor import shareholder_meetings_free
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
# Let's explore the data's structure using Blaze's dshape()
shareholder_meetings.dshape
# And how many rows are there?
# N.B. we're using a Blaze function to do this, not len()
shareholder_meetings.count()
# Let's see what the data looks like. We'll grab the first three rows.
shareholder_meetings[:3]
# get tesla's sid first
tesla_sid = symbols('TSLA').sid
meetings = shareholder_meetings[('2012-12-31' < shareholder_meetings['asof_date']) &
(shareholder_meetings['asof_date'] <'2015-01-01') &
(shareholder_meetings.sid == tesla_sid)]
# When displaying a Blaze Data Object, the printout is automatically truncated to ten rows.
meetings.sort('asof_date')
df = odo(meetings, pd.DataFrame)
df = df[['record_date','meeting_date','sid']]
df
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's go over the columns
Step2: Now suppose we want a DataFrame of the Blaze Data Object above, but only want the record_date, meeting_date, and sid.
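The column selection applied after the odo() conversion can be sketched with toy data (the row values below are hypothetical):

```python
import pandas as pd

# Toy DataFrame standing in for the odo() result above.
df = pd.DataFrame({'record_date': ['2013-04-10'],
                   'meeting_date': ['2013-06-04'],
                   'sid': [39840],
                   'event_type': ['Shareholder Meeting']})
df = df[['record_date', 'meeting_date', 'sid']]  # keep only these columns
print(list(df.columns))  # ['record_date', 'meeting_date', 'sid']
```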
|
10,604
|
<ASSISTANT_TASK:>
Python Code:
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
img_RGB = cv2.imread('demo1.jpg')  # note: OpenCV loads images in BGR channel order
plt.imshow(cv2.cvtColor(img_RGB, cv2.COLOR_BGR2RGB))  # convert BGR -> RGB for display
print('Shape_RGB:', img_RGB.shape)
print('Type_RGB:', img_RGB.dtype)
print('BGR intensity at row 300, col 250:', img_RGB[300, 250])
img_RGB_lineintensity = img_RGB[600]  # per-channel intensities along row 600
plt.plot(img_RGB_lineintensity)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read the image and inspect its values at different locations
Step2: RGB pixel intensity 0-255
Step3: RGB line intensity 0-255
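One gotcha worth sketching: OpenCV loads images with channels in BGR order, which is why the code above converts with cv2.COLOR_BGR2RGB before plotting. The conversion is just a channel-axis reversal:

```python
import numpy as np

# A single pure-blue pixel as OpenCV would store it (BGR order).
bgr = np.array([[[255, 0, 0]]], dtype=np.uint8)
rgb = bgr[..., ::-1]  # reverse the channel axis: BGR -> RGB
print(rgb[0, 0])  # [  0   0 255] -- blue now in the last (B) slot
```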
|
10,605
|
<ASSISTANT_TASK:>
Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
import time
from six.moves import cPickle as pickle
from six.moves import range
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random values following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
num_steps = 801
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tic = time.time()
try:
tf.global_variables_initializer().run()
except AttributeError:
tf.initialize_all_variables().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %3d : %f' % (step, l))
print('Training accuracy : %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Validation accuracy : %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy : %.1f%%' % accuracy(test_prediction.eval(), test_labels))
print('GradientDescent time : %.3f s' % (time.time() - tic))
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
num_steps = 3001
with tf.Session(graph=graph) as session:
tic = time.time()
try:
tf.global_variables_initializer().run()
except AttributeError:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
print("StochasticGradientDescent Time: %.3f s" % (time.time() - tic))
batch_size = 128
nodes = 1024
num_steps = 3001
nngraph = tf.Graph()
with nngraph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, nodes]))
biases = tf.Variable(tf.zeros([nodes]))
z = tf.matmul(tf_train_dataset, weights) + biases
# Hidden Layer
u = np.sqrt(6.0) / np.sqrt(nodes + num_labels)
hidden_weights = tf.Variable(
tf.random_uniform([nodes, num_labels], minval=-u, maxval=u))
hidden_bias = tf.Variable(tf.zeros([num_labels]))
logits = tf.matmul(tf.nn.relu(z), hidden_weights) + hidden_bias
def forward_prop_tensor(dataset):
return tf.nn.softmax(
tf.matmul(
tf.nn.relu(tf.matmul(dataset, weights) + biases), hidden_weights
) + hidden_bias
)
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = forward_prop_tensor(tf_valid_dataset)
test_prediction = forward_prop_tensor(tf_test_dataset)
with tf.Session(graph=nngraph) as session:
tic = time.time()
try:
tf.global_variables_initializer().run()
except AttributeError:
tf.initialize_all_variables().run()
print("One-Hidden-Layer NeuralNetworkGraph Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
print("StochasticGradientDescent Time: %.3f s" % (time.time() - tic))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First reload the data we generated in 1_notmnist.ipynb.
Step2: Reformat into a shape that's more adapted to the models we're going to train
Step3: We're first going to train a multinomial logistic regression using simple gradient descent.
Step4: Let's run this computation and iterate
Step5: Let's now switch to stochastic gradient descent training instead, which is much faster.
Step6: Let's run it
Step7: Problem
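The one-hot encoding trick inside reformat() above deserves a small sketch: broadcasting an arange against a column vector of labels produces the one-hot matrix in a single comparison.

```python
import numpy as np

num_labels = 3
labels = np.array([0, 2, 1])
# Broadcasting: shape (3,) == shape (3, 1) -> a (3, 3) boolean matrix,
# True where the column index equals the label; cast to float32 for training.
one_hot = (np.arange(num_labels) == labels[:, None]).astype(np.float32)
print(one_hot)
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```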
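The minibatch offset arithmetic used in the SGD loops above can be sketched in isolation; note how the modulo wraps the offset around the training data.

```python
# Offsets cycle through [0, n_examples - batch_size) as steps advance,
# so every minibatch slice stays within bounds.
batch_size = 4
n_examples = 10
offsets = [(step * batch_size) % (n_examples - batch_size) for step in range(5)]
print(offsets)  # [0, 4, 2, 0, 4]
```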
|
10,606
|
<ASSISTANT_TASK:>
Python Code:
import nltk
from nltk.corpus import names
import random
name = [(n,'M') for n in names.words('male.txt')] + [(n,'F') for n in names.words('female.txt')]
random.shuffle(name)
name[:10]
def gender_feature(name): return {'last_letter': name[-1]}
featuresets = [(gender_feature(n), g) for (n,g) in name]
train, test = featuresets[500:], featuresets[:500]
classifier = nltk.NaiveBayesClassifier.train(train)
classifier.classify({'last_letter': 'a'})
nltk.classify.accuracy(classifier, test)
classifier.show_most_informative_features()
def gender_feature(name): return {'last_letter': name[-1], 'first_letter': name[0]}
featuresets = [(gender_feature(n), g) for (n,g) in name]
train, test = featuresets[500:], featuresets[:500]
classifier = nltk.NaiveBayesClassifier.train(train)
nltk.classify.accuracy(classifier, test)
def gender_feature(name): return {'last_letter': name[-1], 'first_letter': name[0], 'len': len(name)}
featuresets = [(gender_feature(n), g) for (n,g) in name]
train, test = featuresets[500:], featuresets[:500]
classifier = nltk.NaiveBayesClassifier.train(train)
nltk.classify.accuracy(classifier, test)
classifier.show_most_informative_features()
from nltk.corpus import movie_reviews
movie_reviews.categories()
movie_reviews.fileids('neg')[:5], movie_reviews.fileids('pos')[:5]
documents = [(list(movie_reviews.words(f)), c)
for c in movie_reviews.categories() for f in movie_reviews.fileids(c)]
random.shuffle(documents)
# Select the 2000 most common words as features
all_words = nltk.FreqDist(w.lower() for w in movie_reviews.words())
word_features = [w for (w, c) in all_words.most_common()[:2000]]
word_features[:5]
def document_features(doc):
document_words = set(doc) # set() automatically removes duplicate words
features = {w: w in document_words for w in word_features}
return features
tmp = document_features(['the', 'she'])
[key for key in tmp if tmp[key]]
featuresets = [(document_features(d), c) for (d, c) in documents]
train, test = featuresets[100:], featuresets[:100]
classifier = nltk.NaiveBayesClassifier.train(train)
nltk.classify.accuracy(classifier, test)
classifier.show_most_informative_features()
import nltk
from nltk.corpus import brown
fdist = nltk.FreqDist()
fdist.update([w[-1:] for w in brown.words()])
fdist.update([w[-2:] for w in brown.words()])
fdist.update([w[-3:] for w in brown.words()])
common_suf = [k for (k, c) in fdist.most_common()[:100]]
common_suf[:5]
def pos_features(word):
features = {suffix:word.lower().endswith(suffix) for suffix in common_suf}
return features
featuresets = [(pos_features(n), pos) for (n, pos) in brown.tagged_words(categories='news')]
size = int(len(featuresets) * 0.1)
train, test = featuresets[size:], featuresets[:size]
size
classifier = nltk.DecisionTreeClassifier.train(train)
nltk.classify.accuracy(classifier, test)
# A DecisionTree classifier can print out its learned structure
print(classifier.pretty_format(depth=10))
def pos_features(sentence, i):
features = {"suf_1": sentence[i][-1:], "suf_2": sentence[i][-2:], "suf_3": sentence[i][-3:]}
if i == 0:
features['prev'] = '*'
else:
features['prev'] = sentence[i-1]
return features
pos_features(brown.sents()[0], 8)
brown.sents()[0][7:10]
tagged_sents = brown.tagged_sents(categories='news')
featuresets = []
for tagged_sent in tagged_sents:
untagged_sent = nltk.tag.untag(tagged_sent)
for i, (word, tag) in enumerate(tagged_sent):
featuresets.append( (pos_features(untagged_sent, i), tag) )
size = int(len(featuresets) * 0.1)
train_set, test_set = featuresets[size:], featuresets[:size]
classifier = nltk.NaiveBayesClassifier.train(train_set)
nltk.classify.accuracy(classifier, test_set)
classifier.show_most_informative_features()
def pos_features(sentence, i, history):
features = {"suf_1": sentence[i][-1:], "suf_2": sentence[i][-2:], "suf_3": sentence[i][-3:]}
if i == 0:
features['prev-word'] = '*'
features['prev-tag'] = '*'
else:
features['prev-word'] = sentence[i-1]
features['prev-tag'] = history[i-1]
return features
# Define our own tagger class
class ConsecutivePosTagger(nltk.TaggerI):
def __init__(self, train_sents):
train_set = []
for tagged_sent in train_sents:
untagged_sent = nltk.tag.untag(tagged_sent)
history = []
# enumerate yields (index, value) pairs
for i, (word, tag) in enumerate(tagged_sent):
featureset = pos_features(untagged_sent, i, history)
train_set.append( (featureset, tag) )
history.append(tag)
self.classifier = nltk.NaiveBayesClassifier.train(train_set)
def tag(self, sentence):
history = []
for i, word in enumerate(sentence):
featureset = pos_features(sentence, i, history)
tag = self.classifier.classify(featureset)
history.append(tag)
return list(zip(sentence, history))  # materialize for Python 3, where zip is lazy
tagged_sents = brown.tagged_sents(categories='news')
size = int(len(tagged_sents) * 0.1)
train_sents, test_sents = tagged_sents[size:], tagged_sents[:size]
tagger = ConsecutivePosTagger(train_sents)
tagger.evaluate(test_sents)
import random
from nltk.corpus import brown
tagged_sents = list(brown.tagged_sents(categories='news'))
random.shuffle(tagged_sents)
# Use 10% of the data as the test set
size = int(len(tagged_sents) * 0.1)
train_set, test_set = tagged_sents[size:], tagged_sents[:size]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Prepare a function that generates features; here the feature is the last letter of the name.
Step2: Try different features, e.g. adding the first letter, or the length of the name.
Step3: Document Classification
Step4: Part-of-Speech Tagging
Step5: Exploiting Context
Step6: Sequence Classification
Step7: Evaluation
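The history-based sequence classification in ConsecutivePosTagger can be sketched with a toy rule in place of the NaiveBayes classifier (the rule below is hypothetical, purely for illustration):

```python
def tag_sequence(sentence, classify):
    # Greedy left-to-right tagging: each decision may consult earlier tags.
    history = []
    for i, word in enumerate(sentence):
        prev_tag = history[i - 1] if i > 0 else '*'
        history.append(classify(word, prev_tag))
    return list(zip(sentence, history))

# Toy rule: 'the' is a determiner; a word right after a determiner is a noun.
rule = lambda w, prev: 'DET' if w == 'the' else ('NOUN' if prev == 'DET' else 'X')
print(tag_sequence(['the', 'dog', 'runs'], rule))
# [('the', 'DET'), ('dog', 'NOUN'), ('runs', 'X')]
```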
|
10,607
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
datapath = '../../../test/convergence/output/'
# load results
path = datapath + 'task1/'
files = !ls {path}
results = {name: np.fromfile(path + name) for name in files if name!='dt'}
dts = np.fromfile(path + 'dt')
tvecs = {
i: dts[i] * np.arange(len(results['heun'+str(i)]))
for i in range(len(dts))
}
plt.plot(tvecs[0], results['true'], label='Analytic')
plt.plot(tvecs[3], results['heun3'], label='Heun')
plt.plot(tvecs[3], results['implicit3'], label='Implicit')
plt.xlabel('$t$'); plt.ylabel('$x(t)$'); plt.legend()
# load results
path = datapath + 'task2/'
files = !ls {path}
results = {name: np.fromfile(path + name) for name in files if name!='dt'}
dts = np.fromfile(path + 'dt')
tvecs = {
i: dts[i] * np.arange(len(results['heun'+str(i)]))
for i in range(len(dts))
}
plt.plot(tvecs[0], results['true'], label='Analytic')
plt.plot(tvecs[3], results['heun3'], label='Heun')
plt.plot(tvecs[3], results['implicit3'], label='Implicit')
plt.xlabel('$t$'); plt.ylabel('$x(t)$'); plt.legend()
# load results
path = datapath + 'task3/'
files = !ls {path}
results = {name: np.fromfile(path + name) for name in files if name!='dt'}
plt.plot(results['true'], label='Analytic')
plt.plot(results['heun'], label='Heun')
plt.plot(results['implicit'], label='Implicit')
plt.legend(), plt.ylabel('$X(t_n)$'); plt.xlabel('$n$')
plt.ylim(0.999, 1.001)
# Load results
path = datapath + 'task4/'
files = !ls {path}
results = {name: np.fromfile(path + name).reshape((-1,3)) for name in files if name!='dt'}
dts = np.fromfile(path + 'dt')
plt.plot(results['true'])
plt.title('True trajectory')
integ_names = ['heun', 'implicit_slow', 'implicit_mid']
dt_indices = range(len(dts))
fg, axs = plt.subplots(nrows=len(dts), ncols=len(integ_names),
sharey='row', sharex=True,
figsize=(3*len(integ_names),2*len(dts)))
for ax_row, dt_idx in zip(axs, dt_indices):
for ax, name in zip(ax_row, integ_names):
mag = results[name + str(dt_idx)]
true = results['true'][::10**dt_idx]
time = dts[dt_idx] * np.arange(mag.shape[0])
ax.plot(time, mag-true)
ax.set_title('{} | {:1.0e} '.format(name, dts[dt_idx]))
plt.tight_layout()
intes = ['heun', 'implicit_slow', 'implicit_mid', 'implicit_fast']
dt_indices = range(2,4)
fg, axs = plt.subplots(nrows=3, ncols=len(intes), figsize=(5*len(intes), 12))
for axrow, direction in zip(axs, range(3)):
for ax, inte in zip(axrow, intes):
for idx in dt_indices:
numerical = results[inte+str(idx)][:,direction]
time = dts[idx] * np.arange(numerical.size)
ax.plot(time, numerical, label='dt={:1.0e}'.format(dts[idx]))
actual = results['true'][:,direction]
time = dts[0] * np.arange(actual.size)
ax.plot(time, actual, 'k--', label='actual')
ax.legend()
ax.set_title(inte)
ax.set_ylim(0 if direction==2 else -1.1 ,1.1)
x = np.fromfile(datapath+'task5/example_sol').reshape((-1,3))
plt.plot(x);
fnames = !ls {datapath}task5/implicit*
global_sols = [np.fromfile(fname).reshape((-1,3)) for fname in fnames]
# Compute difference between solutions at consecutive timesteps
diffs = np.diff(global_sols, axis=0)
# Take err as L2 norm
err = np.linalg.norm(diffs, axis=2)
# Compute expected error
Eerr = np.mean(err, axis=1)
# Load the dt values
dts = np.fromfile(datapath+'task5/dt')[1:]
# Fit a straight line
a,b = np.linalg.lstsq(np.stack([np.ones_like(dts), np.log2(dts)]).T, np.log2(Eerr))[0]
plt.plot(np.log2(dts), np.log2(Eerr), 'x', ms=10, label='Observations')
plt.plot(np.log2(dts), a + np.log2(dts)*b, '-', label='Fit b={:.3f}'.format(b))
plt.xlabel('$\\log_2 \\Delta t$', fontsize=14)
plt.ylabel('$\\log_2 E (\\left| Y_T-Y_T^{\\Delta t}\\right|)^2$', fontsize=14)
plt.legend()
fnames = !ls {datapath}task5/heun*
global_sols = [np.fromfile(fname).reshape((-1,3)) for fname in fnames]
# Compute difference between solutions at consecutive timesteps
diffs = np.diff(global_sols, axis=0)
# Take err as L2 norm
err = np.linalg.norm(diffs, axis=2)
# Compute expected error
Eerr = np.mean(err, axis=1)
# Fit a straight line
a,b = np.linalg.lstsq(np.stack([np.ones_like(dts), np.log2(dts)]).T, np.log2(Eerr))[0]
plt.plot(np.log2(dts), np.log2(Eerr), 'x', ms=10, label='Observations')
plt.plot(np.log2(dts), a + np.log2(dts)*b, '-', label='Fit b={:.3f}'.format(b))
plt.xlabel('$\\log_2 \\Delta t$', fontsize=14)
plt.ylabel('$\\log_2 E (\\left| Y_T-Y_T^{\\Delta t}\\right|)^2$', fontsize=14)
plt.legend()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Task 1 - 1D non-stiff SDE
Step2: The following plot shows $x(t)$ for
Step3: Task 2 - 1D stiff SDE
Step4: The plot of $x(t)$ shows that the explicit solver performs poorly on the stiff problem, as expected. The implicit solution looks accurate.
Step5: Task 3 - 1D unstable system
Step6: The implicit solver blows up for these unstable problems. The explicit solver is able to track the trajectory closely.
Step7: Task 4 - Zero-temperature LLG convergence
Step8: Below is an example of the true trajectory of the x,y,z coordinates of magnetisation.
Step9: Residual plots
Step10: From the above results we make three observations
Step11: Task 5 - stochastic LLG global convergence
Step12: Implicit midpoint
Step13: Heun
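The stiffness observations from Tasks 2 and 3 can be sketched on a deterministic toy problem: for dx/dt = -a*x with large a, explicit Euler diverges once a*dt > 2, while implicit (backward) Euler is unconditionally stable. This is a standard textbook fact, not taken from the code above.

```python
# Explicit vs implicit Euler on dx/dt = -a*x, x(0) = 1, with a*dt = 5.
a, dt, steps = 50.0, 0.1, 20
x_exp = x_imp = 1.0
for _ in range(steps):
    x_exp = x_exp + dt * (-a * x_exp)   # amplification factor (1 - a*dt) = -4
    x_imp = x_imp / (1.0 + a * dt)      # amplification factor 1/(1 + a*dt) = 1/6
print(abs(x_exp), abs(x_imp))  # explicit has blown up; implicit decayed toward 0
```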
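The log2-log2 least-squares fit used above to estimate the convergence order can be checked on synthetic errors with a known order:

```python
import numpy as np

# Synthetic errors err = C * dt^b with b = 1 (first-order convergence).
dts = np.array([0.1, 0.05, 0.025, 0.0125])
err = 3.0 * dts ** 1.0
# Fit log2(err) = a + b * log2(dt) by least squares, as in the plots above.
A = np.stack([np.ones_like(dts), np.log2(dts)]).T
a, b = np.linalg.lstsq(A, np.log2(err), rcond=None)[0]
print(round(b, 6))  # 1.0 -> recovered order of convergence
```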
|
10,608
|
<ASSISTANT_TASK:>
Python Code:
import pyspark
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.classification import LogisticRegressionWithSGD
from pyspark.mllib.tree import DecisionTree
sc = pyspark.SparkContext()
sc
raw_rdd = sc.textFile("./titanic.csv")
raw_rdd.count()
raw_rdd.take(5)
header = raw_rdd.first()
data_rdd = raw_rdd.filter(lambda line: line != header)
data_rdd.takeSample(False, 5, 0)
def row_to_labeled_point(line):
'''
Builds a LabeledPoint consisting of:
survival (truth): 0=no, 1=yes
ticket class: 0=1st class, 1=2nd class, 2=3rd class
age group: 0=child, 1=adults
gender: 0=man, 1=woman
'''
passenger_id, klass, age, sex, survived = [segs.strip('"') for segs in line.split(',')]
klass = int(klass[0]) - 1
if (age not in ['adults', 'child'] or
sex not in ['man', 'women'] or
survived not in ['yes', 'no']):
raise RuntimeError('unknown value')
features = [
klass,
(1 if age == 'adults' else 0),
(1 if sex == 'women' else 0)
]
return LabeledPoint(1 if survived == 'yes' else 0, features)
labeled_points_rdd = data_rdd.map(row_to_labeled_point)
labeled_points_rdd.takeSample(False, 5, 0)
training_rdd, test_rdd = labeled_points_rdd.randomSplit([0.7, 0.3], seed = 0)
training_count = training_rdd.count()
test_count = test_rdd.count()
training_count, test_count
model = DecisionTree.trainClassifier(training_rdd,
numClasses=2,
categoricalFeaturesInfo={
0: 3,
1: 2,
2: 2
})
predictions_rdd = model.predict(test_rdd.map(lambda x: x.features))
truth_and_predictions_rdd = test_rdd.map(lambda lp: lp.label).zip(predictions_rdd)
accuracy = truth_and_predictions_rdd.filter(lambda v_p: v_p[0] == v_p[1]).count() / float(test_count)
print('Accuracy =', accuracy)
print(model.toDebugString())
model = LogisticRegressionWithSGD.train(training_rdd)
predictions_rdd = model.predict(test_rdd.map(lambda x: x.features))
labels_and_predictions_rdd = test_rdd.map(lambda lp: lp.label).zip(predictions_rdd)
accuracy = labels_and_predictions_rdd.filter(lambda v_p: v_p[0] == v_p[1]).count() / float(test_count)
print('Accuracy =', accuracy)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we create a SparkContext, the main object in the Spark API. This call may take a few seconds to return as it fires up a JVM under the covers.
Step2: Sample the data
Step3: We query RDD for the number of lines in the file. The call here causes the file to be read and the result computed. This is a Spark action.
Step4: We query for the first five rows of the RDD. Even though the data is small, we shouldn't get into the habit of pulling the entire dataset into the notebook. Many datasets that we might want to work with using Spark will be much too large to fit in memory of a single machine.
Step5: We see a header row followed by a set of data rows. We filter out the header to define a new RDD containing only the data rows.
Step6: We take a random sample of the data rows to better understand the possible values.
Step7: We see that the first value in every row is a passenger number. The next three values are the passenger attributes we might use to predict passenger survival
Step8: We apply the function to all rows.
Step9: We take a random sample of the resulting points to inspect them.
Step10: Split for training and test
Step11: Train and test a decision tree classifier
Step12: We now apply the trained model to the feature values in the test set to get the list of predicted outcomines.
Step13: We bundle our predictions with the ground truth outcome for each passenger in the test set.
Step14: Now we compute the test error (% predicted survival outcomes == actual outcomes) and display the decision tree for good measure.
Step15: Train and test a logistic regression classifier
|
10,609
|
<ASSISTANT_TASK:>
Python Code:
import math
def multiple35(n):
n3 = (n - 1) // 3
sum3 = 3 * n3 * (n3 + 1) // 2
n5 = (n - 1) // 5
sum5 = 5 * n5 * (n5 + 1) // 2
n15 = (n - 1) // 15
sum15 = 15 * n15 * (n15 + 1) // 2
return sum3 + sum5 - sum15
print(multiple35(1000))
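As a quick sanity check on the closed-form arithmetic-series sums above, a brute-force version (a hypothetical helper, not part of the original solution) should give the same answer:

```python
def multiple35_brute(n):
    # Sum every number below n divisible by 3 or 5, by direct iteration.
    return sum(i for i in range(n) if i % 3 == 0 or i % 5 == 0)

print(multiple35_brute(1000))  # 233168, matching the closed form
```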
def fib(limit):
a = 1
b = 2
while b <= limit:
a, b = b, a + b
return [a, b]
def even_sum(limit):
a, b = fib(limit)
if a % 2:
if b % 2:
## odd, odd
return (b - 1) // 2
## odd, even
return (a - 1) // 2
## even, odd
return (a + b - 1) // 2
fib(100)
even_sum(100)
print(even_sum(4000000))
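The identity-based even_sum above can be cross-checked by directly accumulating the even Fibonacci terms (a hypothetical helper, for verification only):

```python
def even_fib_sum_brute(limit):
    # Directly add up even-valued Fibonacci terms not exceeding limit.
    total, a, b = 0, 1, 2
    while b <= limit:
        if b % 2 == 0:
            total += b
        a, b = b, a + b
    return total

print(even_fib_sum_brute(4000000))  # 4613732
```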
def check_divisor(n, i):
while not n % i:
n //= i
return n
def largest_prime_factor(n):
n = check_divisor(n, 2)
if n == 1:
return 2
i = 3
while i <= math.sqrt(n):
n = check_divisor(n, i)
i += 2
if n > 2:
return n
return i - 2
print(largest_prime_factor(600851475143))
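A slower trial-division version (hypothetical, for verification) confirms the result, including on the problem's worked example 13195, whose largest prime factor is 29:

```python
def largest_prime_factor_brute(n):
    # Strip factors from smallest to largest; the last factor removed is the answer.
    factor, largest = 2, 1
    while n > 1:
        while n % factor == 0:
            largest = factor
            n //= factor
        factor += 1
    return largest

print(largest_prime_factor_brute(13195))  # 29
```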
def isPal(s):
    return s == s[::-1]

def largest_palindrome_product(n):
    # Largest palindrome below n that is a product of two 3-digit numbers.
    best = 0
    for i in range(100, 1000):
        for j in range(i, 1000):
            ij = i * j
            if ij < n and ij > best and isPal(str(ij)):
                best = ij
    return best

n = 1000000
print(largest_palindrome_product(n))
def gcd(a, b):
while b != 0:
a, b = b, a % b
return a
def lcm(a, b):
return a * b // gcd(a, b)
def smallest_multiple(n):
s_m = 1
for num in range(1, n + 1):
s_m = lcm(s_m, num)
return s_m
print(smallest_multiple(20))
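The LCM-based answer can be double-checked for a small bound with a direct search (a hypothetical brute-force helper; the answer for 1..10 is the well-known 2520):

```python
def smallest_multiple_brute(n):
    # Smallest positive integer divisible by every number in 1..n, by direct search.
    m = n
    while any(m % k for k in range(2, n + 1)):
        m += n  # the answer must itself be a multiple of n, so step by n
    return m

print(smallest_multiple_brute(10))  # 2520
```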
def sum_square_diff(n):
return n * (n + 1) * (n - 1) * (3 * n + 2) // 12
print(sum_square_diff(100))
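The closed form n(n+1)(n-1)(3n+2)/12 can be verified against the definition directly (a hypothetical helper for checking only):

```python
def sum_square_diff_brute(n):
    # Difference between the square of the sum and the sum of the squares of 1..n.
    square_of_sum = sum(range(1, n + 1)) ** 2
    sum_of_squares = sum(i * i for i in range(1, n + 1))
    return square_of_sum - sum_of_squares

print(sum_square_diff_brute(100))  # 25164150, matching the closed form
```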
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem 2 - Even Fibonacci numbers
Step2: While k = 1, 2, 3...
Step3: Problem 3 - Largest prime factor
Step4: Problem 4 - Largest palindrome product
Step5: Problem 5 - Smallest multiple
Step6: Problem 6 - Sum square difference
|
10,610
|
<ASSISTANT_TASK:>
Python Code:
import bnn
#get
!wget http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
!wget http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz
#unzip
!gzip -d t10k-images-idx3-ubyte.gz
!gzip -d t10k-labels-idx1-ubyte.gz
#read labels
labels = []
with open("/home/xilinx/jupyter_notebooks/bnn/t10k-labels-idx1-ubyte","rb") as lbl_file:
#read magic number and number of labels (MSB first) -> MNIST header
magicNum = int.from_bytes(lbl_file.read(4), byteorder="big")
countLbl = int.from_bytes(lbl_file.read(4), byteorder="big")
#now the labels are following byte-wise
for idx in range(countLbl):
labels.append(int.from_bytes(lbl_file.read(1), byteorder="big"))
lbl_file.close()
len(labels)
lfcW1A1_classifier = bnn.LfcClassifier(bnn.NETWORK_LFCW1A1,"mnist",bnn.RUNTIME_HW)
lfcW1A1_classifier.classes
result_W1A1 = lfcW1A1_classifier.classify_mnists("/home/xilinx/jupyter_notebooks/bnn/t10k-images-idx3-ubyte")
lfcW1A2_classifier = bnn.LfcClassifier(bnn.NETWORK_LFCW1A2,"mnist",bnn.RUNTIME_HW)
result_W1A2 = lfcW1A2_classifier.classify_mnists("/home/xilinx/jupyter_notebooks/bnn/t10k-images-idx3-ubyte")
with open("/home/xilinx/jupyter_notebooks/bnn/10_mnist_pictures", "wb") as out_file:
with open("/home/xilinx/jupyter_notebooks/bnn/t10k-images-idx3-ubyte","rb") as img_file:
#copy magic number
out_file.write(img_file.read(4))
#set number of images
img_file.read(4)
out_file.write(bytearray.fromhex('0000000A'))
#copy row and column information
out_file.write(img_file.read(8))
#copy 10 pictures (one is 28x28, 1 pixel is 1 byte)
out_file.write(img_file.read(28*28*10))
img_file.close()
out_file.close()
print("SW Inference with W1A1:")
sw_lfcW1A1_classifier = bnn.LfcClassifier(bnn.NETWORK_LFCW1A1,"mnist",bnn.RUNTIME_SW)
sw_resultW1A1 = sw_lfcW1A1_classifier.classify_mnists("/home/xilinx/jupyter_notebooks/bnn/10_mnist_pictures")
print("\nSW Inference with W1A2:")
sw_lfcW1A2_classifier = bnn.LfcClassifier(bnn.NETWORK_LFCW1A2,"mnist",bnn.RUNTIME_SW)
sw_resultW1A2 = sw_lfcW1A2_classifier.classify_mnists("/home/xilinx/jupyter_notebooks/bnn/10_mnist_pictures")
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
hw_time = [lfcW1A1_classifier.usecPerImage,
lfcW1A2_classifier.usecPerImage]
sw_time = [sw_lfcW1A1_classifier.usecPerImage,
sw_lfcW1A2_classifier.usecPerImage]
x_axis = ('W1A1', 'W1A2')
y_pos = np.arange(len(x_axis))
plt.bar(y_pos-0.25, hw_time, 0.25)
plt.bar(y_pos+0.25, sw_time, 0.25)
plt.xticks(y_pos, x_axis)
plt.legend(["hardware","software"])
plt.semilogy()
plt.show()
#compare against labels
countRight = 0
for idx in range(len(labels)):
if labels[idx] == result_W1A1[idx]:
countRight += 1
accuracyW1A1 = countRight*100/len(labels)
countRight = 0
for idx in range(len(labels)):
if labels[idx] == result_W1A2[idx]:
countRight += 1
accuracyW1A2 = countRight*100/len(labels)
print("Accuracy W1A1: ", accuracyW1A1)
print("Accuracy W1A2: ", accuracyW1A2)
from pynq import Xlnk
xlnk = Xlnk()
xlnk.xlnk_reset()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. LFC and MNIST
Step2: 2. Hardware Inference
Step3: Case 2
Step4: 3. Software accelerated inference
Step5: Start inference
Step6: As it can be seen, pure software runtime is much slower.
Step7: Accuracy
Step8: 5. Reset the device
|
10,611
|
<ASSISTANT_TASK:>
Python Code:
%run setup.ipynb
%matplotlib inline
# load in selected missense variants
tbl_variants_selected = etl.frompickle('../data/tbl_variants_missense_selected.pkl')
tbl_variants_selected.nrows()
# load in haplotypes
callset_haps = np.load('../data/haps_phase1.npz')
haps = allel.HaplotypeArray(callset_haps['haplotypes'])
pos = allel.SortedIndex(callset_haps['POS'])
pos.shape, haps.shape
def lewontin_d_prime(h, i, j, a=1, b=1):
    """Compute LD between a pair of alleles.

    Parameters
    ----------
    h : array
        Haplotype array.
    i : int
        First variant index.
    j : int
        Second variant index.
    a : int
        First variant allele.
    b : int
        Second variant allele.

    Returns
    -------
    ld : float
    """
    # setup
h = allel.HaplotypeArray(h)
n_a = n_b = 0 # allele counts
n_ab = 0 # haplotype counts
n = 0 # allele number (i.e., number of calls)
# iterate over haplotypes, counting alleles and haplotypes
for k in range(h.n_haplotypes):
# access alleles
allele_ik = h[i, k]
allele_jk = h[j, k]
# only count if allele non-missing at both sites
if allele_ik < 0 or allele_jk < 0:
continue
# accumulate
if allele_ik == a:
n_a += 1
if allele_jk == b:
n_b += 1
if allele_ik == a and allele_jk == b:
n_ab += 1
n += 1
# log('D_prime counts:', 'i', i, 'j', j, 'a', a, 'b', b, 'n', n, 'n_a', n_a, 'n_b', n_b)
# bail out if no data or either allele is absent or fixed
if n == 0 or n_a == 0 or n_b == 0 or n == n_a or n == n_b:
return None
# N.B., compute D prime using counts rather than frequencies to avoid floating-point errors
# N.B., preserve the sign of D prime to retain information about linkage versus repulsion
# compute coefficient of linkage disequilibrium * n**2
D_ab = (n * n_ab) - (n_a * n_b)
# compute normalisation coefficient * n**2
if D_ab >= 0:
D_max = min(n_a * (n - n_b), (n - n_a) * n_b)
else:
D_max = min(n_a * n_b, (n - n_a) * (n - n_b))
# compute D prime
D_prime = D_ab / D_max
# log('D_prime', D_prime, i, j, a, b, n, n_a, n_b, D_ab, D_max)
# if np.isnan(D_prime):
# log('nan')
# log(D_prime, i, j, a, b, n, n_a, n_b, D_ab, D_max)
return D_prime
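The counting logic above can be checked on paper with a toy pure-Python version that drops the scikit-allel dependency (the haplotype lists below are illustrative, not from the phase 1 call set):

```python
def d_prime(hap_i, hap_j, a=1, b=1):
    # hap_i, hap_j: allele calls at two sites, one entry per haplotype; -1 = missing.
    pairs = [(x, y) for x, y in zip(hap_i, hap_j) if x >= 0 and y >= 0]
    n = len(pairs)
    n_a = sum(x == a for x, _ in pairs)
    n_b = sum(y == b for _, y in pairs)
    n_ab = sum(x == a and y == b for x, y in pairs)
    if n == 0 or n_a in (0, n) or n_b in (0, n):
        return None
    D_ab = n * n_ab - n_a * n_b
    if D_ab >= 0:
        D_max = min(n_a * (n - n_b), (n - n_a) * n_b)
    else:
        D_max = min(n_a * n_b, (n - n_a) * (n - n_b))
    return D_ab / D_max

print(d_prime([1, 1, 0, 0], [1, 1, 0, 0]))  # 1.0  (perfect linkage)
print(d_prime([1, 1, 0, 0], [0, 0, 1, 1]))  # -1.0 (perfect repulsion)
```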
pos_selected = allel.SortedIndex(sorted(tbl_variants_selected.values('POS').set()))
pos_selected
tbl_variants_selected
pos_selected.shape
loc_selected = pos.locate_keys(pos_selected)
np.count_nonzero(loc_selected)
haps_selected = haps[loc_selected]
haps_selected
ac = haps_selected.count_alleles()
ac.displayall()
def compute_allele_af(ax=None):
global allele_af
recs = list(tbl_variants_selected.records())
n = len(recs)
allele_af = np.zeros(n, dtype='f8')
for i in range(n):
i_pos = recs[i].POS
i_allele = recs[i].ALTIX + 1
i_vidx = pos_selected.locate_key(i_pos)
# log('row', i, i_vidx, i_pos, i_allele)
x = ac[i_vidx, i_allele] * 100 / haps_selected.shape[1]
allele_af[i] = x
compute_allele_af()
def compute_ld():
global ld
recs = list(tbl_variants_selected.records())
n = len(recs)
ld = np.zeros((n, n), dtype='f8')
for i in range(n):
i_pos = recs[i].POS
i_allele = recs[i].ALTIX + 1
i_vidx = pos_selected.locate_key(i_pos)
# log('row', i, i_vidx, i_pos, i_allele)
for j in range(i+1, n):
j_pos = recs[j].POS
j_allele = recs[j].ALTIX + 1
j_vidx = pos_selected.locate_key(j_pos)
# log('col', j, j_vidx, j_pos, j_allele)
v = lewontin_d_prime(haps_selected, i_vidx, j_vidx, i_allele, j_allele)
# log('D_prime', v)
ld[i, j] = v
ld[j, i] = v
compute_ld()
ld[11]
def plot_allele_af(ax=None, **kwargs):
n = len(allele_af)
if ax is None:
fig, ax = plt.subplots(figsize=(7, 2))
left = np.arange(n) + 0.2
ax.bar(left, allele_af, align='edge', width=0.6, **kwargs)
ax.set_ylabel('Allele frequency (%)')
ax.set_xlim(0, n)
ax.set_xticks([])
ax.set_yticks(range(0, 60, 10))
ax.set_xticklabels([])
plot_allele_af()
def fig_pw_ld():
fig = plt.figure(figsize=(7, 7.3), dpi=120)
gs = mpl.gridspec.GridSpec(2, 2, height_ratios=[1.3, 6], width_ratios=[7, .5])
# sns.despine(ax=ax, offset=5)
#sns.heatmap(ld, vmin=-1, vmax=1, center=0, square=True, ax=ax, cmap='Blues', cbar_kws=dict(ticks=[-1, -.5, 0, .5, 1]))
ax = fig.add_subplot(gs[0, 0])
sns.despine(ax=ax)
plot_allele_af(ax, color='k')
ax = fig.add_subplot(gs[1, 0])
im = ax.pcolormesh(ld, vmin=-1, vmax=1, cmap='Blues', shading='flat', edgecolors='gray', linewidths=.5, antialiased=True)
labels = ['%s:%s>%s %s' % (rec.POS, rec.REF, rec.ALT, rec['AGAP004707-RA'].rjust(6))
for rec in tbl_variants_selected.records()]
# ax.invert_yaxis()
ticks = np.arange(ld.shape[0]) + .5
ax.set_xticks(ticks)
ax.set_yticks(ticks)
ax.set_xticklabels(labels, rotation=90, ha='center', va='top', fontdict=dict(family='monospace'))
ax.set_yticklabels(labels, rotation=0, va='center', ha='right', fontdict=dict(family='monospace'));
ax.set_xlim(0, ld.shape[0])
ax.set_ylim(0, ld.shape[0])
ax.xaxis.set_tick_params(length=0)
ax.yaxis.set_tick_params(length=0)
for i in range(ld.shape[0] + 1):
ax.add_patch(plt.Rectangle((i-1, i-1), 1, 1, color='gray'))
cax = fig.add_subplot(gs[1, 1])
fig.colorbar(im, cax=cax, )
# cax.set_title("Linkage disequilibrium (D')", loc='left')
cax.set_ylabel("Linkage disequilibrium (D')", va='top')
fig.tight_layout(pad=0.1)
fig.savefig('../artwork/fig_ld.png', dpi=300, bbox_inches='tight')
fig_pw_ld()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pairwise LD for functional SNPs
Step2: Compute and plot allele frequencies
Step3: Compute and plot pairwise LD
|
10,612
|
<ASSISTANT_TASK:>
Python Code:
# There are several libraries to install
#!pip3 install tensorflow numpy matplotlib scikit-image
%matplotlib inline
import itertools
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import skimage
from skimage import io, transform
from pylab import rcParams
rcParams['figure.figsize'] = 10, 7
# User Parameters
# The train image will be scaled to a square of dimensions `train_size x train_size`
train_size = 32
# When generating the image, the network will generate for an image of
# size `test_size x test_size`
test_size = 2048
# Path to load the image you want upscaled
image_path = '../img/colors.jpg'
if not image_path:
print('Please specify an image for training the network')
else:
image = transform.resize(io.imread(image_path), (train_size, train_size))
# Just a quick line to get rid of the alpha channel if it exists
# (e.g. for transparent png files)
image = image if len(image.shape) < 3 or image.shape[2] == 3 else image[:,:,:3]
io.imshow(image)
X = tf.placeholder('float32', (None, 2))
def model(X, w):
    h1 = tf.nn.tanh(tf.matmul(X, w['h1']))
    h2 = tf.nn.tanh(tf.matmul(h1, w['h2']))
    h3 = tf.nn.tanh(tf.matmul(h2, w['h3']))
    h4 = tf.nn.tanh(tf.matmul(h3, w['h4']))
    h5 = tf.nn.tanh(tf.matmul(h4, w['h5']))
    h6 = tf.nn.tanh(tf.matmul(h5, w['h6']))
    h7 = tf.nn.tanh(tf.matmul(h6, w['h7']))
    h8 = tf.nn.tanh(tf.matmul(h7, w['h8']))
    return tf.nn.sigmoid(tf.matmul(h8, w['out']))
def init_weights(shape):
return tf.Variable(tf.truncated_normal(shape, stddev=0.1))
# (None, None) refers to (batch_size, n_colors)
Y = tf.placeholder("float32", (None, None))
w = {
'h1': init_weights([2, 20]),
'h2': init_weights([20, 20]),
'h3': init_weights([20, 20]),
'h4': init_weights([20, 20]),
'h5': init_weights([20, 20]),
'h6': init_weights([20, 20]),
'h7': init_weights([20, 20]),
'h8': init_weights([20, 20]),
'out': init_weights([20, 3]),
}
out = model(X, w)
cost = tf.reduce_mean(tf.squared_difference(out, Y))
train_op = tf.train.AdamOptimizer().minimize(cost)
# Feel free to adjust the number of epochs to your liking.
n_epochs = 5e+4
# Create function to generate a coordinate matrix (i.e. matrix of normalised coordinates)
# Pardon my lambda
generate_coord = lambda size: (
np.array(list(itertools.product(np.linspace(0,1,size),np.linspace(0,1,size)))).reshape(size ** 2, 2))
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Training data
x = generate_coord(train_size)
# Labels
reshaped_image = np.array(image.reshape(train_size ** 2, -1))
for epoch in range(int(n_epochs + 1)):
_, c = sess.run([train_op, cost], feed_dict={X: x, Y: reshaped_image})
# Print progress
if epoch % (n_epochs/10) == 0:
print('{:0.0%} \t Loss: {}'.format(epoch/n_epochs, c).expandtabs(7))
# Generate
new_image = sess.run(out, feed_dict={X: generate_coord(test_size)})
plt.imshow(new_image.reshape(test_size, test_size, -1))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Load image
Step3: Model
Step4: Architecture
Step5: Training
Step6: Evaluation
|
10,613
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('..')
import socnet as sn
sn.graph_width = 320
sn.graph_height = 180
g = sn.load_graph('4-dijkstra.gml', has_pos=True)
for n, m in g.edges():
g.edge[n][m]['label'] = g.edge[n][m]['c']
sn.show_graph(g, elab=True)
class Heap(object):
def __init__(self, g):
self.g = g
self.h = []
self.indices = {}
def _parent(self, i):
return (i - 1) // 2
def _left(self, i):
return 2 * i + 1
def _right(self, i):
return 2 * i + 2
def _key(self, i):
return self.g.node[self.h[i]]['d']
def _swap(self, i, j):
self.h[i], self.h[j] = self.h[j], self.h[i]
self.indices[self.h[i]] = i
self.indices[self.h[j]] = j
def empty(self):
return len(self.h) == 0
def fix(self, n):
i = self.indices[n]
p = self._parent(i)
while i > 0 and self._key(p) > self._key(i):
self._swap(i, p)
i = p
p = self._parent(i)
def put(self, n):
self.indices[n] = len(self.h)
self.h.append(n)
self.fix(n)
def get(self):
n = self.h[0]
self._swap(0, len(self.h) - 1)
del self.h[-1]
del self.indices[n]
i = 0
while True:
l = self._left(i)
r = self._right(i)
c = i
if l < len(self.h) and self._key(l) < self._key(c):
c = l
if r < len(self.h) and self._key(r) < self._key(c):
c = r
if i == c:
break
self._swap(i, c)
i = c
return n
from math import inf, isinf
def snapshot(g, frames):
for n in g.nodes():
if isinf(g.node[n]['d']):
g.node[n]['label'] = '∞'
else:
g.node[n]['label'] = str(g.node[n]['d'])
frame = sn.generate_frame(g, nlab=True)
frames.append(frame)
red = (255, 0, 0)
blue = (0, 0, 255)
green = (0,255,0)
frames = []
sn.reset_node_colors(g)
sn.reset_edge_colors(g)
snapshot(g, frames)
h = Heap(g)
for n in g.nodes():
g.node[n]['d'] = inf
h.put(n)
s=0
g.node[s]['d'] = 0
h.fix(0)
while not h.empty():
n = h.get()
g.node[n]['color'] = green
snapshot (g, frames)
for m in g.successors(n):
g.edge[n][m]['color'] = red
d = g.node[n]['d'] + g.edge[n][m]['c']
snapshot(g, frames)
if g.node[m]['d'] > d:
g.node[m]['d'] = d
h.fix(m)
g.edge[n][m]['color'] = sn.edge_color
snapshot(g, frames)
g.node[n]['color'] = blue
snapshot(g, frames)
snapshot(g, frames)
sn.reset_edge_colors(g)
sn.reset_node_colors(g)
snapshot(g, frames)
sn.show_animation(frames)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, we configure the visual properties
Step2: Finally, we load and visualize a graph
Step3: Dijkstra's algorithm
Step4: Exercise
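As a reference point for the exercise, the same algorithm can be sketched standalone with Python's heapq on a small hypothetical graph (node labels and edge weights below are illustrative, not taken from the GML file):

```python
import heapq

def dijkstra(adj, source):
    # adj: {node: [(neighbor, weight), ...]}; returns shortest distances from source.
    dist = {n: float('inf') for n in adj}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, n = heapq.heappop(heap)
        if d > dist[n]:
            continue  # stale heap entry, already relaxed via a shorter path
        for m, w in adj[n]:
            if d + w < dist[m]:
                dist[m] = d + w
                heapq.heappush(heap, (dist[m], m))
    return dist

adj = {0: [(1, 4), (2, 1)], 1: [(3, 1)], 2: [(1, 2), (3, 5)], 3: []}
print(dijkstra(adj, 0))  # {0: 0, 1: 3, 2: 1, 3: 4}
```

Note this uses lazy deletion (pushing duplicates and skipping stale entries) instead of the decrease-key `fix` method of the Heap class above; both strategies yield the same distances.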
|
10,614
|
<ASSISTANT_TASK:>
Python Code:
t = (1,2.0,'Three')
t
t[0]
# Slicing
t[1:]
# Reversing a tuple
t[::-1]
t[0] = 10
t[3] = 4
t.append(4)
# Finding the index of the value: 2.0
t.index(2.0)
# Number of values in a tuple
t.count('Three')
t = ('hello', {'element': 'Oxygen', 'weight': 15.999}, 1234)
t
# accessing the 2nd value in the tuple.
t[1]
# accessing the values in the dictionary which is the 2nd element in our tuple.
t[1]['element']
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can slice a tuple like you do in Lists.
Step2: Remember that we are slicing the tuple for display purpose only. We cannot change a tuple.
Step3: So we get an error saying that tuples do not support item assignment.
Step4: Basic functions for tuples
Step5: A tuple value can be any python object, it can be a List, dictonary etc.
|
10,615
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.datasets import load_linnerud
linnerud = load_linnerud()
chinups = linnerud.data[:,0]
plt.hist( # complete
plt.hist( # complete
# complete
# complete
plt.hist(# complete
plt.hist(chinups, histtype = 'step')
# this is the code for the rug plot
plt.plot(chinups, np.zeros_like(chinups), '|', color='k', ms = 25, mew = 4)
# execute this cell
from sklearn.neighbors import KernelDensity
def kde_sklearn(data, grid, bandwidth = 1.0, **kwargs):
kde_skl = KernelDensity(bandwidth = bandwidth, **kwargs)
kde_skl.fit(data[:, np.newaxis])
log_pdf = kde_skl.score_samples(grid[:, np.newaxis]) # sklearn returns log(density)
return np.exp(log_pdf)
grid = # complete
PDFtophat = kde_sklearn( # complete
plt.plot( # complete
PDFtophat1 = # complete
# complete
# complete
# complete
PDFgaussian = # complete
PDFepanechnikov = # complete
x = np.arange(0, 6*np.pi, 0.1)
y = np.cos(x)
plt.plot(x,y, lw = 2)
plt.xlabel('X')
plt.ylabel('Y')
plt.xlim(0, 6*np.pi)
import seaborn as sns
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x,y, lw = 2)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_xlim(0, 6*np.pi)
sns.set_style( # complete
# complete
# default color palette
current_palette = sns.color_palette()
sns.palplot(current_palette)
# set palette to colorblind
sns.set_palette("colorblind")
current_palette = sns.color_palette()
sns.palplot(current_palette)
iris = sns.load_dataset("iris")
iris
# note - hist, kde, and rug all set to True, set to False to turn them off
with sns.axes_style("dark"):
sns.distplot(iris['petal_length'], bins=20, hist=True, kde=True, rug=True)
plt.scatter( # complete
with sns.axes_style("darkgrid"):
xexample = np.random.normal(loc = 0.2, scale = 1.1, size = 10000)
yexample = np.random.normal(loc = -0.1, scale = 0.9, size = 10000)
plt.scatter(xexample, yexample)
# hexbin w/ bins = "log" returns the log of counts/bin
# mincnt = 1 displays only hexpix with at least 1 source present
with sns.axes_style("darkgrid"):
plt.hexbin(xexample, yexample, bins = "log", cmap = "viridis", mincnt = 1)
plt.colorbar()
with sns.axes_style("darkgrid"):
sns.kdeplot(xexample, yexample,shade=False)
sns.jointplot(x=iris['petal_length'], y=iris['petal_width'])
sns.jointplot( # complete
sns.pairplot(iris[["sepal_length", "sepal_width", "petal_length", "petal_width"]])
sns.pairplot(iris, vars = ["sepal_length", "sepal_width", "petal_length", "petal_width"],
hue = "species", diag_kind = 'kde')
g = sns.PairGrid(iris, vars = ["sepal_length", "sepal_width", "petal_length", "petal_width"],
hue = "species", diag_sharey=False)
g.map_lower(sns.kdeplot)
g.map_upper(plt.scatter, edgecolor='white')
g.map_diag(sns.kdeplot, lw=3)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem 1) Density Estimation
Step2: Problem 1a
Step3: Already with this simple plot we see a problem - the choice of bin centers and number of bins suggest that there is a 0% probability that middle aged men can do 10 chinups. Intuitively this seems incorrect, so lets examine how the histogram changes if we change the number of bins or the bin centers.
Step4: These small changes significantly change the output PDF. With fewer bins we get something closer to a continuous distribution, while shifting the bin centers reduces the probability to zero at 9 chinups.
Step5: Ending the lie
Step6: Of course, even rug plots are not a perfect solution. Many of the chinup measurements are repeated, and those instances cannot be easily isolated above. One (slightly) better solution is to vary the transparency of the rug "whiskers" using alpha = 0.3 in the whiskers plot call. But this too is far from perfect.
Step7: Problem 1e
Step8: In this representation, each "block" has a height of 0.25. The bandwidth is too narrow to provide any overlap between the blocks. This choice of kernel and bandwidth produces an estimate that is essentially a histogram with a large number of bins. It gives no sense of continuity for the distribution. Now, we examine the difference (relative to histograms) upon changing the the width (i.e. kernel) of the blocks.
Step9: It turns out blocks are not an ideal representation for continuous data (see discussion on histograms above). Now we will explore the resulting PDF from other kernels.
Step10: So, what is the optimal choice of bandwidth and kernel? Unfortunately, there is no hard and fast rule, as every problem will likely have a different optimization. Typically, the choice of bandwidth is far more important than the choice of kernel. In the case where the PDF is likely to be gaussian (or close to gaussian), Silverman's rule of thumb can be used.
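Silverman's rule itself is simple enough to sketch directly; the helper below is hypothetical (scikit-learn's KernelDensity does not compute it for you) and assumes roughly gaussian data:

```python
import random
import statistics

def silverman_bandwidth(data):
    # Silverman's rule of thumb: h = 0.9 * min(std, IQR/1.34) * n**(-1/5)
    n = len(data)
    std = statistics.stdev(data)
    q1, _, q3 = statistics.quantiles(data, n=4)  # quartiles
    iqr = q3 - q1
    return 0.9 * min(std, iqr / 1.34) * n ** (-0.2)

random.seed(0)
sample = [random.gauss(0, 1) for _ in range(1000)]
print(silverman_bandwidth(sample))  # roughly 0.22 for a unit-normal sample of n=1000
```

The returned value could then be passed as the bandwidth argument of the kde_sklearn helper defined earlier.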
Step11: Seaborn
Step12: We can see that the plot has significantly changed styles.
Step13: The folks behind seaborn have thought a lot about color palettes, which is a good thing. Remember - the choice of color for plots is one of the most essential aspects of visualization. A poor choice of colors can easily mask interesting patterns or suggest structure that is not real. To learn more about what is available, see the seaborn color tutorial.
Step14: which we will now change to colorblind, which is clearer to those that are colorblind.
Step15: Now that we have covered the basics of seaborn (and the above examples truly only scratch the surface of what is possible), we will explore the power of seaborn for higher dimension data sets. We will load the famous Iris data set, which measures 4 different features of 3 different types of Iris flowers. There are 150 different flowers in the data set.
Step16: Now that we have a sense of the data structure, it is useful to examine the distribution of features. Above, we went to great pains to produce histograms, KDEs, and rug plots. seaborn handles all of that effortlessly with the distplot function.
Step17: Of course, this data set lives in a 4D space, so plotting more than univariate distributions is important (and as we will see tomorrow this is particularly useful for visualizing classification results). Fortunately, seaborn makes it very easy to produce handy summary plots.
Step18: Of course, when there are many many data points, scatter plots become difficult to interpret. As in the example below
Step19: Here, we see that there are many points, clustered about the origin, but we have no sense of the underlying density of the distribution. 2D histograms, such as plt.hist2d(), can alleviate this problem. I prefer to use plt.hexbin() which is a little easier on the eyes (though note - these histograms are just as subject to the same issues discussed above).
Step20: While the above plot provides a significant improvement over the scatter plot by providing a better sense of the density near the center of the distribution, the binedge effects are clearly present. An even better solution, like before, is a density estimate, which is easily built into seaborn via the kdeplot function.
Step21: This plot is much more appealing (and informative) than the previous two. For the first time we can clearly see that the distribution is not actually centered on the origin. Now we will move back to the Iris data set.
Step22: But! Histograms and scatter plots can be problematic as we have discussed many times before.
Step23: That is much nicer than what was presented above. However - we still have a problem in that our data live in 4D, but we are (mostly) limited to 2D projections of that data. One way around this is via the seaborn version of a pairplot, which plots the distribution of every variable in the data set against each other. (Here is where the integration with pandas DataFrames becomes so powerful.)
Step24: For data sets where we have classification labels, we can even color the various points using the hue option, and produce KDEs along the diagonal with diag_kind = 'kde'.
Step25: Even better - there is an option to create a PairGrid which allows fine tuned control of the data as displayed above, below, and along the diagonal. In this way it becomes possible to avoid having symmetric redundancy, which is not all that informative. In the example below, we will show scatter plots and contour plots simultaneously.
|
10,616
|
<ASSISTANT_TASK:>
Python Code:
# Set up Spark Context
from pyspark import SparkContext, SparkConf
SparkContext.setSystemProperty('spark.executor.memory', '4g')
conf = SparkConf()
conf.set('spark.sql.autoBroadcastJoinThreshold', 200*1024*1024) # 200MB for map-side joins
conf.set('spark.executor.instances', 12)
sc = SparkContext('yarn-client', 'Spark-demo', conf=conf)
# Setup HiveContext
from pyspark.sql import HiveContext, Row
hc = HiveContext(sc)
hc.sql("use demo")
hc.sql("DESCRIBE crimes").show()
crimes = hc.table("crimes")
crimes.limit(5).toPandas()
crimes.select("resolution").distinct().toPandas()
total = crimes.count()
num_resolved = crimes.filter(crimes.resolution != 'NONE').count()
print str(total) + " crimes total, out of which " + str(num_resolved) + " were resolved"
c1 = crimes.select(crimes.longitude.cast("float").alias("long"),
crimes.latitude.cast("float").alias("lat"))
c1.describe().toPandas()
c2 = c1.filter('lat < 37 or lat > 38')
print c2.count()
c2.head(3)
hc.sql("add jar /home/jupyter/notebooks/jars/guava-11.0.2.jar")
hc.sql("add jar /home/jupyter/notebooks/jars/spatial-sdk-json.jar")
hc.sql("add jar /home/jupyter/notebooks/jars/esri-geometry-api.jar")
hc.sql("add jar /home/jupyter/notebooks/jars/spatial-sdk-hive.jar")
hc.sql("create temporary function ST_Contains as 'com.esri.hadoop.hive.ST_Contains'")
hc.sql("create temporary function ST_Point as 'com.esri.hadoop.hive.ST_Point'")
cf = hc.sql("""
SELECT date_str, time, longitude, latitude, resolution, category, district, dayofweek, description
FROM crimes
WHERE longitude < -121.0 and latitude < 38.0
""").repartition(50)
cf.registerTempTable("cf")
crimes2 = hc.sql("""
SELECT date_str, time, dayofweek, category, district, description, longitude, latitude,
       if (resolution == 'NONE', 0.0, 1.0) as resolved,
       neighborho as neighborhood
FROM sf_neighborhoods JOIN cf
WHERE ST_Contains(sf_neighborhoods.shape, ST_Point(cf.longitude, cf.latitude))
""").cache()
crimes2.registerTempTable("crimes2")
crimes2.limit(5).toPandas()
ngrp = crimes2.groupBy('neighborhood')
ngrp_pd = ngrp.avg('resolved').toPandas()
ngrp_pd['count'] = ngrp.count().toPandas()['count']
ngrp_pd.columns = ['neighborhood', '% resolved', 'count']
data = ngrp_pd.sort(columns = '% resolved', ascending=False)
print data.set_index('neighborhood').head(10)
import matplotlib
matplotlib.style.use('ggplot')
data.iloc[:10].plot('neighborhood', '% resolved', kind='bar', legend=False, figsize=(12,5), fontsize=15)
from IPython.display import HTML
import folium
map_width=1000
map_height=600
sf_lat = 37.77
sf_long = -122.4
def inline_map(m, width=map_width, height=map_height):
m.create_map()
    srcdoc = m.HTML.replace('"', '&quot;')
embed = HTML('<iframe srcdoc="{}" '
'style="width: {}px; height: {}px; '
'border: none"></iframe>'.format(srcdoc, width, height))
return embed
map_sf = folium.Map(location=[sf_lat, sf_long], zoom_start=12, width=map_width, height=map_height)
map_sf.geo_json(geo_path='data/sfn.geojson', data=ngrp_pd,
columns=['neighborhood', '% resolved'],
key_on='feature.properties.neighborho',
threshold_scale=[0, 0.3, 0.4, 0.5, 1.0],
fill_color='OrRd', fill_opacity=0.6, line_opacity=0.6,
legend_name='P(resolved)')
inline_map(map_sf)
import pandas as pd
crimes3 = hc.sql("""
SELECT cast(SUBSTR(date_str,7,4) as int) as year,
       cast(SUBSTR(date_str,1,2) as int) as month,
       cast(SUBSTR(time,1,2) as int) as hour,
       category, district, dayofweek, description, neighborhood, longitude, latitude, resolved
FROM crimes2
""").cache()
crimes3.limit(5).toPandas()
trainData = crimes3.filter(crimes3.year>=2011).filter(crimes3.year<=2013).cache()
testData = crimes3.filter(crimes3.year==2014).cache()
print "training set has " + str(trainData.count()) + " instances"
print "test set has " + str(testData.count()) + " instances"
def eval_metrics(lap):
tp = float(len(lap[(lap['label']==1) & (lap['prediction']==1)]))
tn = float(len(lap[(lap['label']==0) & (lap['prediction']==0)]))
fp = float(len(lap[(lap['label']==0) & (lap['prediction']==1)]))
fn = float(len(lap[(lap['label']==1) & (lap['prediction']==0)]))
precision = tp / (tp+fp)
recall = tp / (tp+fn)
accuracy = (tp+tn) / (tp+tn+fp+fn)
return {'precision': precision, 'recall': recall, 'accuracy': accuracy}
from IPython.display import Image
Image(filename='pipeline.png')
from pyspark.ml.feature import StringIndexer, VectorAssembler, Tokenizer, HashingTF
from pyspark.ml import Pipeline
inx1 = StringIndexer(inputCol="category", outputCol="cat-inx")
inx2 = StringIndexer(inputCol="dayofweek", outputCol="dow-inx")
inx3 = StringIndexer(inputCol="district", outputCol="dis-inx")
inx4 = StringIndexer(inputCol="neighborhood", outputCol="ngh-inx")
inx5 = StringIndexer(inputCol="resolved", outputCol="label")
parser = Tokenizer(inputCol="description", outputCol="words")
hashingTF = HashingTF(numFeatures=50, inputCol="words", outputCol="hash-inx")
vecAssembler = VectorAssembler(inputCols =["month", "hour", "cat-inx", "dow-inx", "dis-inx", "ngh-inx", "hash-inx"],
outputCol="features")
from pyspark.ml.classification import LogisticRegression
lr = LogisticRegression(maxIter=20, regParam=0.1, labelCol="label")
pipeline_lr = Pipeline(stages=[inx1, inx2, inx3, inx4, inx5, parser, hashingTF, vecAssembler, lr])
model_lr = pipeline_lr.fit(trainData)
results_lr = model_lr.transform(testData)
m = eval_metrics(results_lr.select("label", "prediction").toPandas())
print "precision = " + str(m['precision']) + ", recall = " + str(m['recall']) + ", accuracy = " + str(m['accuracy'])
from pyspark.ml.classification import RandomForestClassifier
rf = RandomForestClassifier(numTrees=250, maxDepth=5, maxBins=50, seed=42)
pipeline_rf = Pipeline(stages=[inx1, inx2, inx3, inx4, inx5, parser, hashingTF, vecAssembler, rf])
model_rf = pipeline_rf.fit(trainData)
results_rf = model_rf.transform(testData)
m = eval_metrics(results_rf.select("label", "prediction").toPandas())
print "precision = " + str(m['precision']) + ", recall = " + str(m['recall']) + ", accuracy = " + str(m['accuracy'])
hc.sql("add jar /home/jupyter/notebooks/jars/guava-11.0.2.jar")
hc.sql("add jar /home/jupyter/notebooks/jars/esri-geometry-api.jar")
hc.sql("add jar /home/jupyter/notebooks/jars/spatial-sdk-hive.jar")
hc.sql("add jar /home/jupyter/notebooks/jars/spatial-sdk-json.jar")
hc.sql("create temporary function ST_Centroid as 'com.esri.hadoop.hive.ST_Centroid'")
hc.sql("create temporary function ST_X as 'com.esri.hadoop.hive.ST_X'")
hc.sql("create temporary function ST_Y as 'com.esri.hadoop.hive.ST_Y'")
df_centroid = hc.sql("""
SELECT neighborho as neighborhood,
       ST_X(ST_Centroid(sf_neighborhoods.shape)) as cent_longitude,
       ST_Y(ST_Centroid(sf_neighborhoods.shape)) as cent_latitude
FROM sf_neighborhoods
""")
df_centroid.cache()
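For reference, `ST_Centroid` on a simple polygon amounts to the standard shoelace-based centroid. A dependency-free sketch, independent of the ESRI Hive UDFs (the square with side 2 is an invented example):

```python
# Centroid of a simple (non-self-intersecting) polygon via the
# shoelace formula: Cx = sum((x0+x1)*cross) / (6*A), likewise Cy.
def polygon_centroid(pts):
    a = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6 * a), cy / (6 * a)

c = polygon_centroid([(0, 0), (2, 0), (2, 2), (0, 2)])
print(c)  # (1.0, 1.0) for the 2x2 square
```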
df = results_rf.select("neighborhood", "label", "prediction").toPandas()
map_sf = folium.Map(location=[sf_lat, sf_long], zoom_start=12, width=map_width, height=map_height)
n_list = results_rf.select("neighborhood").distinct().toPandas()['neighborhood'].tolist() # list of neighborhoods
for n in df_centroid.collect():
if n.neighborhood in n_list:
m = eval_metrics(df[df['neighborhood']==n.neighborhood])
map_sf.simple_marker([n.cent_latitude, n.cent_longitude], \
popup = n.neighborhood + ": accuracy = %.2f" % m['accuracy'])
inline_map(map_sf)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: And now we create a HiveContext to enable Spark to access data from HIVE
Step2: Let's take a look at the dataset - first 5 rows
Step3: Exploring the Dataset
Step4: Let's define a crime as 'resolved' if it has any string except "NONE" in the resolution column.
Step5: Let's look at the longitude/latitude values in more detail. Spark provides the describe() function to see this some basic statistics of these columns
Step6: Notice that the max values for longitude (-120.5) and latitude (90.0) seem strange. Those are not inside the SF area. Let's see how many bad values like this exist in the dataset
Step9: Seems like this is a data quality issue where some data points just have a fixed (bad) value of -120.5, 90.
Step10: Question
Step11: And as a bar chart
Step12: Using the Python Folium package I draw an interactive map of San Francisco, and color-code each neighborhood with the percent of resolved crimes
Step14: Preparing The Feature Matrix
Step15: For this demo, I create a training set for my model from the data in years 2011-2013 and a testing/validation set from the data in year 2014.
Step16: For convenience, I define a function to compute our classification metrics (we will use it later)
Step17: Next, I use Spark-ML to create a pipeline of transformation to generate the feature vector for each crime event
Step18: Predictive Models
Step19: Similarly, create the same pipeline with the Random Forest classifier
Step21: Measure Model Accuracy Per Neighborhood
Step22: Now I draw a map, this time showing a marker with the accuracy for each neighborhood, using the results from the Random Forest model.
|
10,617
|
<ASSISTANT_TASK:>
Python Code:
!!gcc -c sample.c
!!gcc -shared -o sample.dll sample.o -Wl,--out-implib,libsample.a
import ctypes
import ctypes.util
ctypes.util.find_library('sample')
_mod = ctypes.cdll.LoadLibrary('sample')
in_mandel = _mod.in_mandel
in_mandel.argtypes = (ctypes.c_double, ctypes.c_double, ctypes.c_int)
in_mandel.restype = ctypes.c_int
in_mandel(1., 4., 1)
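The same `argtypes`/`restype` wrapping pattern works for any shared library. As a self-contained sketch that does not need the `sample` DLL built above (assuming a POSIX-style C runtime discoverable via `find_library('c')`), here it is applied to the C library's `abs`:

```python
import ctypes
import ctypes.util

# Load the platform C runtime and declare abs()'s signature,
# mirroring the in_mandel wrapping above.
libc = ctypes.CDLL(ctypes.util.find_library("c"))
c_abs = libc.abs
c_abs.argtypes = (ctypes.c_int,)
c_abs.restype = ctypes.c_int
print(c_abs(-5))  # 5
```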
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Wrapping into py 3.4
Step2: Load the dll
Step3: Define a wrapper to the dll function
Step4: Use it
|
10,618
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'inm-cm4-8', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
10,619
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'cnrm-cm6-1', 'toplevel')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
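A minimal illustration of how one of the TODO cells above gets completed. The `_DocSketch` class below is a hypothetical stand-in for pyesdoc's `DOC` object (not the real API), used only to show the `set_id` / `set_value` fill-in pattern; the property id and the choices "irradiance" and "proton" come from the solar-provision cell above.

```python
# Illustrative sketch only: pyesdoc's DOC object is not importable here, so a
# minimal stand-in records (property_id, value) pairs the way the notebook does.
class _DocSketch:
    def __init__(self):
        self.values = {}          # property id -> list of recorded values
        self._current_id = None
    def set_id(self, property_id):
        self._current_id = property_id
    def set_value(self, value):
        self.values.setdefault(self._current_id, []).append(value)

doc = _DocSketch()
# A completed cell replaces the TODO with explicit value(s) drawn from the
# "Valid Choices" listed in the template comments:
doc.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
doc.set_value("irradiance")
doc.set_value("proton")  # multi-valued ("PROPERTY VALUE(S)") cells may record several choices
```

In the real notebook the same two calls are made on the pre-built `DOC` object, which validates the value against the controlled vocabulary before publication.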
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Flux Correction
Step7: 3. Key Properties --> Genealogy
Step8: 3.2. CMIP3 Parent
Step9: 3.3. CMIP5 Parent
Step10: 3.4. Previous Name
Step11: 4. Key Properties --> Software Properties
Step12: 4.2. Code Version
Step13: 4.3. Code Languages
Step14: 4.4. Components Structure
Step15: 4.5. Coupler
Step16: 5. Key Properties --> Coupling
Step17: 5.2. Atmosphere Double Flux
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Step19: 5.4. Atmosphere Relative Winds
Step20: 6. Key Properties --> Tuning Applied
Step21: 6.2. Global Mean Metrics Used
Step22: 6.3. Regional Metrics Used
Step23: 6.4. Trend Metrics Used
Step24: 6.5. Energy Balance
Step25: 6.6. Fresh Water Balance
Step26: 7. Key Properties --> Conservation --> Heat
Step27: 7.2. Atmos Ocean Interface
Step28: 7.3. Atmos Land Interface
Step29: 7.4. Atmos Sea-ice Interface
Step30: 7.5. Ocean Seaice Interface
Step31: 7.6. Land Ocean Interface
Step32: 8. Key Properties --> Conservation --> Fresh Water
Step33: 8.2. Atmos Ocean Interface
Step34: 8.3. Atmos Land Interface
Step35: 8.4. Atmos Sea-ice Interface
Step36: 8.5. Ocean Seaice Interface
Step37: 8.6. Runoff
Step38: 8.7. Iceberg Calving
Step39: 8.8. Endoreic Basins
Step40: 8.9. Snow Accumulation
Step41: 9. Key Properties --> Conservation --> Salt
Step42: 10. Key Properties --> Conservation --> Momentum
Step43: 11. Radiative Forcings
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Step45: 12.2. Additional Information
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Step47: 13.2. Additional Information
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Step49: 14.2. Additional Information
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Step51: 15.2. Additional Information
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Step53: 16.2. Additional Information
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Step55: 17.2. Equivalence Concentration
Step56: 17.3. Additional Information
Step57: 18. Radiative Forcings --> Aerosols --> SO4
Step58: 18.2. Additional Information
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Step60: 19.2. Additional Information
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Step62: 20.2. Additional Information
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Step64: 21.2. Additional Information
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Step66: 22.2. Aerosol Effect On Ice Clouds
Step67: 22.3. Additional Information
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Step69: 23.2. Aerosol Effect On Ice Clouds
Step70: 23.3. RFaci From Sulfate Only
Step71: 23.4. Additional Information
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Step73: 24.2. Additional Information
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Step77: 25.4. Additional Information
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Step81: 26.4. Additional Information
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Step83: 27.2. Additional Information
Step84: 28. Radiative Forcings --> Other --> Land Use
Step85: 28.2. Crop Change Only
Step86: 28.3. Additional Information
Step87: 29. Radiative Forcings --> Other --> Solar
Step88: 29.2. Additional Information
|
10,620
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import *
Image('three_receiver_cal/pics/macgyver.jpg', width='50%')
import skrf as rf
%matplotlib inline
from pylab import *
rf.stylely()
from skrf.calibration import TwoPortOnePath
from skrf.media import RectangularWaveguide
from skrf import two_port_reflect as tpr
from skrf import mil
raw = rf.read_all_networks('three_receiver_cal/data/')
# pull frequency information from measurements
frequency = raw['short'].frequency
# the media object
wg = RectangularWaveguide(frequency=frequency, a=120*mil, z0=50)
# list of 'ideal' responses of the calibration standards
ideals = [wg.short(nports=2),
          tpr(wg.delay_short(90, 'deg'), wg.match()),
wg.match(nports=2),
wg.thru()]
# corresponding measurements to the 'ideals'
measured = [raw['short'],
raw['quarter wave delay short'],
raw['load'],
raw['thru']]
# the Calibration object
cal = TwoPortOnePath(measured=measured, ideals=ideals)
Image('three_receiver_cal/pics/symmetric DUT.jpg', width='75%')
dutf = raw['attenuator (forward)']
dutr = raw['attenuator (reverse)']
# Note: the correction algorithm differs depending on whether a tuple of
# networks or a single network is passed to apply_cal
corrected_full = cal.apply_cal((dutf, dutr))
corrected_fakeflip = cal.apply_cal((dutf,dutf))
corrected_partial = cal.apply_cal(dutf)
f, ax = subplots(2,2, figsize=(8,8))
for m in [0,1]:
for n in [0,1]:
ax_ = ax[m,n]
ax_.set_title('$S_{%i%i}$'%(m+1,n+1))
corrected_full.plot_s_db(m,n, label='Full Correction',ax=ax_ )
corrected_fakeflip.plot_s_db(m,n, label='Pseudo-full Correction', ax=ax_)
if n==0:
corrected_partial.plot_s_db(m,n, label='Partial Correction', ax=ax_)
tight_layout()
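The "pseudo-full" correction above stands in the forward measurement for the reverse one, which is only valid when the DUT is symmetric. A standalone sketch of that assumption (separate from the measurement script; the S-parameter values here are made up, not measured):

```python
import numpy as np

# Flipping a 2-port swaps ports 1 and 2, i.e. conjugation by the
# port-swap permutation: S_flipped = P @ S @ P.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])           # port-swap permutation
S = np.array([[0.1, 0.7],
              [0.7, 0.1]])           # symmetric DUT (e.g. a matched attenuator)
S_flipped = P @ S @ P

# For a symmetric DUT (S11 == S22, S12 == S21) the flip is a no-op,
# so the forward measurement can stand in for the reverse one.
assert np.allclose(S, S_flipped)
```

When the DUT is not symmetric, this shortcut mis-corrects S22 and S12, which is why the full correction with a physically flipped measurement is preferred.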
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example
Step2: Correction Options
Step3: Full Correction (TwoPortOnePath)
|
10,621
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nuist', 'sandbox-2', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
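The `DOC.set_id` / `DOC.set_value` calls above assume a recorder object supplied by the ES-DOC notebook tooling. As a hypothetical sketch of that pattern (`MiniDoc` is illustrative only, not the real pyesdoc API):

```python
# Hypothetical stand-in for the notebook's DOC recorder (the real object is
# provided by the ES-DOC/CMIP6 tooling; names here are illustrative only).
class MiniDoc:
    def __init__(self):
        self.values = {}
        self._current_id = None

    def set_id(self, property_id):
        # Select which property subsequent set_value() calls will fill in.
        self._current_id = property_id

    def set_value(self, value):
        # Record one value; repeated calls accumulate (some properties take lists).
        self.values.setdefault(self._current_id, []).append(value)

doc = MiniDoc()
doc.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
doc.set_value("Linear")
print(doc.values)
```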
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
10,622
|
<ASSISTANT_TASK:>
Python Code:
import requests
url = 'http://www.github.com/ibm'
response = requests.get(url)
print(response.status_code)
import requests
url = 'http://www.github.com/ibm'
response = requests.get(url)
print(response.status_code)
if response.status_code == 200:
print('Response status - OK ')
print(response.headers)
else:
print('Error making the HTTP request ',response.status_code )
import requests
url = 'http://www.github.com/ibm'
response = requests.get(url)
print(response.status_code)
if response.status_code == 200:
print('Response status - OK ')
print(response.text)
else:
print('Error making the HTTP request ',response.status_code )
import requests
url = "https://api.github.com/orgs/ibm"
response = requests.get(url)
if response.status_code == 200:
print('Response status - OK ')
print(response.headers['X-RateLimit-Remaining'])
else:
print('Error making the HTTP request ',response.status_code )
import requests
def GithubAPI(url):
"""Make an HTTP request for the given URL and send the response body
back to the calling function."""
# Use basic authentication
response = requests.get(url, auth=("ENTER USER ID","ENTER PASSWORD"))
if response.status_code == 200:
print('Response status - OK ')
print(response.headers['X-RateLimit-Remaining'])
return response.text
else:
print('Error making the HTTP request ',response.status_code )
return None
def main():
url = "https://api.github.com/orgs/ibm"
txt_response = GithubAPI(url)
if txt_response:
print(txt_response)
main()
import requests
import json
def GithubAPI(url):
"""Make an HTTP request for the given URL and send the response body
back to the calling function."""
response = requests.get(url)
if response.status_code == 200:
print('Response status - OK ')
return response.json()
else:
print('Error making the HTTP request ',response.status_code )
return None
def main():
url = "https://api.github.com/orgs/ibm"
txt_response = GithubAPI(url)
if txt_response:
print('The number of public repos are : ',txt_response['public_repos'])
main()
import requests
import json
def GithubAPI(url):
"""Make an HTTP request for the given URL and send the response body
back to the calling function."""
response = requests.get(url, auth=("ENTER USER ID","ENTER PASSWORD"))
if response.status_code == 200:
print('Response status - OK ')
return response.json()
else:
print('Error making the HTTP request ',response.status_code )
return None
def main():
url = "https://api.github.com/orgs/ibm"
response_json = GithubAPI(url)
if response_json:
print('The number of public repos are : ',response_json['public_repos'])
repo_url = response_json['repos_url']
repo_response = GithubAPI(repo_url)
for repo in repo_response:
print([repo['id'],repo['name']])
main()
import requests
import json
def GithubAPI(url):
"""Make an HTTP request for the given URL and send the response body
back to the calling function."""
response = requests.get(url, auth = ("ENTER USER ID","ENTER PASSWORD"))
if response.status_code == 200:
print('Response status - OK ')
return response.json()
else:
print('Error making the HTTP request ',response.status_code )
return None
def main():
url = "https://api.github.com/orgs/ibm"
response_json = GithubAPI(url)
if response_json:
print('The number of public repos are : ',response_json['public_repos'])
repo_url = response_json['repos_url']
total_no = response_json['public_repos']
per_page = 100
page_count = 1
while (page_count - 1) * per_page < total_no:
#Display 100 repos per page and traverse the pages until we get the last page
page_url = repo_url+"?per_page="+str(per_page)+"&page="+str(page_count)
print(page_url)
repo_response = GithubAPI(page_url)
# Increment page number
page_count = page_count+1
for repo in repo_response:
print([repo['id'],repo['name']])
main()
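An alternative to computing page counts by hand: GitHub returns pagination URLs in the `Link` response header, which `requests` exposes in parsed form as `response.links`. A rough sketch of parsing such a header manually — the sample header string below is illustrative, not a recorded response:

```python
def parse_link_header(link_header):
    """Return a dict mapping rel ("next", "last", ...) to its URL."""
    links = {}
    for part in link_header.split(","):
        url_part, _, rel_part = part.partition(";")
        # URLs arrive wrapped in angle brackets: <https://...>
        url = url_part.strip().strip("<>")
        rel = rel_part.strip().replace('rel="', "").rstrip('"')
        links[rel] = url
    return links

# Illustrative header in the format GitHub returns:
header = ('<https://api.github.com/orgs/ibm/repos?page=2>; rel="next", '
          '<https://api.github.com/orgs/ibm/repos?page=5>; rel="last"')
print(parse_link_header(header)["next"])
```

With a live response you would loop `url = response.links.get("next", {}).get("url")` until it is empty, instead of dividing the repo count by the page size.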
import csv
WRITE_CSV = "C:/Users/kmpoo/Dropbox/HEC/Teaching/Python for PhD Mar 2018/python4phd/Session 2/ipython/Repo_csv.csv"
with open(WRITE_CSV, 'at',encoding = 'utf-8', newline='') as csv_obj:
write = csv.writer(csv_obj) # Note it is csv.writer not reader
write.writerow(['REPO ID','REPO NAME'])
#Enter code here
import requests
import json
import csv
WRITE_CSV = "C:/Users/kmpoo/Dropbox/HEC/Teaching/Python for PhD Mar 2018/python4phd/Session 2/ipython/Repo_csv.csv"
def appendcsv(data_list):
with open(WRITE_CSV, 'at',encoding = 'utf-8', newline='') as csv_obj:
write = csv.writer(csv_obj) # Note it is csv.writer not reader
write.writerow(data_list)
def GithubAPI(url):
"""Make an HTTP request for the given URL and send the response body
back to the calling function."""
response = requests.get(url, auth = ("ENTER USER ID","ENTER PASSWORD"))
if response.status_code == 200:
print('Response status - OK ')
return response.json()
else:
print('Error making the HTTP request ',response.status_code )
return None
def main():
url = "https://api.github.com/orgs/ibm"
response_json = GithubAPI(url)
if response_json:
print('The number of public repos are : ',response_json['public_repos'])
repo_url = response_json['repos_url']
total_no = response_json['public_repos']
per_page = 100
page_count = 1
while (page_count - 1) * per_page < total_no:
#Display 100 repos per page and traverse the pages until we get the last page
page_url = repo_url+"?per_page="+str(per_page)+"&page="+str(page_count)
print(page_url)
repo_response = GithubAPI(page_url)
# Increment page number
page_count = page_count+1
for repo in repo_response:
print([repo['id'],repo['name']])
appendcsv([repo['id'],repo['name']])
main()
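A quick, self-contained check of append mode versus write mode (a temporary file is used here instead of the hard-coded path above): `'wt'` truncates the file on every open, so only the last write survives, while `'at'` keeps accumulating rows.

```python
import csv
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.csv")

for _ in range(2):  # 'wt' truncates on each open: only the last row survives
    with open(path, "wt", encoding="utf-8", newline="") as f:
        csv.writer(f).writerow(["row"])
with open(path, encoding="utf-8") as f:
    print(len(f.readlines()))  # -> 1

for _ in range(2):  # 'at' appends: rows accumulate
    with open(path, "at", encoding="utf-8", newline="") as f:
        csv.writer(f).writerow(["row"])
with open(path, encoding="utf-8") as f:
    print(len(f.readlines()))  # -> 3
```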
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get header information
Step2: Get the body Information
Step3: Using a Web API to Collect Data
Step5: Step 2
Step7: Step 3
Step9: Step 3
Step11: Step 4
Step12: 3. Write a CSV
Step14: What do you think will happen if we use 'wt' as mode instead of 'at'?
|
10,623
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import nsfg
import first
import analytic
import thinkstats2
import thinkplot
thinkplot.PrePlot(3)
for lam in [2.0, 1, 0.5]:
xs, ps = thinkstats2.RenderExpoCdf(lam, 0, 3.0, 50)
label = r'$\lambda=%g$' % lam
thinkplot.Plot(xs, ps, label=label)
thinkplot.Config(title='Exponential CDF', xlabel='x', ylabel='CDF',
loc='lower right')
df = analytic.ReadBabyBoom()
diffs = df.minutes.diff()
cdf = thinkstats2.Cdf(diffs, label='actual')
thinkplot.Cdf(cdf)
thinkplot.Config(xlabel='Time between births (minutes)', ylabel='CDF')
thinkplot.Cdf(cdf, complement=True)
thinkplot.Config(xlabel='Time between births (minutes)',
ylabel='CCDF', yscale='log', loc='upper right')
thinkplot.PrePlot(3)
mus = [1.0, 2.0, 3.0]
sigmas = [0.5, 0.4, 0.3]
for mu, sigma in zip(mus, sigmas):
xs, ps = thinkstats2.RenderNormalCdf(mu=mu, sigma=sigma,
low=-1.0, high=4.0)
label = r'$\mu=%g$, $\sigma=%g$' % (mu, sigma)
thinkplot.Plot(xs, ps, label=label)
thinkplot.Config(title='Normal CDF', xlabel='x', ylabel='CDF',
loc='upper left')
preg = nsfg.ReadFemPreg()
weights = preg.totalwgt_lb.dropna()
# estimate parameters: trimming outliers yields a better fit
mu, var = thinkstats2.TrimmedMeanVar(weights, p=0.01)
print('Mean, Var', mu, var)
# plot the model
sigma = np.sqrt(var)
print('Sigma', sigma)
xs, ps = thinkstats2.RenderNormalCdf(mu, sigma, low=0, high=12.5)
thinkplot.Plot(xs, ps, label='model', color='0.6')
# plot the data
cdf = thinkstats2.Cdf(weights, label='data')
thinkplot.PrePlot(1)
thinkplot.Cdf(cdf)
thinkplot.Config(title='Birth weights',
xlabel='Birth weight (pounds)',
ylabel='CDF')
n = 1000
thinkplot.PrePlot(3)
mus = [0, 1, 5]
sigmas = [1, 1, 2]
for mu, sigma in zip(mus, sigmas):
sample = np.random.normal(mu, sigma, n)
xs, ys = thinkstats2.NormalProbability(sample)
label = '$\mu=%d$, $\sigma=%d$' % (mu, sigma)
thinkplot.Plot(xs, ys, label=label)
thinkplot.Config(title='Normal probability plot',
xlabel='standard normal sample',
ylabel='sample values')
mean, var = thinkstats2.TrimmedMeanVar(weights, p=0.01)
std = np.sqrt(var)
xs = [-4, 4]
fxs, fys = thinkstats2.FitLine(xs, mean, std)
thinkplot.Plot(fxs, fys, linewidth=4, color='0.8')
xs, ys = thinkstats2.NormalProbability(weights)
thinkplot.Plot(xs, ys, label='all live')
thinkplot.Config(title='Normal probability plot',
xlabel='Standard deviations from mean',
ylabel='Birth weight (lbs)')
full_term = preg[preg.prglngth >= 37]
term_weights = full_term.totalwgt_lb.dropna()
mean, var = thinkstats2.TrimmedMeanVar(weights, p=0.01)
std = np.sqrt(var)
xs = [-4, 4]
fxs, fys = thinkstats2.FitLine(xs, mean, std)
thinkplot.Plot(fxs, fys, linewidth=4, color='0.8')
thinkplot.PrePlot(2)
xs, ys = thinkstats2.NormalProbability(weights)
thinkplot.Plot(xs, ys, label='all live')
xs, ys = thinkstats2.NormalProbability(term_weights)
thinkplot.Plot(xs, ys, label='full term')
thinkplot.Config(title='Normal probability plot',
xlabel='Standard deviations from mean',
ylabel='Birth weight (lbs)')
import brfss
df = brfss.ReadBrfss()
weights = df.wtkg2.dropna()
def MakeNormalModel(weights):
    """Plots a CDF with a Normal model.

    weights: sequence
    """
cdf = thinkstats2.Cdf(weights, label='weights')
mean, var = thinkstats2.TrimmedMeanVar(weights)
std = np.sqrt(var)
print('n, mean, std', len(weights), mean, std)
xmin = mean - 4 * std
xmax = mean + 4 * std
xs, ps = thinkstats2.RenderNormalCdf(mean, std, xmin, xmax)
thinkplot.Plot(xs, ps, label='model', linewidth=4, color='0.8')
thinkplot.Cdf(cdf)
MakeNormalModel(weights)
thinkplot.Config(title='Adult weight, linear scale', xlabel='Weight (kg)',
ylabel='CDF', loc='upper right')
log_weights = np.log10(weights)
MakeNormalModel(log_weights)
thinkplot.Config(title='Adult weight, log scale', xlabel='Weight (log10 kg)',
ylabel='CDF', loc='upper right')
def MakeNormalPlot(weights):
    """Generates a normal probability plot of birth weights.

    weights: sequence
    """
mean, var = thinkstats2.TrimmedMeanVar(weights, p=0.01)
std = np.sqrt(var)
xs = [-5, 5]
xs, ys = thinkstats2.FitLine(xs, mean, std)
thinkplot.Plot(xs, ys, color='0.8', label='model')
xs, ys = thinkstats2.NormalProbability(weights)
thinkplot.Plot(xs, ys, label='weights')
MakeNormalPlot(weights)
thinkplot.Config(title='Adult weight, normal plot', xlabel='Weight (kg)',
ylabel='CDF', loc='upper left')
MakeNormalPlot(log_weights)
thinkplot.Config(title='Adult weight, lognormal plot', xlabel='Weight (log10 kg)',
ylabel='CDF', loc='upper left')
xmin = 0.5
thinkplot.PrePlot(3)
for alpha in [2.0, 1.0, 0.5]:
xs, ps = thinkstats2.RenderParetoCdf(xmin, alpha, 0, 10.0, n=100)
thinkplot.Plot(xs, ps, label=r'$\alpha=%g$' % alpha)
thinkplot.Config(title='Pareto CDF', xlabel='x',
ylabel='CDF', loc='lower right')
import populations
pops = populations.ReadData()
print('Number of cities/towns', len(pops))
log_pops = np.log10(pops)
cdf = thinkstats2.Cdf(pops, label='data')
cdf_log = thinkstats2.Cdf(log_pops, label='data')
# pareto plot
xs, ys = thinkstats2.RenderParetoCdf(xmin=5000, alpha=1.4, low=0, high=1e7)
thinkplot.Plot(np.log10(xs), 1-ys, label='model', color='0.8')
thinkplot.Cdf(cdf_log, complement=True)
thinkplot.Config(xlabel='log10 population',
ylabel='CCDF',
yscale='log', loc='lower left')
thinkplot.PrePlot(cols=2)
mu, sigma = log_pops.mean(), log_pops.std()
xs, ps = thinkstats2.RenderNormalCdf(mu, sigma, low=0, high=8)
thinkplot.Plot(xs, ps, label='model', color='0.8')
thinkplot.Cdf(cdf_log)
thinkplot.Config(xlabel='log10 population',
ylabel='CDF', loc='lower right')
thinkstats2.NormalProbabilityPlot(log_pops, label='data')
thinkplot.Config(xlabel='Random variate',
ylabel='log10 population',
xlim=[-5, 5])
import random
def expovariate(lam):
p = random.random()
x = -np.log(1-p) / lam
return x
t = [expovariate(lam=2) for _ in range(1000)]
cdf = thinkstats2.Cdf(t)
thinkplot.Cdf(cdf, complement=True)
thinkplot.Config(xlabel='Exponential variate', ylabel='CCDF', yscale='log')
import scipy.stats
mu = 178
sigma = 7.7
dist = scipy.stats.norm(loc=mu, scale=sigma)
type(dist)
dist.mean(), dist.std()
dist.cdf(mu-sigma)
# Solution goes here
short = dist.cdf(178)
tall = dist.cdf(185)
print(tall - short)
alpha = 1.7
xmin = 1 # meter
dist = scipy.stats.pareto(b=alpha, scale=xmin)
dist.median()
# Solution goes here
dist.mean()
# Solution goes here
dist.cdf(dist.mean())
# Solution goes here
# 1 km or 1 m?
# for 1 km
(1 - dist.cdf(1000)) * 7e9
# max(dist)
dist.sf(6e5) * 7e9
# Solution goes here
# Solution goes here
# the height should be 600km
sample = [random.weibullvariate(2, 1) for _ in range(1000)]
cdf = thinkstats2.Cdf(sample)
thinkplot.Cdf(cdf, transform='weibull')
thinkplot.Config(xlabel='Weibull variate', ylabel='CCDF')
import analytic
df = analytic.ReadBabyBoom()
diffs = df.minutes.diff()
cdf = thinkstats2.Cdf(diffs, label='actual')
n = len(diffs)
lam = 44.0 / 24 / 60
sample = [random.expovariate(lam) for _ in range(n)]
1/lam, np.mean(sample)
# Solution goes here
# make 44 values from exp dist
# give them same mean as data aka 33 min
# plot distribution of random values
rand_cdf = thinkstats2.Cdf(sample, label='random values')
thinkplot.Cdf(rand_cdf)
thinkplot.Show()
thinkplot.Cdfs([rand_cdf, cdf], complement=True)
thinkplot.Config(xlabel='time between births', yscale='log')
# Solution goes here
import hinc
df = hinc.ReadData()
df.head()
xs, ps = df.income.values, df.ps.values
cdf = thinkstats2.Cdf(xs, ps, label='data')
cdf_log = thinkstats2.Cdf(np.log10(xs), ps, label='data')
# linear plot
thinkplot.Cdf(cdf)
thinkplot.Config(xlabel='household income',
ylabel='CDF')
xs, ys = thinkstats2.RenderParetoCdf(xmin=55000, alpha=2.5,
low=0, high=250000)
thinkplot.Plot(xs, 1-ys, label='model', color='0.8')
thinkplot.Cdf(cdf, complement=True)
thinkplot.Config(xlabel='log10 household income',
ylabel='CCDF',
xscale='log',
yscale='log',
loc='lower left')
median = cdf_log.Percentile(50)
iqr = cdf_log.Percentile(75) - cdf_log.Percentile(25)
std = iqr / 1.349
# choose std to match the upper tail
std = 0.35
print(median, std)
xs, ps = thinkstats2.RenderNormalCdf(median, std, low=3.5, high=5.5)
thinkplot.Plot(xs, ps, label='model', color='0.8')
thinkplot.Cdf(cdf_log)
thinkplot.Config(xlabel='log10 household income',
ylabel='CDF')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exponential distribution
Step2: Here's the distribution of interarrival times from a dataset of birth times.
Step3: Here's what the CCDF looks like on a log-y scale. A straight line is consistent with an exponential distribution.
Step4: Normal distribution
Step5: I'll use a normal model to fit the distribution of birth weights from the NSFG.
Step6: Here's the observed CDF and the model. The model fits the data well except in the left tail.
Step7: A normal probability plot is a visual test for normality. The following example shows that if the data are actually from a normal distribution, the plot is approximately straight.
Step8: Here's the normal probability plot for birth weights, showing that the lightest babies are lighter than we expect from the normal mode, and the heaviest babies are heavier.
Step9: If we suspect that the deviation in the left tail is due to preterm babies, we can check by selecting only full term births.
Step10: Now the deviation in the left tail is almost gone, but the heaviest babies are still heavy.
Step11: Lognormal model
Step13: The following function estimates the parameters of a normal distribution and plots the data and a normal model.
Step14: Here's the distribution of adult weights and a normal model, which is not a very good fit.
Step15: Here's the distribution of adult weight and a lognormal model, plotted on a log-x scale. The model is a better fit for the data, although the heaviest people are heavier than the model expects.
Step17: The following function generates a normal probability plot.
Step18: When we generate a normal probability plot with adult weights, we can see clearly that the data deviate from the model systematically.
Step19: If we make a normal probability plot with log weights, the model fits the data well except in the tails, where the heaviest people exceed expectations.
Step20: Pareto distribution
Step21: The distribution of populations for cities and towns is sometimes said to be Pareto-like.
Step22: Here's the distribution of population for cities and towns in the U.S., along with a Pareto model. The model fits the data well in the tail.
Step23: The lognormal model might be a better fit for this data (as is often the case for things that are supposed to be Pareto).
Step24: Here's a normal probability plot for the log-populations. The model fits the data well except in the right tail, where the biggest cities are bigger than expected.
Step25: Random variates
Step26: We can test it by generating a sample.
Step27: And plotting the CCDF on a log-y scale.
Step28: A straight line is consistent with an exponential distribution.
Step29: For example <tt>scipy.stats.norm</tt> represents a normal distribution.
Step30: A "frozen random variable" can compute its mean and standard deviation.
Step31: It can also evaluate its CDF. How many people are more than one standard deviation below the mean? About 16%
Step32: How many people are between 5'10" and 6'1"?
Step33: Exercise
Step34: What is the mean height in Pareto world?
Step35: What fraction of people are shorter than the mean?
Step36: Out of 7 billion people, how many do we expect to be taller than 1 km? You could use <tt>dist.cdf</tt> or <tt>dist.sf</tt>.
Step37: How tall do we expect the tallest person to be?
Step38: Exercise
Step39: Exercise
Step40: Worked Example
Step41: Here's what the CDF looks like on a linear scale.
Step42: To check whether a Pareto model describes the data well, I plot the CCDF on a log-log scale.
Step43: For the lognormal model I estimate mu and sigma using percentile-based statistics (median and IQR).
Step44: Here's what the distribution, and fitted model, look like on a log-x scale.
|
10,624
|
<ASSISTANT_TASK:>
Python Code:
import numpy
from matplotlib import pyplot
%matplotlib inline
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
# The Computer Language Benchmarks Game
# http://benchmarksgame.alioth.debian.org/
#
# originally by Kevin Carson
# modified by Tupteq, Fredrik Johansson, and Daniel Nanz
# modified by Maciej Fijalkowski
# 2to3
import sys
def combinations(l):
result = []
for x in range(len(l) - 1):
ls = l[x+1:]
for y in ls:
result.append((l[x],y))
return result
PI = 3.14159265358979323
SOLAR_MASS = 4 * PI * PI
DAYS_PER_YEAR = 365.24
BODIES = {
'sun': ([0.0, 0.0, 0.0], [0.0, 0.0, 0.0], SOLAR_MASS),
'jupiter': ([4.84143144246472090e+00,
-1.16032004402742839e+00,
-1.03622044471123109e-01],
[1.66007664274403694e-03 * DAYS_PER_YEAR,
7.69901118419740425e-03 * DAYS_PER_YEAR,
-6.90460016972063023e-05 * DAYS_PER_YEAR],
9.54791938424326609e-04 * SOLAR_MASS),
'saturn': ([8.34336671824457987e+00,
4.12479856412430479e+00,
-4.03523417114321381e-01],
[-2.76742510726862411e-03 * DAYS_PER_YEAR,
4.99852801234917238e-03 * DAYS_PER_YEAR,
2.30417297573763929e-05 * DAYS_PER_YEAR],
2.85885980666130812e-04 * SOLAR_MASS),
'uranus': ([1.28943695621391310e+01,
-1.51111514016986312e+01,
-2.23307578892655734e-01],
[2.96460137564761618e-03 * DAYS_PER_YEAR,
2.37847173959480950e-03 * DAYS_PER_YEAR,
-2.96589568540237556e-05 * DAYS_PER_YEAR],
4.36624404335156298e-05 * SOLAR_MASS),
'neptune': ([1.53796971148509165e+01,
-2.59193146099879641e+01,
1.79258772950371181e-01],
[2.68067772490389322e-03 * DAYS_PER_YEAR,
1.62824170038242295e-03 * DAYS_PER_YEAR,
-9.51592254519715870e-05 * DAYS_PER_YEAR],
5.15138902046611451e-05 * SOLAR_MASS) }
SYSTEM = list(BODIES.values())
PAIRS = combinations(SYSTEM)
def advance(dt, n, bodies=SYSTEM, pairs=PAIRS):
for i in range(n):
for (([x1, y1, z1], v1, m1),
([x2, y2, z2], v2, m2)) in pairs:
dx = x1 - x2
dy = y1 - y2
dz = z1 - z2
mag = dt * ((dx * dx + dy * dy + dz * dz) ** (-1.5))
b1m = m1 * mag
b2m = m2 * mag
v1[0] -= dx * b2m
v1[1] -= dy * b2m
v1[2] -= dz * b2m
v2[0] += dx * b1m
v2[1] += dy * b1m
v2[2] += dz * b1m
for (r, [vx, vy, vz], m) in bodies:
r[0] += dt * vx
r[1] += dt * vy
r[2] += dt * vz
import copy
bodies = copy.deepcopy(SYSTEM)
advance(dt=0.01, n=1, bodies=bodies, pairs=combinations(bodies))
print(bodies)
print(SYSTEM)
def differences(bodies1, bodies2):
    """Compare two configurations."""
assert len(bodies1) == len(bodies2), "Configurations must have same number of bodies! {}, {}".format(len(bodies1), len(bodies2))
N = len(bodies1)
d_positions = numpy.zeros((N, 3))
d_velocities = numpy.zeros((N, 3))
norm_difference = 0.0
for n in range(N):
d_positions[n, :] = numpy.array(bodies1[n][0]) - numpy.array(bodies2[n][0])
d_velocities[n, :] = numpy.array(bodies1[n][1]) - numpy.array(bodies2[n][1])
norm_difference += numpy.sum(numpy.abs(d_positions[n, :])) + numpy.sum(numpy.abs(d_velocities[n, :]))
return norm_difference, d_positions, d_velocities
norm1, d_x, d_v = differences(bodies, SYSTEM)
d_v
def flip_time(bodies):
    """Flip the time by flipping the velocity."""
for i in range(len(bodies)):
for j in range(3):
bodies[i][1][j] *= -1.0
bodies_flip_time = copy.deepcopy(SYSTEM)
flip_time(bodies_flip_time)
advance(dt=-0.01, n=1, bodies=bodies_flip_time, pairs=combinations(bodies_flip_time))
flip_time(bodies_flip_time)
norm_flip_time, d_x_flip_time, d_v_flip_time = differences(bodies, bodies_flip_time)
print("Norm of differences is {}.".format(norm_flip_time))
def flip_coordinate(bodies, coord):
    """Flip a single coordinate direction."""
for i in range(len(bodies)):
bodies[i][0][coord] *= -1.0
bodies[i][1][coord] *= -1.0
for coord in range(3):
bodies_flip_coord = copy.deepcopy(SYSTEM)
flip_coordinate(bodies_flip_coord, coord)
advance(dt=0.01, n=1, bodies=bodies_flip_coord, pairs=combinations(bodies_flip_coord))
flip_coordinate(bodies_flip_coord, coord)
norm_flip_coord, d_x_flip_coord, d_v_flip_coord = differences(bodies, bodies_flip_coord)
print("Norm of differences is {} (flipped coordinate {}).".format(norm_flip_coord, coord))
def translate_coordinate(bodies, shift):
    """Translate the coordinates of all bodies."""
for i in range(len(bodies)):
for n in range(3):
bodies[i][0][n] += shift[n]
shift = 10.0*(-0.5+numpy.random.rand(3)) # Random coordinate shift in [-5, 5]
bodies_shift = copy.deepcopy(SYSTEM)
translate_coordinate(bodies_shift, shift)
advance(dt=0.01, n=1, bodies=bodies_shift, pairs=combinations(bodies_shift))
translate_coordinate(bodies_shift, -shift)
norm_shift, d_x_shift, d_v_shift = differences(bodies, bodies_shift)
print("Norm of differences is {}.".format(norm_shift))
numpy.sum(numpy.abs(numpy.spacing(shift)))*5
print(d_x_shift)
print(d_v_shift)
def rotate_bodies(bodies, angles, invert=False):
    """Rotate the coordinates of all bodies."""
Rx = numpy.array([[1.0, 0.0, 0.0],
[0.0, numpy.cos(angles[0]), -numpy.sin(angles[0])],
[0.0, numpy.sin(angles[0]), numpy.cos(angles[0])]])
Ry = numpy.array([[numpy.cos(angles[1]), 0.0, numpy.sin(angles[1])],
[0.0, 1.0, 0.0],
[-numpy.sin(angles[1]), 0.0, numpy.cos(angles[1])]])
Rz = numpy.array([[numpy.cos(angles[2]), -numpy.sin(angles[2]), 0.0],
[numpy.sin(angles[2]), numpy.cos(angles[2]), 0.0],
[0.0, 0.0, 1.0]])
if invert:
R = numpy.dot(numpy.dot(Rx, Ry), Rz)
else:
R = numpy.dot(Rz, numpy.dot(Ry, Rx))
for i in range(len(bodies)):
x = numpy.array(bodies[i][0])
v = numpy.array(bodies[i][1])
xp = numpy.dot(R, x)
vp = numpy.dot(R, v)
for n in range(3):
bodies[i][0][n] = xp[n]
bodies[i][1][n] = vp[n]
angles = numpy.pi/4.0*numpy.random.rand(3) # Random Euler angles in [0, pi/4]
bodies_rotate = copy.deepcopy(SYSTEM)
rotate_bodies(bodies_rotate, angles)
advance(dt=0.01, n=1, bodies=bodies_rotate, pairs=combinations(bodies_rotate))
rotate_bodies(bodies_rotate, -angles, invert=True)
norm_rotate, d_x_rotate, d_v_rotate = differences(bodies, bodies_rotate)
print("Norm of differences is {}.".format(norm_rotate))
3*9*5*numpy.sum(numpy.abs(numpy.spacing(angles)))
def scale_bodies(bodies, scale):
    """Scale coordinates and masses."""
bodies_scale = []
for (x, v, m) in bodies:
new_x = copy.deepcopy(x)
new_v = copy.deepcopy(v)
new_m = m * scale
for i in range(3):
new_x[i] *= scale
bodies_scale.append((new_x, new_v, new_m))
return bodies_scale
scale = 2.0
bodies_scale = scale_bodies(SYSTEM, scale)
advance(dt=0.02, n=1, bodies=bodies_scale, pairs=combinations(bodies_scale))
bodies_rescale = scale_bodies(bodies_scale, 1.0/scale)
norm_scale, d_x_scale, d_v_scale = differences(bodies, bodies_rescale)
print("Norm of differences is {}.".format(norm_scale))
T = 10.0 # The base resolution will take 1000 steps
dt_values = numpy.array([0.01*2**(-i) for i in range(4)])
bodies_list = []
for i, dt in enumerate(dt_values):
bodies_loop = copy.deepcopy(SYSTEM)
advance(dt=dt, n=int(T/dt), bodies=bodies_loop, pairs=combinations(bodies_loop))
bodies_list.append(bodies_loop)
convergence_rate = numpy.zeros((2,))
for i in range(len(convergence_rate)):
numerator, diff_x, diff_v= differences(bodies_list[i], bodies_list[i+1])
denominator, diff_x, diff_v = differences(bodies_list[i+1], bodies_list[i+2])
convergence_rate[i] = numpy.log(numerator/denominator)/numpy.log(2.0)
print("Convergence rate (base dt={}) is {} (error {}).".format(
dt_values[i], convergence_rate[i], numpy.abs(convergence_rate[i]-1.0)))
print("Is the convergence rate close enough to 1 for the answers to match? {}".format(
numpy.log(1.0+0.5)/numpy.log(2.0) < convergence_rate[1] < numpy.log(2.0**2-1.0)/numpy.log(2.0)))
print("Does the convergence of the convergence rate show it's close enough to 1? {}".format(
numpy.abs(convergence_rate[1]-1.0) < 2.0/3.0*numpy.abs(convergence_rate[0]-1.0)))
body_1 = ([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], 0.1*SOLAR_MASS)
body_2 = ([-1.0, 0.0, 0.0], [0.0, 0.0, 0.0], 0.1*SOLAR_MASS)
body_1_2_separation = ([3.0, 0.0, 0.0], [0.0, 0.0, 0.0], 0.1*SOLAR_MASS)
body_1_2_mass = ([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], 0.2*SOLAR_MASS)
two_bodies = [copy.deepcopy(body_1), copy.deepcopy(body_2)]
two_bodies_2_separation = [copy.deepcopy(body_1_2_separation), copy.deepcopy(body_2)]
two_bodies_2_mass = [copy.deepcopy(body_1_2_mass), copy.deepcopy(body_2)]
advance(dt=0.01, n=1, bodies=two_bodies, pairs=combinations(two_bodies))
advance(dt=0.01, n=1, bodies=two_bodies_2_separation, pairs=combinations(two_bodies_2_separation))
advance(dt=0.01, n=1, bodies=two_bodies_2_mass, pairs=combinations(two_bodies_2_mass))
print("The acceleration (hence velocity) should decrease as separation^2")
print("So these two numbers should match: {} {} (difference: {})".format(
two_bodies[1][1][0],
two_bodies_2_separation[1][1][0]*4.0,
numpy.abs(two_bodies[1][1][0]-two_bodies_2_separation[1][1][0]*4.0)))
print("The acceleration (hence velocity) should increase as mass^1")
print("So these two numbers should match: {} {} (difference: {})".format(
two_bodies[1][1][0],
two_bodies_2_mass[1][1][0]/2.0,
numpy.abs(two_bodies[1][1][0]-two_bodies_2_mass[1][1][0]/2.0)))
bodies_list_separation = []
separation_scale = numpy.random.rand(5)
separations = separation_scale + 1.0
separation_v = numpy.zeros_like(separations)
for i, scale in enumerate(separation_scale):
body_1_separation = ([scale, 0.0, 0.0], [0.0, 0.0, 0.0], 0.1*SOLAR_MASS)
two_bodies_separation = [copy.deepcopy(body_1_separation), copy.deepcopy(body_2)]
advance(dt=0.01, n=1, bodies=two_bodies_separation, pairs=combinations(two_bodies_separation))
bodies_list_separation.append(two_bodies_separation)
separation_v[i] = two_bodies_separation[1][1][0]
bodies_list_mass = []
mass_scale = numpy.random.rand(5)
masses = 0.1 * mass_scale
mass_v = numpy.zeros_like(masses)
for i, scale in enumerate(mass_scale):
body_1_mass = ([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], 0.1*scale*SOLAR_MASS)
two_bodies_mass = [copy.deepcopy(body_1_mass), copy.deepcopy(body_2)]
advance(dt=0.01, n=1, bodies=two_bodies_mass, pairs=combinations(two_bodies_mass))
bodies_list_mass.append(two_bodies_mass)
mass_v[i] = two_bodies_mass[1][1][0]
p_separation = numpy.polyfit(1./separations, separation_v, len(separations)-1)
p_mass = numpy.polyfit(masses, mass_v, len(masses)-1)
print("We expect the third-to-last (separation^{-2}) coefficient to dominate:")
print("Coefficients from separation: {}".format(p_separation))
print("Coefficient of separation^{{-2}}: {:.5g}".format(p_separation[-3]))
print("Largest other coefficient: {:.5g}".format(numpy.max(numpy.abs(numpy.delete(p_separation,-3)))))
print("We expect the second-to-last (mass^{1}) coefficient to dominate:")
print("Coefficients from mass: {}".format(p_mass))
print("Coefficient of mass^{{1}}: {:.5g}".format(p_mass[-2]))
print("Largest other coefficient: {:.5g}".format(numpy.max(numpy.abs(numpy.delete(p_mass,-2)))))
dt = 0.01
separation = 2.0
mass = 0.1
G = two_bodies[1][1][0]/mass*separation**2/dt
print("Algorithm G, in units of AU^3 SolarMass^{{-1}} years^{{-2}}: {}".format(G))
from scipy import constants
print("Accepted value of G, in units of {}: {}".format(
constants.unit('Newtonian constant of gravitation'),
constants.G))
solar_mass_in_kg = 1.9891e30
year_in_seconds = 3.15569e7
AU_in_metres = 149597870700.0
print("Algorithm G, in units of m^3 kg^-1 s^-2: {}".format(
G*(AU_in_metres**3/(solar_mass_in_kg)/year_in_seconds**2)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now, there's a number of issues with this code, as noted in comments on the original post. Instead of addressing those here, I'll point to Konrad Hinsen's upcoming article on testing (to appear in CiSE in July), or Scopatz and Huff's book, both of which discuss n-body codes, testing, and how to code them effectively. Also, we're only interested in testing whether the implementation is "correct", not the limitations of that implementation for modelling the physics
Step3: Comparing these two is going to be a real pain. So, let's create a utility function that will give the difference between two configurations of bodies. It will return the differences in the locations and the velocities as arrays, but will also include the "total" error as a single number - the sum of the absolute values of all errors. If this norm is zero then the two configurations are identical.
Step5: Now we have to think about how to test the particle-particle interaction. The parameter we can change in order to do the test is the configuration bodies (and the associated list of pairs, which we'll always take to be combinations(bodies)).
Step7: As expected, the positions are identical. Note that the velocities are significantly different, because they still have the opposite sign, but if we flip the velocities back as well, then they will be identical.
Step9: Continuous space translation
Step10: In this case the repeated operations introduce some floating point round-off error. This should be related to the round off introduced by the shift, and bounded by the number of bodies ($5$) multiplied by the total round-off introduced
Step11: The precise differences are
Step13: We see that some of the bodies have errors around the expected floating point limit, and these dominate the resulting error (which is well within the expected bound).
Step14: The size of the differences is considerably larger, thanks to the number of operations performed. With three rotation matrices combined, each containing nine entries, applied to five bodies, we would expect a total error of the order of
Step16: Continuous conformal (like) symmetry
Step17: This scaling does not prove that the force goes as $M^2/L^2$; it only shows that the force contains $M$ and $L$ to the same power. To show that it's got the appropriate form we should either compare to an oracle or test simpler cases. The Java n-body code is (likely) the oracle it was tested against; testing against a simpler case will be done later.
Step18: This shows that the time evolution is close enough to first order as expected. We haven't explicitly shown that it's the semi-implicit Euler method, as explicitly calculating the local truncation error would be a lot of work, and in this case we're really not interested in the specific time integration scheme - just that it converges.
Step19: Of course, we could have just got (un)lucky in comparing single data points when we vary either $b$ or $c$. Instead we can choose a set of points at random, and fit to a more general model; for example, fit to
Step20: We see that the expected coefficient dominates, but we're not getting a fit that's absolutely perfect. This is likely due to the limitations of the fitting algorithm.
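The convergence-rate estimate used above (log₂ of the ratio of successive differences as dt is halved) can be illustrated on a scalar test problem instead of the full n-body system — a hedged sketch, assuming forward Euler on x' = -x, which is unambiguously first order:

```python
import math

def euler_solve(dt, T=1.0):
    # forward Euler for x' = -x with x(0) = 1; the exact solution at T is exp(-T)
    x = 1.0
    for _ in range(round(T / dt)):
        x += dt * (-x)
    return x

dts = [0.01, 0.005, 0.0025]
errors = [abs(euler_solve(dt) - math.exp(-1.0)) for dt in dts]

# observed order of convergence: halving dt should roughly halve the error
orders = [math.log(errors[i] / errors[i + 1], 2) for i in range(len(errors) - 1)]
print(orders)  # both entries should be close to 1 for a first-order method
```

The n-body version of this check is identical in spirit, except that the "error" is a difference between two numerical runs rather than against an exact solution.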
|
10,625
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.linear_model import LinearRegression
# here we just downloaded the data from the library
from sklearn.datasets import load_boston
# Load the data from the the boston house-prices dataset
boston_data = load_boston()
x = boston_data['data']
y = boston_data['target']
# Make and fit the linear regression model
# Fit the model and Assign it to the model variable
model = LinearRegression()
model.fit(x,y)
# Make a prediction using the model
sample_house = [[2.29690000e-01, 0.00000000e+00, 1.05900000e+01, 0.00000000e+00, 4.89000000e-01,
6.32600000e+00, 5.25000000e+01, 4.35490000e+00, 4.00000000e+00, 2.77000000e+02,
1.86000000e+01, 3.94870000e+02, 1.09700000e+01]]
# Predict housing price for the sample_house
prediction = model.predict(sample_house)
print(prediction)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the data
Step2: Linear Regression
Step3: Prediction
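Before trusting the prediction for the sample house, the fitted model can be sanity-checked on synthetic data with known coefficients — a minimal sketch (the coefficients, sample size, and seed are arbitrary choices):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# noiseless synthetic data with a known linear relationship: y = 3*x0 - 2*x1 + 5
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))
y = 3.0 * x[:, 0] - 2.0 * x[:, 1] + 5.0

model = LinearRegression()
model.fit(x, y)

print(model.coef_)       # close to [ 3. -2.]
print(model.intercept_)  # close to 5.0
```

With noiseless data, ordinary least squares recovers the true coefficients up to floating-point precision, which makes this a useful smoke test of the fit/predict workflow.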
|
10,626
|
<ASSISTANT_TASK:>
Python Code:
# Refer: https://docs.datacommons.org/api/pandas/
!pip install datacommons_pandas datacommons geopandas plotly descartes --upgrade --quiet
# Import Data Commons
import datacommons as dc
import datacommons_pandas as dpd
# Import other required libraries
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import pandas as pd
import geopandas as gpd
from shapely.geometry import Point
import plotly.express as px
from descartes import PolygonPatch
import ipywidgets as widgets
import pprint
import requests
import json
# setting place scope to US
_USA = 'country/USA'
#@title Select place types
#@markdown *Select `Runtime > Run all` from the menu above after filling up the values.*
#@markdown Select a geo-level for querying statistics. Select 'State' for a quick demo
US_GEO_LEVEL = 'State' #@param ["State", "Country", "County"] {allow-input: true}
# get the dcids for all the places in the selected place type across US
if US_GEO_LEVEL == 'Country':
# if geo-level is a country, then we look at the US
_PLACE_DCID = _USA
else:
# else, we extract the place ids for the selected geo-level
_PLACE_DCID = dc.get_places_in([_USA], US_GEO_LEVEL)[_USA]
# printing the first 5 places that we fetch from Data Commons
if type(_PLACE_DCID) is list:
print(_PLACE_DCID[:5])
else:
print(_PLACE_DCID)
# populate a list of options based on name for non-country GEO-LEVELS
def get_names_for_place_ids(place_id_list):
    """Take a list of place ids and return the corresponding place names from the Data Commons Graph."""
    print("Fetching the place names will take time depending on the summary level and the number of place nodes", end=".....", flush=True)
try:
# get the place names for non-country GEO-LEVELS
place_name_list = [dc.get_property_values([place_id], 'name')[place_id][0] for place_id in place_id_list]
except:
place_name_list = dc.get_property_values([_USA], 'name')[_USA]
print('Done.\n', flush=True)
return place_name_list
_PLACE_NAMES = get_names_for_place_ids(_PLACE_DCID)
print("Select a place from the list to use for the remainder of the notebook.")
selected_place = widgets.Dropdown(options=_PLACE_NAMES, value=_PLACE_NAMES[0])
display(selected_place)
# get the value of the selected place from the drop-down
selected_place_choice = selected_place.value
# get the place's id on the Data Commons Graph by name
if type(_PLACE_DCID) == list:
place_dcid = _PLACE_DCID[_PLACE_NAMES.index(selected_place_choice)]
else:
place_dcid = _PLACE_DCID
place_dcid
superfund_sites = dc.get_places_in([place_dcid], 'SuperfundSite')[place_dcid]
# printing the first 5 superfund sites that we fetch from Data Commons
superfund_sites[:5]
# get names for superfund sites in the selected place
site_names = get_names_for_place_ids(superfund_sites)
# printing the names of first 5 superfund sites that we fetch from Data Commons
site_names[:5]
print("Select a superfund site from the list to use for the remainder of the notebook.")
selected_site = widgets.Dropdown(options=site_names, value=site_names[0])
display(selected_site)
# get the value of the selected place from the drop-down
selected_site_choice = selected_site.value
# get the place's id on the Data Commons Graph by name
site_dcid = superfund_sites[site_names.index(selected_site_choice)]
# print the site_dcid
site_dcid
dc.get_property_labels([site_dcid], out=True)
dc.get_property_values([site_dcid], prop='containedInPlace', out=True)
## get a list of places for containedInPlace
contained_places = dc.get_property_values([site_dcid], prop='containedInPlace', out=True)[site_dcid]
dc.get_property_values(contained_places, prop='name', out=True)
contained_places = dc.get_property_values([site_dcid], prop='containedInPlace', out=True)[site_dcid]
dc.get_property_values(contained_places, prop='typeOf', out=True)
def populate_superfund_site_df(site_id_list):
    """Populate a pandas dataframe based on a list of superfund site ids."""
    superfund_site_df = pd.DataFrame() # initializing an empty dataframe
    print("This method will take some time since the number of API queries depends on the number of sites", end=" ...", flush=True)
for site in site_id_list:
# get all the properties associated for a superfund site
site_props = dc.get_property_labels([site], out=True)[site]
# a dictionary to map a property to its corresponding value
row_dict = {}
# populate the row_dict map
for i in range(len(site_props)):
print(end=".")
# get all the values for each property associated with a superfund site
prop_val = dc.get_property_values([site], prop=site_props[i], out=True)[site]
# handle the case where containedInPlace is a list whereas the other props are string values
if site_props[i] == 'containedInPlace':
row_dict[site_props[i]] = prop_val
elif site_props[i] == 'location':
prop_val = ''.join(prop_val)
# location is an autogenerated internal node, and we will need an additional API call to get the latitude and longitude of the site
row_dict['latitude'] = dc.get_property_values([prop_val], prop="latitude", out=True)[prop_val][0]
row_dict['longitude'] = dc.get_property_values([prop_val], prop="longitude", out=True)[prop_val][0]
else:
row_dict[site_props[i]] = ''.join(prop_val)
# populate the dataframe
superfund_site_df = superfund_site_df.append(row_dict, ignore_index=True)
print("Done.", flush=True)
return superfund_site_df
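As an aside: the repeated `DataFrame.append` calls above work on older pandas but `append` was removed in pandas 2.0. A minimal sketch of the usual replacement, collecting plain row dicts and building the frame once (the site ids here are placeholders, not real API output):

```python
# Hypothetical refactor sketch: accumulate row dicts, construct the
# DataFrame in a single call instead of append-per-row.
import pandas as pd

rows = []
for site in ['epaSuperfundSiteId/A', 'epaSuperfundSiteId/B']:  # placeholder ids
    rows.append({'epaSuperfundSiteId': site, 'latitude': '36.9'})
sketch_df = pd.DataFrame(rows)
```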
site_df = populate_superfund_site_df(superfund_sites)
site_df.head(5)
# filter `site_df` dataframe to required columns
required_cols = ['epaSuperfundSiteId', 'name', 'latitude', 'longitude']
sites_filtered = site_df[required_cols]
sites_filtered.to_csv(f"./superfund_sites_locations_in_{selected_place_choice}.csv", index=False)
# direct download with script
from google.colab import files
files.download(f"./superfund_sites_locations_in_{selected_place_choice}.csv")
fig = px.scatter_geo(sites_filtered,lat='latitude',lon='longitude', hover_name="name")
fig.update_layout(title = f'superfund site locations in {selected_place_choice}', title_x=0.5)
fig.show()
print(f"GeoJSON for {selected_place_choice} which has the place id: {place_dcid}\n\n")
dc.get_property_values([place_dcid], prop='geoJsonCoordinates', out=True)[place_dcid][0]
# copy of the pandas dataframe which we will transform into a GeoDataFrame
gdf = sites_filtered.copy()
# converting the latitude and longitude columns to a Point geometry
gdf["Coordinates"] = list(zip(gdf.longitude.astype(float), gdf.latitude.astype(float)))
gdf["Coordinates"] = gdf["Coordinates"].apply(Point)
gdf = gdf.drop(columns=['latitude', 'longitude'])  # drop the now-redundant columns
# converting to GeoDataFrame
gdf = gpd.GeoDataFrame(gdf, geometry="Coordinates")
gdf
# Get the geojson for the selected place from the Data Commons Graph
place_map = dc.get_property_values([place_dcid], prop='geoJsonCoordinates', out=True)[place_dcid][0]
place_map = json.loads(place_map)
# set the colors and initialize the empty figure object for drawing the map
BLUE = '#6699cc'
fig = plt.figure(figsize=(12, 8))
ax = fig.gca()
# Draw the geojson fetched from the Data Commons Graph
ax.add_patch(PolygonPatch(place_map, fc=BLUE, ec=BLUE, alpha=0.5, zorder=2 ))
ax.axis('scaled')
# Add the locations of the superfund sites from the GeoDataFrame
gdf.plot(ax=ax, color='red', alpha = 0.5)
plt.title (f'superfund site locations in {selected_place_choice}')
plt.show()
## get a list of measurement sites that are in and around Tar Creek
tar_creek = 'epaSuperfundSiteId/OKD980629844'
measurement_sites = dc.get_places_in([tar_creek], 'SuperfundMeasurementSite')[tar_creek]
measurement_sites
# List of statistics that are available for each measurement site
measurement_sites_statvars_list = ['Concentration_Sulfate_BodyOfWater_GroundWater',
'Concentration_Lead_BodyOfWater_GroundWater',
'Concentration_Cadmium_BodyOfWater_GroundWater',
'Concentration_DissolvedContaminant_Cadmium_BodyOfWater_GroundWater',
'Concentration_DissolvedContaminant_Iron_BodyOfWater_GroundWater',
'Concentration_DissolvedContaminant_Lead_BodyOfWater_GroundWater',
'Concentration_DissolvedContaminant_Zinc_BodyOfWater_GroundWater',
'Concentration_Iron_BodyOfWater_GroundWater',
'DissolvedOxygen_BodyOfWater_GroundWater',
'Concentration_Zinc_BodyOfWater_GroundWater',
'PotentialOfHydrogen_BodyOfWater_GroundWater',
'ElectricalConductivity_BodyOfWater_GroundWater',
'Temperature_BodyOfWater_GroundWater',
'WaterHardness_BodyOfWater_GroundWater']
tar_creek_contamination_statvar_df = dpd.build_time_series_dataframe(['epaSuperfundMeasurementSite/OKD980629844/Quapaw_4'], measurement_sites_statvars_list[0])
tar_creek_contamination_statvar_df.head(5)
print("Select a statistical variable")
# list of statistical variables that we shall use for this notebook
stat_var_list = [
'CrsiScore_SuperfundSite',
'NaturalHazardExposureScore_SuperfundSite',
'NaturalHazardRiskScore_SuperfundSite',
'NaturalHazardRiskScore_SuperfundSite_CoastalFloodEvent',
'NaturalHazardRiskScore_SuperfundSite_DroughtEvent',
'NaturalHazardRiskScore_SuperfundSite_EarthquakeEvent',
'NaturalHazardRiskScore_SuperfundSite_ExcessiveHeatEvent',
'NaturalHazardRiskScore_SuperfundSite_ExtremeColdWindChillEvent',
'NaturalHazardRiskScore_SuperfundSite_FloodEvent',
'NaturalHazardRiskScore_SuperfundSite_HailEvent',
'NaturalHazardRiskScore_SuperfundSite_HighWindEvent',
'NaturalHazardRiskScore_SuperfundSite_HurricaneEvent',
'NaturalHazardRiskScore_SuperfundSite_LandslideEvent',
'NaturalHazardRiskScore_SuperfundSite_TornadoEvent',
'NaturalHazardRiskScore_SuperfundSite_WildfireEvent'
]
# Populating the select dropdown
stat_var_select = widgets.Dropdown(options=stat_var_list, value=stat_var_list[0])
display(stat_var_select)
stat_var_list
chosen_statvar = stat_var_select.value
site_statvar_df = dpd.build_time_series_dataframe([site_dcid], chosen_statvar)
site_statvar_df.head(5)
site_statvar_df = dpd.build_time_series_dataframe(superfund_sites, chosen_statvar)
site_statvar_df.head(5)
def add_name_col(df):
# Add a new column called name, where each value is the name for the place dcid in the index.
df['name'] = df.index.map(dc.get_property_values(df.index, 'name'))
# Keep just the first name, instead of a list of all names.
df['name'] = df['name'].str[0]
add_name_col(site_statvar_df)
site_statvar_df.head()
# sort the dataframe by the value column in descending order
site_vulnerability = site_statvar_df.sort_values(by='2021', ascending=False)
# set the threshold on the number of sites to visualize - since there are 1,300-odd superfund sites
num_sites = 10
site_vulnerability = site_vulnerability[:num_sites]
site_vulnerability.plot(kind='bar', x='name', y='2021', figsize=(25,5))
multi_statvar_df = dpd.build_multivariate_dataframe(superfund_sites,["CrsiScore_SuperfundSite", "NaturalHazardRiskScore_SuperfundSite"])
multi_statvar_df
# Scatter plot the information
multi_statvar_df.plot(kind='scatter', x='NaturalHazardRiskScore_SuperfundSite', y='CrsiScore_SuperfundSite', figsize=(12, 8))
plt.title(f"Natural Hazard Risk Score vs CRSI score for sites in {selected_place_choice}")
plt.xlabel("NaturalHazardRiskScore_SuperfundSite")
plt.ylabel("CrsiScore_SuperfundSite")
plt.show()
superfund_sites = dc.get_places_in([place_dcid], 'SuperfundSite')[place_dcid]
dpd.build_time_series_dataframe(superfund_sites, 'CrsiScore_SuperfundSite').head(5)
cities = dpd.get_places_in([place_dcid], 'City')[place_dcid]
dpd.build_time_series_dataframe(cities, 'Count_Person_AmericanIndianAndAlaskaNativeAlone').head(5)
## Step 1: Making dataframe with the superfund site id and the id of the city where the site is present
def get_city_id(elem_list, fips_code_len=7):
for e in elem_list:
if len(e.split('/')[1]) == fips_code_len:
return e
return ''
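A quick check of the FIPS-length heuristic above on hypothetical dcids (in Data Commons, city geoIds carry 7-digit FIPS codes, counties 5, states 2; the ids below are illustrative):

```python
# Same heuristic as get_city_id, restated standalone for a sanity check.
def pick_city(elem_list, fips_code_len=7):
    for e in elem_list:
        if len(e.split('/')[1]) == fips_code_len:
            return e
    return ''

assert pick_city(['geoId/48', 'geoId/48029', 'geoId/4805000']) == 'geoId/4805000'
assert pick_city(['geoId/48029']) == ''   # no 7-digit id -> empty string
```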
site_place_df = site_df[['epaSuperfundSiteId', 'containedInPlace']].copy()
site_place_df['epaSuperfundSiteId'] = site_place_df['epaSuperfundSiteId'].apply(lambda elem: 'epaSuperfundSiteId/' + elem)
site_place_df['containedInPlace'] = site_place_df['containedInPlace'].apply(get_city_id)
site_place_df = site_place_df[site_place_df['containedInPlace'] != '']
site_place_df
# Step 2: Merging the StatVarObservations for the `CrsiScore_SuperfundSite` statistical variable to `site_data_df`
site_data_df = site_place_df.copy()
# Get the list of superfundsites that are contained in a city
contained_site_list = site_data_df['epaSuperfundSiteId'].values.tolist()
# with the list of superfund sites' place_ids, query Data Commons for stats on the CrsiScore_SuperfundSite statistical variable
superfund_data_df = dpd.build_time_series_dataframe(contained_site_list, 'CrsiScore_SuperfundSite')
# rename the column to make it readable
superfund_data_df.columns = ['CrsiScore_SuperfundSite']
# merge
site_data_df = pd.merge(site_data_df, superfund_data_df, left_on='epaSuperfundSiteId', right_on='place', how='left')
# Step 3: Merging the StatVarObservations for the `Count_Person_AmericanIndianAndAlaskaNativeAlone` statistical variable to `site_data_df`
# Get the list of cities that have superfund sites
contained_cities_list = site_data_df['containedInPlace'].values.tolist()
# with the list of cities' place_ids, query Data Commons for stats on the Count_Person_AmericanIndianAndAlaskaNativeAlone statistical variable
demographic_data_df = dpd.build_time_series_dataframe(contained_cities_list, 'Count_Person_AmericanIndianAndAlaskaNativeAlone')
# merge
site_data_df = pd.merge(site_data_df, demographic_data_df, left_on='containedInPlace', right_on='place', how='left')
site_data_df
# api call to get the total population and population of the American Indian and Alaska Natives
multivariate_dataframe = dpd.build_multivariate_dataframe(places=contained_cities_list, stat_vars=['Count_Person', 'Count_Person_AmericanIndianAndAlaskaNativeAlone'])
multivariate_dataframe = multivariate_dataframe.reset_index()
# computing per capita population
multivariate_dataframe['perCapitaPopulation'] = multivariate_dataframe['Count_Person_AmericanIndianAndAlaskaNativeAlone'] / multivariate_dataframe['Count_Person']
# merge the computed per_capita values to the site_data_df
site_data_df = pd.merge(site_data_df, multivariate_dataframe[['place', 'perCapitaPopulation']], left_on='containedInPlace', right_on='place', how='left')
site_data_df
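The per-capita step above is simple column arithmetic; a toy sketch with made-up population numbers (not API output) shows the shape of the computation:

```python
# Per-capita arithmetic on hypothetical values.
import pandas as pd

toy = pd.DataFrame({'Count_Person': [1000, 400],
                    'Count_Person_AmericanIndianAndAlaskaNativeAlone': [50, 100]})
toy['perCapitaPopulation'] = (toy['Count_Person_AmericanIndianAndAlaskaNativeAlone']
                              / toy['Count_Person'])
```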
def plot_data(title, pd_table, x, y):
    """Generate a scatter plot comparing CrsiScore_SuperfundSite and Count_Person_AmericanIndianAndAlaskaNativeAlone."""
plt.figure(figsize=(12, 8))
plt.title(title)
plt.xlabel('CrsiScore_SuperfundSite')
plt.ylabel('Count_Person_AmericanIndianAndAlaskaNativeAlone')
# Scatter plot the information
ax = plt.gca()
ax.scatter(pd_table[x], pd_table[y], alpha=0.7)
# Generate the plot for state data
title = 'CRSI score vs Count of American Indian and Alaska Native population (per Capita)'
plot_data(title, site_data_df, x='CrsiScore_SuperfundSite', y='perCapitaPopulation')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Querying Statistics from Data Commons
Step2: Now, that a GEO_LEVEL is selected for viewing different statistics, we shall fetch all the places within the GEO_LEVEL that are present in the Data Commons Graph.
Step4: Select a place by name
Step5: Get all superfund sites (by id) in the selected place
Step6: Get all superfund sites (by name) in the selected place
Step7: Demo 2 - Pick a property for all superfund sites
Step8: Understanding how superfund sites are represented in Data Commons
Step9: Get values of properties of a superfund site
Step10: The values of containedInPlace is a list of the places ids in Data Commons. We can do another query on the list of places to get their names using the same API, as shown in this example.
Step11: Similarly, we can see the type of place using the typeOf property for the above example
Step13: Loading values into a pandas dataframe
Step14: Downloading csv files from this colab
Step15: TIP
Step16: The visualization of sites on the world map has a very large spatial area in comparison to the areas of interest, which can either be the area the US or a particular state, county or city in the US. The map data for these places are available in the Data Commons Graph.
Step17: What we have obtained is a string representation of the map in GeoJSON format, which we can load for making the base map to visualize the locations of the superfund sites.
Step18: With the GeoDataFrame now ready, we shall visualize the location of the sites using code snippet below
Step19: Demo 3 - Lookup timeseries for a specific superfund site
Step20: List of statistical variables that are available for a measurement site can be obtained on the Graph Browser page for measurement sites (example).
Step21: Now, we will be to query the Data Commons graph for all statistics available for a particular sampling well and a particular statistical variable.
Step22: Accessing and visualizing statistics available for all superfund sites in Data Commons
Step23: Getting statistics to a pandas DataFrame
Step24: In both examples, the dataframe is still not human readable, let us add the name to the superfund sites. To add the names to the dataframe, we will query the superfund site's node and extract the name property using the get_property_values API method.
Step25: Visualizing StatVarObservations for superfund site statistics
Step26: If two statistical variables need to be compared together for correlations for a particular place, please consider using the build_multivariate_dataframe API method.
Step27: For the comparison, we shall make a scatter plot across the two statistical variables namely
Step28: Demo 4 - Comparing superfund statistics with other statistics in Data Commons
Step29: From the two dataframes, it is clear that we have statistics from different years and that the number of places across the two dataframes is not consistent.
Step30: Example to create custom data aggregations and normalizations
Step32: Visualizing the comparison on a scatter plot
|
10,627
|
<ASSISTANT_TASK:>
Python Code:
!wget https://zenodo.org/record/49903/files/vocab.npy
!wget https://zenodo.org/record/49903/files/word_vectors.npy
#from lda2vec_model import LDA2Vec
#from chainer import serializers
#import numpy as np
#import pandas as pd
#import pickle
#
#features = pd.read_pickle("../data/features.pd")
#vocab = np.load("../data/vocab")
#npz = np.load(open('topics.story.pyldavis.npz', 'r'))
#dat = {k: v for (k, v) in npz.iteritems()}
#vocab = dat['vocab'].tolist()
#dat = np.load("../data/data.npz")
#n_stories = features.story_id_codes.max() + 1
#n_units = 256
#n_vocab = dat['flattened'].max() + 1
#model = LDA2Vec(n_stories=n_stories, n_story_topics=40,
# n_authors=5664, n_author_topics=20,
# n_units=n_units, n_vocab=n_vocab, counts=np.zeros(n_vocab),
# n_samples=15)
#serializers.load_hdf5("/home/chris/lda2vec-12/examples/hacker_news/lda2vec/lda2vec.hdf5", model)
#np.save("word_vectors", model.sampler.W.data)
#np.save("vocab", vocab)
import numpy as np
word_vectors_raw = np.load("word_vectors.npy")
vocab = np.load("vocab.npy").tolist()
word_vectors = word_vectors_raw / np.linalg.norm(word_vectors_raw, axis=-1)[:, None]
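The row-wise L2 normalization above can be sanity-checked on a toy matrix: every row should come out with unit norm.

```python
# Normalization check (sketch): divide each row by its L2 norm.
import numpy as np

raw = np.array([[3.0, 4.0], [1.0, 0.0]])
unit = raw / np.linalg.norm(raw, axis=-1)[:, None]
```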
def get_vector(token):
index = vocab.index(token)
return word_vectors[index, :].copy()
def most_similar(token, n=20):
word_vector = get_vector(token)
similarities = np.dot(word_vectors, word_vector)
top = np.argsort(similarities)[::-1][:n]
return [vocab[i] for i in top]
# This is Levy & Goldberg's 3Cosmul Metric
# Based on the Gensim implementation: https://github.com/piskvorky/gensim/blob/master/gensim/models/word2vec.py
def cosmul(positives, negatives, topn=20):
positive = [get_vector(p) for p in positives]
negative = [get_vector(n) for n in negatives]
pos_dists = [((1 + np.dot(word_vectors, term)) / 2.) for term in positive]
neg_dists = [((1 + np.dot(word_vectors, term)) / 2.) for term in negative]
dists = np.prod(pos_dists, axis=0) / (np.prod(neg_dists, axis=0) + 1e-6)
idxs = np.argsort(dists)[::-1][:topn]
return [vocab[i] for i in idxs if (vocab[i] not in positives) and (vocab[i] not in negatives)]
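A toy sanity check of the 3CosMul arithmetic on a hypothetical 2-word, 2-d "vocabulary" (identity vectors, not the trained embeddings): the word matching the positive term should come out with the highest score.

```python
# 3CosMul on toy vectors: word 0 -> [1,0], word 1 -> [0,1].
import numpy as np

toy = np.eye(2)
pos_dists = [(1 + toy @ toy[0]) / 2.0]   # positive term: word 0
neg_dists = [(1 + toy @ toy[1]) / 2.0]   # negative term: word 1
dists = np.prod(pos_dists, axis=0) / (np.prod(neg_dists, axis=0) + 1e-6)
```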
def most_similar_posneg(positives, negatives, topn=20):
positive = np.sum([get_vector(p) for p in positives], axis=0)
negative = np.sum([get_vector(n) for n in negatives], axis=0)
vector = positive - negative
dists = np.dot(word_vectors, vector)
idxs = np.argsort(dists)[::-1][:topn]
return [vocab[i] for i in idxs if (vocab[i] not in positives) and (vocab[i] not in negatives)]
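The additive version above is the classic vector-offset analogy; a sketch on hypothetical 2-d embeddings (chosen so the analogy holds by construction) mirrors the positive/negative sum:

```python
# king - man + woman on toy vectors; 'queen' is placed so the offset lands on it.
import numpy as np

vecs = {'king': np.array([1.0, 1.0]), 'man': np.array([1.0, 0.0]),
        'woman': np.array([0.0, 1.0]), 'queen': np.array([0.0, 2.0])}
target = vecs['king'] - vecs['man'] + vecs['woman']
```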
most_similar('san francisco')
cosmul(['california', 'technology'], [], topn=20)
cosmul(['digital', 'currency'], [], topn=20)
cosmul(['text editor', 'terminal'], [], topn=20)
cosmul(['china'], [], topn=20)
cosmul(['china', 'search engine'], [], topn=20)
cosmul(['microsoft'], [], topn=20)
cosmul(['microsoft', 'cloud'], [], topn=20)
cosmul(['king', 'woman'], ['man'], topn=20)
print 'Most similar'
print '\n'.join(most_similar('mark zuckerberg'))
print '\nCosmul'
pos = ['mark zuckerberg', 'amazon']
neg = ['facebook']
print '\n'.join(cosmul(pos, neg, topn=20))
print '\nTraditional Similarity'
print '\n'.join(most_similar_posneg(pos, neg, topn=20))
pos = ['hacker news', 'question']
neg = ['story']
print 'Most similar'
print '\n'.join(most_similar(pos[0]))
print '\nCosmul'
print '\n'.join(cosmul(pos, neg, topn=20))
print '\nTraditional Similarity'
print '\n'.join(most_similar_posneg(pos, neg, topn=20))
pos = ['san francisco']
neg = []
print 'Most similar'
print '\n'.join(most_similar(pos[0]))
print '\nCosmul'
print '\n'.join(cosmul(pos, neg, topn=20))
print '\nTraditional Similarity'
print '\n'.join(most_similar_posneg(pos, neg, topn=20))
pos = ['nlp', 'image']
neg = ['text']
print 'Most similar'
print '\n'.join(most_similar(pos[0]))
print '\nCosmul'
print '\n'.join(cosmul(pos, neg, topn=20))
print '\nTraditional Similarity'
print '\n'.join(most_similar_posneg(pos, neg, topn=20))
pos = ['vim', 'graphics']
neg = ['terminal']
print 'Most similar'
print '\n'.join(most_similar(pos[0]))
print '\nCosmul'
print '\n'.join(cosmul(pos, neg, topn=20))
print '\nTraditional Similarity'
print '\n'.join(most_similar_posneg(pos, neg, topn=20))
pos = ['vegetables', 'drink']
neg = ['eat']
print 'Most similar'
print '\n'.join(most_similar(pos[0]))
print '\nCosmul'
print '\n'.join(cosmul(pos, neg, topn=20))
print '\nTraditional Similarity'
print '\n'.join(most_similar_posneg(pos, neg, topn=20))
pos = ['lda', '']
neg = ['']
print 'Most similar'
print '\n'.join(most_similar(pos[0]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You don't need to run the code below unless you've trained your own model. Otherwise, just download the word vectors from the URL above.
Step2: L2 Normalize the word vectors
Step3: Queen is several rankings down, so not exactly the same as out of the box word2vec!
|
10,628
|
<ASSISTANT_TASK:>
Python Code:
def get_lincoln():
for filepath in sorted(glob.glob('Lincoln/*.txt')):
with open(filepath, 'r', encoding='latin') as f:
doc = f.read()
yield {'filepath': filepath, 'doc': doc}
def get_TCP():
for filepath in sorted(glob.glob('TCP-ECCO/*.txt')):
with open(filepath, 'r', encoding='latin') as f:
doc = f.read()
yield {'filepath': filepath, 'doc': doc}
s = swhoosh.Index('randomIdx/randomIdx', simple_schema(), reset=True)
s.load()
t = time.time()
s.add_documents(get_lincoln())
print("TIME:", time.time() - t)
s.save_and_close()
with open('randomIdx/randomIdx.manager', 'rb') as f:
s = pickle.load(f)
s.load()
r = s.reader()
print(r.doc_count())
print(r.doc_frequency('doc',b'lincoln'))
print(r.doc_field_length(21, 'doc'))
print(r.avg_field_length('doc'))
# returns (overall frequency, num docs, start loc in postings file, postings size)
s._idx['doc'].terminfo(b'lincoln')
[swhoosh.postings.load2(a[2], a[1]) for a in s._idx['doc']._postings(b'lincoln')]
# what the postings look like: (docId, frequency, positions)
s._idx['doc']._postings(b'lincoln')
qp = QueryParser("doc", schema=s._schema)
q = qp.parse("lincoln")
with s.searcher() as searcher:
results = searcher.search(q)
print(results)
for hit in results:
print('{:f}'.format(hit.score), ' | ', hit['filepath'])
def make_clean_index(ix_dirname, paths, procs=1):
ix = whoosh.index.create_in(ix_dirname, schema=simple_schema())
writer = ix.writer(procs=procs)
for filepath in paths:
add_doc(writer, filepath)
writer.commit()
return ix
def add_doc(writer, filepath):
with open(filepath, 'rb') as f:
text = f.read().decode('latin')
writer.add_document(doc=text, filepath=filepath)
t = time.time()
ix = make_clean_index('wind', sorted(glob.glob('Lincoln/*.txt')))
print("TIME:", time.time() - t)
with ix.searcher() as searcher:
results = searcher.search(q)
print(results)
for hit in results:
print('{:f}'.format(hit.score), ' | ', hit['filepath'])
s = swhoosh.Index('randomIdx2/randomIdx2', simple_schema(), reset=True)
s.load()
t = time.time()
s.add_documents(get_TCP())
print("TIME:", time.time() - t)
s = swhoosh.Index('randomIdx2/randomIdx2', simple_schema(), reset=True)
s.load()
t = time.time()
s.add_documents_multiprocessing(get_TCP(), num_procs=4)
print("TIME:", time.time() - t)
s.save()
t = time.time()
ix = make_clean_index('wind2', sorted(glob.glob('TCP-ECCO/*.txt')))
print("TIME:", time.time() - t)
t = time.time()
ix = make_clean_index('wind2', sorted(glob.glob('TCP-ECCO/*.txt')), procs=4)
print("TIME:", time.time() - t)
with s.searcher() as searcher:
results = searcher.search(q)
print(results)
for hit in results:
print('{:f}'.format(hit.score), ' | ', hit['filepath'])
print('')
with ix.searcher() as searcher:
results = searcher.search(q)
print(results)
for hit in results:
print('{:f}'.format(hit.score), ' | ', hit['filepath'])
with open('randomIdx2/randomIdx2.manager', 'rb') as f:
s = pickle.load(f)
s.load()
ix = whoosh.index.open_dir('wind2')
import numpy as np
s1 = s.searcher()
s2 = ix.searcher()
qp = QueryParser("doc", schema=s._schema)
with open('Lincoln/24-speech-1856.txt', 'r', encoding='latin') as f:
data = f.read()
query_vocab = [t.text for t in s._schema['doc'].analyzer(data)]
print('Length of query vocab:',len(query_vocab))
def random_n_query(n):
    """Generates a random query of length n."""
return ' '.join(np.random.choice(query_vocab, size=n))
def benchmark_n_query(n, trials):
t_swhoosh, t_whoosh = 0, 0
for i in range(trials):
q = qp.parse(random_n_query(n))
t = time.time()
results = s1.search(q)
t_swhoosh += time.time() - t
t = time.time()
results = s2.search(q)
t_whoosh += time.time() - t
print('- Swhoosh time per query:', "{:.2f}".format(t_swhoosh / trials * 1000), "ms")
print('- Whoosh time per query:', "{:.2f}".format(t_whoosh / trials * 1000), "ms")
return t_swhoosh/trials, t_whoosh/trials
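A benchmarking aside (a sketch, not part of the original harness): `time.perf_counter` is a higher-resolution monotonic clock than `time.time` and is the usual choice for timing short operations like these queries.

```python
# Timing pattern with perf_counter.
import time

t0 = time.perf_counter()
_ = sum(range(100000))
elapsed_ms = (time.perf_counter() - t0) * 1000
```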
x, y = benchmark_n_query(3, 100)
print('\nSwhoosh was', "{0:.0f}%".format(100*(y-x)/y), 'percent faster.')
x, y = benchmark_n_query(6, 100)
print('\nSwhoosh was', "{0:.0f}%".format(100*(y-x)/y), 'percent faster.')
x, y = benchmark_n_query(30, 100)
print('\nSwhoosh was', "{0:.0f}%".format(100*(y-x)/y), 'percent faster.')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initialize swhoosh index
Step2: Add documents to swhoosh index
Step3: Save and close, then reload swhoosh index
Step4: Wrap Index in an IndexReader and get statistics needed for BM25
Step5: Get term info and postings for 'lincoln'
Step6: Run a BM25 search with Whoosh API
Step7: Compare results to Whoosh
Step8: Note
Step9: And repeating with multiprocessing enabled
Step10: Comparing to whoosh default
Step11: And to whoosh with multiprocessing enabled
Step12: Again, the matcher results are the same
Step14: Query Performance (BM25)
Step15: 3 word queries
Step16: 6 word queries
Step17: 30 word queries
|
10,629
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
x = [-1, 2, 5, 100]
y = [123, 456, 789, 1255]
result = np.polyfit(x, y, 2)
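Once the coefficients are in hand, `np.polyval` evaluates the fitted quadratic; a quick sketch of checking the fit against the original points:

```python
# Evaluate the degree-2 fit and look at the residuals.
import numpy as np

x = [-1, 2, 5, 100]
y = [123, 456, 789, 1255]
coeffs = np.polyfit(x, y, 2)
residuals = np.array(y) - np.polyval(coeffs, x)
```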
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
10,630
|
<ASSISTANT_TASK:>
Python Code:
from astroquery.sdss import SDSS
pos = coords.SkyCoord('0h8m05.63s +14d50m23.3s', frame='icrs')
xid = SDSS.query_region(pos, spectro=True)
print(xid)
# complete the code below
favSTARquery = """SELECT p.objid, p.ra, p.dec, p.psfMag_u, # [MORE THINGS FOR YOU TO INSERT HERE]
                  FROM PhotoTag AS p , dbo.fGetNearbyObjEq(# RA, DEC, RADIUS) AS n
                  WHERE n.objID = p.objID
               """
res = SDSS.query_sql(favSTARquery)
res
# determine the u - g and the g - r colors of our star
ug =
gr =
import astropy.coordinates as coords
PTFcoords = coords.SkyCoord(312.503802, -0.706603, frame='icrs',unit='deg')
SDSScoords = coords.SkyCoord(
dist = PTFcoords.separation(SDSScoords)
print('The angular separation is {:.2f} arcsec'.format(dist.arcsec))
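For intuition, the separation can also be checked back-of-the-envelope without astropy; a sketch using the spherical law of cosines (the second coordinate pair below is made up for illustration, not the actual SDSS match):

```python
# Angular separation between two (ra, dec) pairs, in degrees.
import numpy as np

def ang_sep_deg(ra1, dec1, ra2, dec2):
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (np.sin(dec1) * np.sin(dec2)
               + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return np.degrees(np.arccos(np.clip(cos_sep, -1.0, 1.0)))

sep_arcsec = ang_sep_deg(312.503802, -0.706603, 312.5040, -0.706603) * 3600
```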
# complete the code below
dered_query = """SELECT p.objid, p.ra, p.dec,
                 [MORE THINGS FOR YOU TO INSERT HERE] as dered_ug,
                 [MORE THINGS FOR YOU TO INSERT HERE] as dered_gr,
                 [MORE THINGS FOR YOU TO INSERT HERE]
                 [MORE THINGS FOR YOU TO INSERT HERE]
                 FROM PhotoTag AS p , dbo.fGetNearbyObjEq(# RA, DEC, RADIUS) AS n
                 WHERE n.objID = p.objID
              """
res = SDSS.query_sql(dered_query)
res
# determine the de-reddened colors
ug =
gr =
# determine the boundaries of our PTF field
ra1 =
ra2 =
dec1 =
dec2 =
# complete the code below
colors_query = """SELECT p.objid, p.ra, p.dec,
                  [MORE THINGS FOR YOU TO INSERT HERE]
                  .
                  .
                  .
                  FROM PhotoTag AS p
                  WHERE [COND1] AND [COND2] AND ...
               """
colors_res = SDSS.query_sql(colors_query)
colors_res
colors_res.write(
plt.scatter( # Note, this plot will feature a lot of data
# consider using edgecolor = "None" and alpha = 0.05 in your call to plt.scatter
# to make a (potentially) even nicer looking plot, consider using plt.hexbin
plt.xlim(
plt.ylabel(
# there is a lot for you to fill in here
D_ug =
RRLcand =
print("There are {:d} RRL cands in this field".format( # complete
plt.scatter( # complete
plt.scatter( # complete
plt.legend(fancybox = True)
# lots of gaps to fill for this problem
import
source_mjds, source_mags, source_magerrs = source_lightcurve(
plt.scatter(
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Part A) Make an SDSS query
Step3: Notice that the IPython notebook has a nice interface for the table data and that our features are listed in a nice easy to read format. There are a couple nice features about astropy tables. One is that we can select any individual feature by its column name
Step4: Problem A3 Notice that the SDSS position and the PTF position do not match. Calculate the distance between these two sources to ensure that you have the correct star.
Step6: We will discuss the full specifications for selecting RRL below. But, one of the criteria stated in Sesar et al. 2007 is that
Step7: Problem B2 Using the results from the SDSS query, determine de-reddened colors, $(u - g)_0$ and $(g - r)_0$, colors of our favorite star.
Step8: Problem B3 Is $(u-g)_0$ now what you might expect for an RRL star?
Step10: Problem C2 Write a complex query that returns the de-reddened colors of all the stars in our PTF field. Note that we only want photometry for sources that have reliable measurements, so one of your conditional statements should include the clean column. We also only want sources that have PTF light curves (since we will eventually cross match PTF and SDSS), so limit your query to sources with $r \le 20.5 \; \mathrm{mag}$. There are many different ways to write this query (including an SDSS database function, or the examples given above), if you have extra time try to figure out which query is the fastest.
Step11: Problem C3 Save the astropy Table from your query into a comma separated value (csv) file titled "../data/PTF_d022683_f02_c06_SDSS_colors.csv". We will need this file for the remainding hands-on sessions.
Step12: Problem 2) Identify RRL candidates
Step13: Part B
Step14: Now that we have identified RRL cands, we should figure out where they reside in the CC diagram.
Step15: Problem 3 -- Challenge
|
10,631
|
<ASSISTANT_TASK:>
Python Code:
# Fake data parameters
r_actual = 1/7
s_actual = 2/17
data_set_size = 50000
# a number of draws from the summed geometric distributions
time_draws = (np.random.geometric(r_actual, size=data_set_size)
+ np.random.geometric(s_actual, size=data_set_size)
)
time_draws[:15]
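A quick sanity check on the simulated waiting times (a sketch using the same parameters as above): the mean of a sum of independent geometrics is the sum of their means, 1/r + 1/s = 7 + 8.5 = 15.5.

```python
# Empirical mean of the summed geometric draws vs. the analytic value.
import numpy as np

draws = (np.random.geometric(1/7, size=50000)
         + np.random.geometric(2/17, size=50000))
expected_mean = 7 + 17 / 2   # 1/(1/7) + 1/(2/17) = 15.5
```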
# 0 is not a valid draw from either geometric idstribution
assert 0 not in time_draws
# plot the histogram of the count of each integer
bins = max(time_draws)-min(time_draws)
plt.hist(time_draws, bins=bins);
with pm.Model() as try1:
r = pm.Uniform('r', 0,1)
s = pm.Uniform('s', 0,1)
# declare the two related geometric distributions
with try1:
ready = pm.Geometric('ready', p=r)
selected = pm.Geometric('selected', p=s)
# Define a custom density distribution using the two random variables
with try1:
time = pm.DensityDist('time', lambda x: ready+selected, )
with try1:
try:
trace = pm.sample(10000, step=pm.Metropolis())
except ValueError as err:
print(type(err).__name__,':', err)
with pm.Model() as try2:
r = pm.Uniform('r', 0,1)
s = pm.Uniform('s', 0,1)
try:
time = pm.DensityDist('time', lambda x: ss.geom.pmf(x, p=r) + ss.geom.pmf(x, p=s))
except TypeError as err:
print(type(err).__name__,':', err)
with pm.Model() as try3:
r = pm.Uniform('r', 0,1, testval=1/10)
s = pm.Uniform('s', 0,1, testval=1/10)
with try3:
ready = pm.Geometric('ready', p=r, )
# create residual data
adjusted_data = time_draws - ready
# force the residuals to be non-negative by clipping the numbers in a particular interval
    adjusted_data = pm.math.clip(adjusted_data, 0, np.inf)
# fit the 2nd geometric to the residuals
selected = pm.Geometric('selected', p=s, observed=adjusted_data)
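What the clip above is meant to compute, shown in plain numpy (`pm.math.clip` is the Theano-backed analogue): negative "residual" waiting times get floored at zero so the second geometric never sees an impossible observation.

```python
# numpy sketch of the clipping step.
import numpy as np

resid = np.array([-3, 0, 2, 5])
clipped = np.clip(resid, 0, np.inf)
```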
with try3:
try:
trace = pm.sample(10000, step=pm.Metropolis())
except ValueError as err:
print(type(err).__name__,':', err)
pm.traceplot(trace=trace[:2000],);
r_actual, s_actual
with pm.Model() as try4:
r = pm.Uniform('r', 0,1, testval=1/10)
s = pm.Uniform('s', 0,1, testval=1/10)
ready = pm.Geometric('ready', p=r, shape=data_set_size)
selected = pm.Geometric('selected', p=s, shape=data_set_size)
with try4:
pot = pm.Potential('pot', pm.math.sum(pm.math.switch( pm.math.eq(time_draws, ready+selected), 1, 0))/data_set_size)
with try4:
try:
trace = pm.sample(10000, step=pm.Metropolis())
except ValueError as err:
print(type(err).__name__,':', err)
pm.traceplot(trace=trace,varnames=['r', 's',]);
def geo_convolution_likelihood(p,q):
probabilities = [p*q*pm.math.sum( [((1-p)**inner) * ((1-q)**(outer-inner))
for inner in range(outer+1)])
for outer in time_draws]
intersection_prob = 1
for prob in probabilities:
intersection_prob *= prob
return intersection_prob
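A numerical caution on this likelihood (a sketch, not part of the original model): multiplying tens of thousands of per-draw probabilities underflows to exactly 0.0 in double precision, which is why likelihoods are normally accumulated as sums of logs.

```python
# Product-of-probabilities underflow vs. the log-space alternative.
import numpy as np

probs = np.full(50000, 0.05)
naive_product = np.prod(probs)          # underflows to exactly 0.0
log_likelihood = np.sum(np.log(probs))  # finite: 50000 * log(0.05)
```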
with pm.Model() as try5:
r = pm.Uniform('r', 0,1, testval=1/10)
s = pm.Uniform('s', 0,1, testval=1/10)
with try5:
pm.Potential('TotalDays', geo_convolution_likelihood(r,s))
with try5:
try:
trace = pm.sample(10000, step=pm.Metropolis())
except ValueError as err:
print(type(err).__name__,':', err)
pm.traceplot(trace=trace,varnames=['r', 's',]);
with pm.Model() as try6:
r = pm.Uniform('r', 0,1, testval=1/7)
s = pm.Uniform('s', 0,1, testval=1/17)
with try6:
ready = pm.Geometric('ready', p=r, shape=data_set_size)
selected = pm.Geometric('selected', p=s, shape=data_set_size)
selected_draws = time_draws - ready
pm.math.clip(selected_draws, 0, np.inf)
selected_obs = pm.Geometric('selected_obs', p=s, observed=selected_draws)
ready_draws = time_draws - selected
pm.math.clip(ready_draws, 0, np.inf)
ready_obs = pm.Geometric('ready_obs', p=r, observed=ready_draws)
with try6:
try:
trace = pm.sample(10000, step=pm.Metropolis())
except ValueError as err:
print(type(err).__name__,':', err)
pm.traceplot(trace=trace[5000:], varnames=['r', 's']);
import math
math.sqrt((1/17) * (1/7))
with pm.Model() as try7:
r = pm.Uniform('r', 0,1, testval=1/7)
s = pm.Uniform('s', 0,1, testval=1/17)
with try7:
ready = pm.Geometric('ready', p=r, shape=data_set_size)
selected = pm.Geometric('selected', p=s, shape=data_set_size)
with try7:
    total = ready + selected - time_draws
    pm.Potential('TotalDiff', 1/pm.math.dot(total, total))
with try7:
try:
trace = pm.sample(10000, step=pm.Metropolis())
except ValueError as err:
print(type(err).__name__,':', err)
pm.traceplot(trace=trace[5000:], varnames=['r', 's']);
with pm.Model() as try8:
r = pm.Uniform('r', 0,1, testval=1/7)
s = pm.Uniform('s', 0,1, testval=1/17)
all_convolutions = [r*s*pm.math.sum([r**i * s**(conv_num-2-i) for i in range(conv_num-2)])
for conv_num in range(1,25)
]
def log_density(x):
convol = pm.Deterministic('convol', all_convolutions[x])
return pm.math.log(r) + pm.math.log(s) + pm.math.log(convol)
sum_obs = pm.DensityDist('sum_obs', log_density, observed=time_draws)
with try8:
try:
trace = pm.sample(10000, step=pm.Metropolis())
except ValueError as err:
print(type(err).__name__,':', err)
pm.traceplot(trace=trace[:2000],);
# Example starting code in this cell
n1 = 1000
n = n1*2
r = 1/17
q = 1/3
data1 = np.random.geometric(r,size=n1)
data2 = np.random.geometric(q,size=n1)
data = data1 + data2
datatt = shared(data)
with pm.Model() as model:
t = pm.DiscreteUniform( "t", 1, data-1, shape=len(data))
r = pm.Uniform('r',0,1, testval=1/17)
s = pm.Uniform('s',0,1, testval=1/3)
Ready = pm.Geometric('Ready', r, observed=t)
Selected = pm.Geometric('Selected', s, observed=data-t)
ordering = pm.Potential('ordering',pm.math.switch(r >= s, -10000, 0))
with model:
step1 = pm.NUTS([r,s, ordering])
step2 = pm.Metropolis([t])
trace = pm.sample(20000, step=[step1, step2] )
pm.traceplot(trace[1000:],);
trace['t']
ind = len(data)*[0] + len(data)*[1]
with pm.Model() as model:
t = pm.DiscreteUniform( "t", 1, data-1, shape=len(data))
p = pm.Uniform('p',0,1,shape=2, testval=np.array([r,q]))
testmodel = pm.Deterministic('testmodel',tt.concatenate([t,datatt-t]))
process = pm.Geometric('process',p=p[ind],observed=testmodel)
with model:
step1 = pm.Metropolis([t])
step2 = pm.NUTS([p])
step3 = pm.Metropolis([testmodel, process])
trace = pm.sample(50000, [step1, step2, step3])
pm.traceplot(trace[1000:][::10], )
trace['testmodel']
trace['t']
with pm.Model() as model:
t = pm.DiscreteUniform( "t", 1 , data, shape=len(data))
p = pm.Uniform('p',0,1,shape=2, testval=np.array([r,q]))
prob_diff = pm.Deterministic('diff', pm.math.switch(p[0] + 0.20 > p[1], -10000, 0))
sepe = pm.Potential('sepe', var=prob_diff)
ind = len(data)*[0] + len(data)*[1]
testmodel = pm.Deterministic('testmodel',rateFunc(t))
process = pm.Geometric('process',p=p[ind],observed=testmodel)
with model:
trace = pm.sample(50000, pm.Metropolis())
pm.traceplot(trace[1000:][::10], ['p']);
with pm.Model() as model:
t = pm.DiscreteUniform( "t", 1 , data, shape=len(data))
p = pm.Uniform('p',np.array([0,0.3]),np.array([0.1,1]),shape=2, testval=np.array([r,q]))
prob_diff = pm.Deterministic('diff', pm.math.switch(p[0] + 0.20 > p[1], -10000, 0))
sepe = pm.Potential('sepe', var=prob_diff)
ind = len(data)*[0] + len(data)*[1]
testmodel = pm.Deterministic('testmodel',rateFunc(t))
process = pm.Geometric('process',p=p[ind],observed=testmodel)
with model:
trace = pm.sample(50000, pm.Metropolis())
pm.traceplot(trace[1000:][::10], ['p']);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Infer the Readiness and Selection parameters
Step2: I didn't understand this error message at the time, and I moved on to another attempt. But the problem was I had not declared an observed argument for any of the random variables, so PyMC3 had nothing to work with. I also discovered later that the method supplied to DensityDist must be the log probability of the probability distribution, and while I could pass
Step3: But $r$ and $s$ are not numbers; they are random variables, and the pmf method cannot accept random variables as an input for p.
Step4: The $s$ parameter is getting fit with little uncertainty, but its uncertainty region doesn't extend to cover either parameter's true value.
Step5: For $r$, it never converges to a particular value, and its uncertainty is spread from 0 to 1. The clip is clearly not enough of a "forcing" function to make $r$ converge.
Step6: The potential here defines "not making sense" as any value of $ready+selected$ that is not equal to a data point.
Step7: This is clearly not correct either. The main thing is that likelihood is calculated on the log-p scale, so log probabilities are added, not multiplied. The potential I chose therefore doesn't affect much, as adding 0 or 1 is not a large penalty to incur for being "unlikely." I think this led to the results we see here.
Step8: Try 6
Step9: Try 7
Step10: Try 8
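To make the log-probability point in the discussion concrete: a Potential adds its value to the model's total log-probability, so its multiplicative effect on the likelihood is exp(potential). A standalone sketch:

```python
import math

# adding -1 to a log-likelihood barely changes the probability...
mild_penalty = math.exp(-1)       # likelihood scaled by ~0.37
# ...while -10000 scales it by a factor that underflows to exactly zero,
# which is why switch(cond, -10000, 0) acts as a hard constraint
hard_penalty = math.exp(-10000)
```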
|
10,632
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
np.random.seed(113) #set seed before any keras import
import pandas as pd
import random
from sklearn.model_selection import train_test_split
from collections import defaultdict
from keras.preprocessing import sequence
from collections import Counter
seed=0
corpus = pd.read_csv('twistytest.csv',
index_col=0,
header=1,
names=['user_id', 'lang', 'text', 'mbti'])
#here we limit the corpus size. The SVM with all the text can learn something
corpus.text = corpus.text.apply(lambda x: x)
corpus.mbti = corpus.mbti.apply(lambda x: x[0])
#corpus = tmp.sample(frac=1, random_state=seed)
e = corpus[corpus.mbti.apply(lambda x: x == 'E')]
i = corpus[corpus.mbti.apply(lambda x: x == 'I')].sample(226)
corpus = pd.concat([e,i]).sample(frac=1, random_state=seed)
print(corpus.shape)
## set max length of doc per author
sentences = corpus.text#.apply(lambda x: x[:100000])
## trim labels: convert problem to binary classification I vs E
labels = corpus.mbti
## make sure we have a label for every data instance
assert(len(sentences)==len(labels))
data={}
np.random.seed(113) #seed
data['target']= np.random.permutation(labels)
np.random.seed(113) # use same seed!
data['data'] = np.random.permutation(sentences)
# preview the dataset
print(corpus.shape)
corpus.head()
# plot the distribution of labels
import matplotlib.pyplot as plt
l, v = zip(*Counter(labels).items())  # y_train is only defined after the split below, so use the full label set
indexes = np.arange(len(l))
width = 1
plt.bar(indexes, v, width, color=['r', 'b'])
plt.xticks(indexes + width * 0.5, l)
plt.show()
#split the data into train, dev, test
X_rest, X_test, y_rest, y_test = train_test_split(data['data'], data['target'], test_size=0.2)
X_train, X_dev, y_train, y_dev = train_test_split(X_rest, y_rest, test_size=0.2)
del X_rest, y_rest
print("#train instances: {} #dev: {} #test: {}".format(len(X_train),len(X_dev),len(X_test)))
# compute random baseline per class
Counter(y_test)['I']/sum(Counter(y_test).values())
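The cell above computes the majority-class share by hand; the same idea as a small reusable helper (a sketch, the name is mine):

```python
from collections import Counter

def majority_baseline(labels):
    # accuracy of a classifier that always predicts the most frequent class
    counts = Counter(labels)
    return max(counts.values()) / len(labels)
```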
# test
l, v = zip(*Counter(y_test).items())
indexes = np.arange(len(l))
width = 1
plt.bar(indexes, v, width, color=['r', 'b'])
plt.xticks(indexes + width * 0.5, l)
plt.show()
# dev
l, v = zip(*Counter(y_dev).items())
indexes = np.arange(len(l))
width = 1
plt.bar(indexes, v, width, color=['r', 'b'])
plt.xticks(indexes + width * 0.5, l)
plt.show()
from sklearn.svm import LinearSVC
from sklearn.dummy import DummyClassifier
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
pipeline = Pipeline([('features', FeatureUnion([('wngram', TfidfVectorizer(ngram_range=(1,2))),
('cngram', TfidfVectorizer(analyzer='char'))])),
('cls', DummyClassifier())])
pipeline.fit(X_train, y_train)
predictions = pipeline.predict(X_dev)
accuracy_score(predictions, y_dev)
print(classification_report(predictions, y_dev))
testpredictions = pipeline.predict(X_test)
print(accuracy_score(testpredictions, y_test))
print(classification_report(testpredictions, y_test))
from sklearn.svm import LinearSVC
from sklearn.dummy import DummyClassifier
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
pipeline = Pipeline([('features', FeatureUnion([('wngram', TfidfVectorizer(ngram_range=(1,2))),
('cngram', TfidfVectorizer(analyzer='char'))])),
('cls', LinearSVC())])
pipeline.fit(X_train, y_train)
predictions = pipeline.predict(X_dev)
accuracy_score(predictions, y_dev)
print(classification_report(predictions, y_dev))
testpred = pipeline.predict(X_test)
print(accuracy_score(testpred, y_test))
print(classification_report(testpred, y_test))
from keras.utils import np_utils
y2i = defaultdict(lambda: len(y2i))
y_train_num = [y2i[mbti] for mbti in y_train]
y_dev_num = [y2i[mbti] for mbti in y_dev]
y_test_num = [y2i[mbti] for mbti in y_test]
num_classes = len(np.unique(y_train_num))
print(num_classes)
from collections import defaultdict
# convert words to indices, taking care of UNKs
def get_characters(sentence, c2i):
out = []
for word in sentence.split(" "):
chars = []
for c in word:
chars.append(c2i[c])
out.append(chars)
return out
c2i = defaultdict(lambda: len(c2i))
PAD = c2i["<pad>"] # index 0 is padding
UNK = c2i["<unk>"] # index 1 is for UNK
X_train_num = [get_characters(sentence, c2i) for sentence in X_train]
c2i = defaultdict(lambda: UNK, c2i) # freeze - cute trick!
X_dev_num = [get_characters(sentence, c2i) for sentence in X_dev]
X_test_num = [get_characters(sentence, c2i) for sentence in X_test]
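The "freeze" trick above is worth seeing in isolation: before freezing, the defaultdict grows a new index for every unseen character; after rebuilding it with a constant default, unseen characters map to UNK instead. A minimal sketch:

```python
from collections import defaultdict

c2i = defaultdict(lambda: len(c2i))
PAD = c2i["<pad>"]          # index 0
UNK = c2i["<unk>"]          # index 1
for ch in "abc":
    c2i[ch]                 # indices 2, 3, 4
# freeze: unseen characters now map to UNK rather than growing the vocabulary
c2i = defaultdict(lambda: UNK, c2i)
```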
max_sentence_length=max([len(s.split(" ")) for s in X_train]
+ [len(s.split(" ")) for s in X_dev]
+ [len(s.split(" ")) for s in X_test] )
max_word_length = max([len(word) for sentence in X_train_num for word in sentence])
### we need both max sent and word length
print(max_sentence_length)
print(max_word_length)
print(X_train[0:2])
print(X_train_num[0:2]) # example how the first two sentences are encoded
def pad_words(tensor_words, max_word_len, pad_symbol_id, max_sent_len=None):
"""pad character list all to same word length"""
padded = []
for words in tensor_words:
if max_sent_len: #pad all to same sentence length (insert empty word list)
words = [[[0]]*(max_sent_len-len(words))+ words][0] #prepending empty words
padded.append(sequence.pad_sequences(words, maxlen=max_word_len, value=pad_symbol_id))
return np.array(padded)
X_train_pad_char = pad_words(X_train_num, max_word_length, 0, max_sent_len=max_sentence_length)
X_dev_pad_char = pad_words(X_dev_num, max_word_length, 0, max_sent_len=max_sentence_length)
X_test_pad_char = pad_words(X_test_num, max_word_length, 0, max_sent_len=max_sentence_length)
X_train_pad_char.shape
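Conceptually, each word's character list is padded like this. The sketch below mirrors the keras pad_sequences defaults (pre-padding and pre-truncation); worth double-checking against the keras version you are running.

```python
def pad_word(char_ids, max_word_len, pad_id=0):
    # prepend pad_id up to max_word_len; keep only the last max_word_len ids
    return ([pad_id] * max_word_len + char_ids)[-max_word_len:]
```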
from keras.models import Model, Sequential
from keras.layers import Dense, Input, GRU, TimeDistributed, Embedding, Bidirectional
import keras
batch_size=8
max_chars = len(c2i)
c_dim=50
c_h_dim=32
w_h_dim=32
char_vocab_size = len(c2i)
## lower-level character LSTM
input_chars = Input(shape=(max_sentence_length, max_word_length), name='main_input')
embedded_chars = TimeDistributed(Embedding(char_vocab_size, c_dim,
input_length=max_word_length), name='char_embedding')(input_chars)
char_lstm = TimeDistributed(Bidirectional(GRU(c_h_dim)), name='GRU_on_char')(embedded_chars)
word_lstm_from_char = Bidirectional(GRU(w_h_dim), name='GRU_on_words')(char_lstm)
# And add a prediction node on top
predictions = Dense(1, activation='relu', name='output_layer')(word_lstm_from_char)
model = Model(inputs=input_chars, outputs=predictions)
model.compile(loss='binary_crossentropy', optimizer='adam',
metrics=['accuracy'])
model.summary()
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot, plot_model
plot_model(model, to_file='model.png')
SVG(model_to_dot(model).create(prog='dot', format='svg'))
model.fit(X_train_pad_char, y_train_num, epochs=10, batch_size=8)
loss, accuracy = model.evaluate(X_dev_pad_char, y_dev_num)
print(accuracy)
loss, accuracy = model.evaluate(X_test_pad_char, y_test_num)
print(accuracy)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dataset
Step2: Test and dev set label distribution
Step3: Baseline
Step4: Results
Step5: Neural network
Step7: Text representation
Step8: Composing words only out of characters
|
10,633
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import laberinto.algen as ag
import laberinto.laberinto as lab
import numpy as np
import matplotlib.pyplot as plt
mapa1 = lab.Map()
mapa1.draw_tablero()
mapa1.create_camino()
ejemplo = mapa1.list_caminos[0]
ag.calculate_traits(ejemplo, ejemplo.dict_genes)
lab.calculate_performances(ejemplo)
ejemplo.draw_directions()
ejemplo.draw_directions()
ejemplo.draw_path()
lab.calculate_fitness(ejemplo)
ejemplo.fitness
lab.avanzar(mapa1)
lab.draw_all(mapa1)
mapa2 = lab.Map(veneno=1)
lab.avanzar(mapa2)
lab.draw_all(mapa2)
mapa2 = lab.Map(veneno=100)
lab.avanzar(mapa2)
lab.draw_all(mapa2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's draw the maze!
Step2: Looks cool, right?
Step3: A traveler trying to follow these directions would be in real trouble. Some arrows even point at the walls! We will need to be a bit flexible to traverse it
Step4: Implementing the genetic algorithm
Step5: Now that we have a numeric value, let's launch the algorithm!
Step6: Results
Step7: However, overusing the lethal neurotoxins can also cause problems. Who would have guessed!
|
10,634
|
<ASSISTANT_TASK:>
Python Code:
%%time
texts = []
docIds=[]
actuallyTrained=0;
temp=0;
for i in sen.find():
if temp<1000:
temp=temp+1
try:
raw = ''.join(i['document']).lower()
tokens = tokenizer.tokenize(raw)
stopped_tokens = [i for i in tokens if not i in en_stop]
stemmed_tokens = [p_stemmer.stem(i) for i in stopped_tokens]
texts.append(stemmed_tokens)
docIds.append(i['_id'])
actuallyTrained=actuallyTrained+1
except:
pass
else:
break
print(actuallyTrained)
%%time
dictionary = corpora.Dictionary(texts)
%%time
corpus = [dictionary.doc2bow(text) for text in texts]
%%time
ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics=20, id2word = dictionary, passes=1)
actuallyTrained
%%time
#the dim is the same as the number of topics
dim=20
result=[]
for i in range(0,actuallyTrained):
feature=[]
previousindex=0
for item in ldamodel[corpus[i]]:
index=item[0]
#print(index)
for beforeindex in range(previousindex,index):
feature.append(0)
feature.append(item[1])
previousindex=index+1
while (len(feature)<dim):
feature.append(0); #add in 0 at the end
result.append(feature)
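The loop above expands gensim's sparse (topic_id, weight) pairs into a dense vector by tracking a running index; an equivalent, simpler sketch (the function name is mine):

```python
def to_dense(topic_pairs, dim):
    # place each weight at its topic index; missing topics stay 0.0
    vec = [0.0] * dim
    for topic_id, weight in topic_pairs:
        vec[topic_id] = weight
    return vec
```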
from sklearn.cluster import KMeans
import numpy as np
%%time
kmeanstest=np.array(result)
%%time
kmeans = KMeans(n_clusters=20, random_state=0).fit(kmeanstest)
kmeans.labels_.size
len(docIds)
#before building the dictionary, test that the docIds size and the cluster result dimensions are the same
try:
assert(len(docIds)==kmeans.labels_.size)
dictionary_cocId_topicClusterItBelongs={}
for i in range(0,actuallyTrained):
dictionary_cocId_topicClusterItBelongs.update({docIds[i]:kmeans.labels_[i]})
except:
print("the docIds size is different from the topic # cluster size")
dictionary_cocId_topicClusterItBelongs
#using pickle to dump and load the data
import pickle
with open('traingrst_english.pkl', 'wb') as output:
pickle.dump(dictionary_cocId_topicClusterItBelongs,output)
#this is the way to load the dictionary object back in
pickle.load(open( "traingrst_english.pkl", "rb" ) )
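The same dump/load round trip can be sketched without touching the filesystem, using pickle.dumps and pickle.loads (the dict contents here are made up):

```python
import pickle

original = {'doc-1': 3, 'doc-2': 17}   # hypothetical docId -> cluster mapping
restored = pickle.loads(pickle.dumps(original))
```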
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Change result into a multidimensional array to feed into the k-means model
Step2: build a dictionary for ['docId','cluster #']
|
10,635
|
<ASSISTANT_TASK:>
Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(plot_style=False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import time
import numpy as np
import matplotlib.pyplot as plt
from igraph import Graph # pip install python-igraph
%watermark -a 'Ethen' -d -t -v -p igraph,numpy,matplotlib
source = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 2, 3, 4, 5]
target = [2, 3, 4, 5, 6, 7, 8, 9, 2, 3, 4, 5, 6, 7, 8, 9, 6, 7, 8, 9]
# create a directed graph
graph = Graph(directed=True)
# add the nodes/vertices (the two are used interchangeably) and edges
# 1. the .add_vertices method adds the number of vertices
# to the graph and igraph uses integer vertex id starting from zero
# 2. to add edges, we call the .add_edges method, where edges
# are specified by a tuple of integers.
graph.add_vertices(10)
graph.add_edges(zip(source, target))
print('vertices count:', graph.vcount())
print('edges count:', graph.ecount())
# a graph api should allow us to retrieve the neighbors of a node
print('neighbors: ', graph.neighbors(2, mode='out'))
# or create an adjacency list of the graph,
# as we can see node 0 and 1 are the most influential
# as the two nodes are connected to a lot of other nodes
graph.get_adjlist()
def compute_independent_cascade(graph, seed_nodes, prob, n_iters=1000):
total_spead = 0
# simulate the spread process over multiple runs
for i in range(n_iters):
np.random.seed(i)
active = seed_nodes[:]
new_active = seed_nodes[:]
# for each newly activated nodes, find its neighbors that becomes activated
while new_active:
activated_nodes = []
for node in new_active:
neighbors = graph.neighbors(node, mode='out')
success = np.random.uniform(0, 1, len(neighbors)) < prob
activated_nodes += list(np.extract(success, neighbors))
# ensure the newly activated nodes doesn't already exist
# in the final list of activated nodes before adding them
# to the final list
new_active = list(set(activated_nodes) - set(active))
active += new_active
total_spead += len(active)
return total_spead / n_iters
# assuming we start with 1 seed node
seed_nodes = [0]
compute_independent_cascade(graph, seed_nodes, prob=0.2)
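For intuition, here is a dependency-free sketch of the same Monte Carlo IC estimate on a plain adjacency dict; the names and deduplication details are mine, not taken from the igraph version above.

```python
import random

def ic_spread(adj, seeds, prob, n_iters=200, rng_seed=0):
    # adj: {node: [out-neighbors]}; returns the average number of activated nodes
    rng = random.Random(rng_seed)
    total = 0
    for _ in range(n_iters):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            new = []
            for node in frontier:
                for nbr in adj.get(node, []):
                    # one independent activation attempt per edge
                    if nbr not in active and rng.random() < prob:
                        active.add(nbr)
                        new.append(nbr)
            frontier = new
        total += len(active)
    return total / n_iters
```

With prob=1 every reachable node activates; with prob=0 only the seeds count.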
def greedy(graph, k, prob=0.2, n_iters=1000):
"""Find k nodes with the largest spread (determined by IC) from an igraph graph
using the Greedy Algorithm.
"""
# we will be storing elapsed time and spreads along the way, in a setting where
# we only care about the final solution, we don't need to record these
# additional information
elapsed = []
spreads = []
solution = []
start_time = time.time()
for _ in range(k):
best_node = -1
best_spread = -np.inf
# loop over nodes that are not yet in our final solution
# to find biggest marginal gain
nodes = set(range(graph.vcount())) - set(solution)
for node in nodes:
spread = compute_independent_cascade(graph, solution + [node], prob, n_iters)
if spread > best_spread:
best_spread = spread
best_node = node
solution.append(best_node)
spreads.append(best_spread)
elapse = round(time.time() - start_time, 3)
elapsed.append(elapse)
return solution, spreads, elapsed
# the result tells us greedy algorithm was able to find the two most influential
# node, node 0 and node 1
k = 2
prob = 0.2
n_iters = 1000
greedy_solution, greedy_spreads, greedy_elapsed = greedy(graph, k, prob, n_iters)
print('solution: ', greedy_solution)
print('spreads: ', greedy_spreads)
print('elapsed: ', greedy_elapsed)
# if we check the solutions from the greedy algorithm we've
# implemented above, we can see that our solution is in fact
# submodular, as the spread we get is in diminshing order
np.diff(np.hstack([np.array([0]), greedy_spreads]))
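That diminishing-returns check can be wrapped in a small helper (a sketch; the name is mine). It recovers the marginal gains from a cumulative-spread trace and verifies they never increase:

```python
def has_diminishing_gains(cumulative_spreads):
    # first gain is the first spread; later gains are successive differences
    gains = [cumulative_spreads[0]] + [
        b - a for a, b in zip(cumulative_spreads, cumulative_spreads[1:])
    ]
    return all(g1 >= g2 for g1, g2 in zip(gains, gains[1:]))
```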
import heapq
def celf(graph, k, prob, n_iters=1000):
"""Find k nodes with the largest spread (determined by IC) from an igraph graph
using the Cost Effective Lazy Forward Algorithm, a.k.a Lazy Greedy Algorithm.
"""
start_time = time.time()
# find the first node with greedy algorithm:
# python's heap is a min-heap, thus
# we negate the spread to get the node
# with the maximum spread when popping from the heap
gains = []
for node in range(graph.vcount()):
spread = compute_independent_cascade(graph, [node], prob, n_iters)
heapq.heappush(gains, (-spread, node))
# we pop the heap to get the node with the best spread,
# when storing the spread to negate it again to store the actual spread
spread, node = heapq.heappop(gains)
solution = [node]
spread = -spread
spreads = [spread]
# record the number of times the spread is computed
lookups = [graph.vcount()]
elapsed = [round(time.time() - start_time, 3)]
for _ in range(k - 1):
node_lookup = 0
matched = False
while not matched:
node_lookup += 1
# here we need to compute the marginal gain of adding the current node
# to the solution, instead of just the gain, i.e. we need to subtract
# the spread without adding the current node
_, current_node = heapq.heappop(gains)
spread_gain = compute_independent_cascade(
graph, solution + [current_node], prob, n_iters) - spread
# check if the previous top node stayed on the top after pushing
# the marginal gain to the heap
heapq.heappush(gains, (-spread_gain, current_node))
matched = gains[0][1] == current_node
# spread stores the cumulative spread
spread_gain, node = heapq.heappop(gains)
spread -= spread_gain
solution.append(node)
spreads.append(spread)
lookups.append(node_lookup)
elapse = round(time.time() - start_time, 3)
elapsed.append(elapse)
return solution, spreads, elapsed, lookups
k = 2
prob = 0.2
n_iters = 1000
celf_solution, celf_spreads, celf_elapsed, celf_lookups = celf(graph, k, prob, n_iters)
print('solution: ', celf_solution)
print('spreads: ', celf_spreads)
print('elapsed: ', celf_elapsed)
print('lookups: ', celf_lookups)
np.random.seed(1234)
graph = Graph.Erdos_Renyi(n=100, m=300, directed=True)
k = 10
prob = 0.1
n_iters = 1500
celf_solution, celf_spreads, celf_elapsed, celf_lookups = celf(graph, k, prob, n_iters)
greedy_solution, greedy_spreads, greedy_elapsed = greedy(graph, k, prob, n_iters)
# print resulting solution
print('celf output: ' + str(celf_solution))
print('greedy output: ' + str(greedy_solution))
# change default style figure and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 12
lw = 4
fig = plt.figure(figsize=(9,6))
ax = fig.add_subplot(111)
ax.plot(range(1, len(greedy_spreads) + 1), greedy_spreads, label="Greedy", color="#FBB4AE", lw=lw)
ax.plot(range(1, len(celf_spreads) + 1), celf_spreads, label="CELF", color="#B3CDE3", lw=lw)
ax.legend(loc=2)
plt.ylabel('Expected Spread')
plt.title('Expected Spread')
plt.xlabel('Size of Seed Set')
plt.tick_params(bottom=False, left=False)
plt.show()
lw = 4
fig = plt.figure(figsize=(9,6))
ax = fig.add_subplot(111)
ax.plot(range(1, len(greedy_elapsed) + 1), greedy_elapsed, label="Greedy", color="#FBB4AE", lw=lw)
ax.plot(range(1, len(celf_elapsed) + 1), celf_elapsed, label="CELF", color="#B3CDE3", lw=lw)
ax.legend(loc=2)
plt.ylabel('Computation Time (Seconds)')
plt.xlabel('Size of Seed Set')
plt.title('Computation Time')
plt.tick_params(bottom=False, left=False)
plt.show()
celf_lookups
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Submodular Optimization & Influence Maximization
Step2: Spread Process - Independent Cascade (IC)
Step4: We calculate the expected spread of a given seed set by taking the average over a large number of Monte Carlo simulations. The outer loop in the function iterates over each of these simulations and calculates the spread for each iteration; at the end, the mean across iterations is our unbiased estimate of the expected spread of the seed nodes we've provided. The actual number of simulations required is up for debate: through experiment I found 1,000 to work well enough, whereas 100 was too low. On the other hand, the paper even set the simulation number to 10,000.
Step5: Submodular Optimization
Step7: Cost Effective Lazy Forward (CELF) Algorithm
Step8: Larger Network
Step9: Given the graph, we again compare both optimizers with the same parameters. Again, for the n_iters parameter, it is not uncommon to see it set to a much higher number in the literature, such as 10,000, to get a more accurate estimate of the spread; we chose a lower number here so we don't have to wait as long for the results
Step10: Thankfully, both optimization method yields the same solution set.
Step11: We now compare the speed of each algorithm. The plot below shows that the computation time of Greedy is larger than CELF for all seed set sizes greater than 1 and the difference in computational times grows exponentially with the size of the seed set. This is because Greedy must compute the spread of $N-i-1$ nodes in iteration $i$ whereas CELF generally performs far fewer spread computations after the first iteration.
Step12: We can get some further insight into the superior computational efficiency of CELF by observing how many "node lookups" it had to perform during each of the 10 rounds. The list that records this information shows that the first round iterated over all 100 nodes of the network. This is identical to Greedy which is why the graph above shows that the running time is equivalent for $k=1$. However, for subsequent iterations, there are far fewer spread computations because the marginal spread of a node in a previous iteration is a good indicator for its marginal spread in a future iteration. Note the relationship between the values below and the corresponding computation time presented in the graph above. There is a visible jump in the blue line for higher values of the "node lookups". This again solidifies the fact that while CELF produces identical solution set as Greedy, it usually has enormous speedups over the standard Greedy procedure.
|
10,636
|
<ASSISTANT_TASK:>
Python Code:
model = keras.models.load_model('example_keras_mnist_model.h5')
model.summary()
dataset = mnist.load_data()
train_data = dataset[0][0] / 255
train_data = train_data[..., np.newaxis].astype('float32')
train_labels = np_utils.to_categorical(dataset[0][1]).astype('float32')
test_data = dataset[1][0] / 255
test_data = test_data[..., np.newaxis].astype('float32')
test_labels = np_utils.to_categorical(dataset[1][1]).astype('float32')
test_data.shape
for i in range(5):
plt.imshow(test_data[i, ..., 0])
plt.show()
softmax_predictions = model.predict(test_data[:5])
softmax_predictions
predictions = np.argmax(softmax_predictions, axis=-1)
predictions
predictions == np.argmax(test_labels[:5], axis=1)
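The np.argmax(..., axis=-1) step simply picks, per row, the index of the largest softmax probability; a dependency-free equivalent:

```python
def argmax_rows(softmax_rows):
    # predicted class per row = index of the largest probability
    return [max(range(len(row)), key=row.__getitem__) for row in softmax_rows]
```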
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading the dataset and looking at the first five samples of the test data.
Step2: Doing the inference
|
10,637
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from scipy import optimize
from scipy import spatial
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
sns.set(rc={"figure.figsize": (15, 6)})
sns.set_palette(sns.color_palette("Set2", 10))
lalonde_data = pd.read_csv('lalonde.csv')
#Function that plots a boxplot for re78
def compare_groups(data):
plt.figure(figsize=(10,10))
sns.boxplot(x='treat', y='re78', data=data, showfliers=False, showmeans=True, meanline=True, meanprops=dict(color='r'))
plt.xticks(range(2), ["Control Group", "Treatment Group"])
plt.show()
compare_groups(lalonde_data)
#We keep track of the ratio (Treatment group real earnings in 1978 mean) / (Control group real earnings in 1978 mean) after each improvement done in exercise 1
means_ratio_over_improvement = []
#A function that prints the mean of real earnings in 1978 in both group
def print_means(data):
data_means = data.groupby("treat").agg(np.mean)
print("Control group real earnings in 1978 mean: {:.0f}".format(data_means["re78"].loc[0]))
print("Treatment group real earnings in 1978 mean: {:.0f}".format(data_means["re78"].loc[1]))
ratio = data_means["re78"].loc[1]/data_means["re78"].loc[0]
means_ratio_over_improvement.append(ratio)
print("Ratio (treatment/control): {:.2f}".format(ratio))
print_means(lalonde_data)
#Features of each group
main_variables = ['black', 'hispan', 'age', 'married', 'nodegree', 'educ']
#Function that displays a bar plot of each group for every feature
def display_proportions(data, variables=main_variables, n_cols=3):
N = len(variables)
f, axes = plt.subplots(nrows=int(np.ceil(N / n_cols)), ncols=n_cols)
f.set_figheight(10)
for idx, axis, var in zip(range(N), axes.flatten(), variables):
sns.barplot(x='treat', y=var, data=data, ax=axis)
axis.set_xticklabels(["Control Group", "Treatment Group"])
axis.set_xlabel("")
axis.set_title(idx+1)
axis.set_ylabel("mean of {}".format(var))
display_proportions(lalonde_data)
lalonde_count = lalonde_data.groupby("treat").agg("count")
print("Number of people in the control group: {}".format(lalonde_count["re78"].loc[0]))
print("Number of people in the treatment group: {}".format(lalonde_count["re78"].loc[1]))
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
#Select features, that is drop id and treat columns
selectedFeatures = lalonde_data.drop(['id','treat'], axis=1)
#Fit the model
lr.fit(selectedFeatures, lalonde_data['treat']);
#Calculate the propensity scores
propensity_scores = lr.predict_proba(selectedFeatures)
#Only keep the probability of receiving the treatment and store it inside the dataframe
lalonde_data['propensity score'] = [x[1] for x in propensity_scores]
#One dataframe per group
control_group = lalonde_data[lalonde_data['treat'] == 0]
treatment_group = lalonde_data[lalonde_data['treat'] == 1]
#Compute the distance matrix using the absolute difference of the propensity scores
cost_matrix = spatial.distance.cdist(
treatment_group["propensity score"].values.reshape((treatment_group.shape[0], 1)),
control_group["propensity score"].values.reshape((control_group.shape[0], 1)),
metric=lambda a,b: np.abs(a - b)
)
#Solve the distance matrix to minimize the total cost function, where the total cost function is the sum of the distances,
#and get the indices of the pairs that minimize this total cost function
treatment_ind, control_ind = optimize.linear_sum_assignment(cost_matrix)
#We construct a dataframe with the rows corresponding to the indices obtained above. Note we have the same number of samples in each group by construction
lalonde_ps_matched = pd.concat((treatment_group.iloc[treatment_ind], control_group.iloc[control_ind]))
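linear_sum_assignment returns the row/column indices of the matching that minimizes the total cost. For a tiny matrix the same answer can be brute-forced over column permutations, which makes the objective explicit (illustrative only: factorial time):

```python
from itertools import permutations

def brute_force_assignment(cost):
    # cost is a square list-of-lists; returns (row_indices, col_indices)
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(range(n)), list(best)
```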
display_proportions(lalonde_ps_matched)
compare_groups(lalonde_ps_matched)
print_means(lalonde_ps_matched)
additionnal_feature_matched = 'hispan'
#Compute the distance matrix where a value is 0 if the row and the column have the same hispan value, 1 otherwise
add_cost_matrix = spatial.distance.cdist(
treatment_group[additionnal_feature_matched].values.reshape((treatment_group.shape[0], 1)),
control_group[additionnal_feature_matched].values.reshape((control_group.shape[0], 1)),
metric=lambda a,b: int(a != b)
)
#Solve the distance matrix (obtained by adding the propensity score distance matrix to the hispan distance matrix) to minimize the total cost function,
#where the total cost function is the sum of the distances,
#and get the indices of the pairs that minimize this total cost function
treatment_ind_2, control_ind_2 = optimize.linear_sum_assignment(cost_matrix + add_cost_matrix)
#We construct a dataframe with the rows corresponding to the indices obtained above. Note we have the same number of samples in each group by construction
lalonde_ps_matched_2 = pd.concat((treatment_group.iloc[treatment_ind_2], control_group.iloc[control_ind_2]))
display_proportions(lalonde_ps_matched_2)
compare_groups(lalonde_ps_matched_2)
print_means(lalonde_ps_matched_2)
#Plot the means we recorded after each improvement
sns.barplot(y=means_ratio_over_improvement, x = ["Naive", "Propensity score", "Propensity score + hispan matching"])
from sklearn import metrics
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from time import time
#Loading data
all_news = fetch_20newsgroups(subset='all')
vectorizer = TfidfVectorizer(stop_words='english', max_df=0.5, sublinear_tf=True)
#Vectorizing
news_data = vectorizer.fit_transform(all_news.data)
news_target = all_news.target
news_target_names = all_news.target_names
feature_names = vectorizer.get_feature_names()
# this could have been done in a simpler way for this homework,
# but it might be useful to have such a powerful function for other uses,
# hence we decide to keep it here so that other could use it too :)
def split(X, y, ratios):
"""Split X and y given some ratios
Parameters
----------
X : ndarray
train matrix
y : ndarray
test matrix
ratios : list(int)
ratios on how to split X and y
Returns
-------
out : tuple(ndarray)
Output one tuple of first, the splits of X and then, the splits of y
"""
assert np.sum(ratios) < 1, "sum of ratios cannot be greater than 1"
assert len(ratios) >= 1, "at least one ratio required to split"
def inner_split(X, y, ratios, acc_X, acc_y):
ratio, *ratios_remaining = ratios
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=ratio)
if len(ratios_remaining) == 0:
acc_X.extend([X_train, X_test])
acc_y.extend([y_train, y_test])
acc_X.extend(acc_y)
return tuple(acc_X)
else:
acc_X.append(X_train)
acc_y.append(y_train)
return inner_split(X_test, y_test, [r/(1.0 - ratio) for r in ratios_remaining], acc_X, acc_y)
return inner_split(X, y, ratios, [], [])
def predict(clf, X_train, y_train, X_test):
"""Train a classifier on the training data and labels,
and then predict the labels of the testing data.
It also times the different steps.
Parameters
----------
clf: sklearn classifier
classifier
X_train: ndarray
training data
y_train: ndarray
training labels
X_test: ndarray
testing data
Returns
-------
out : ndarray
Output the prediction of labels
"""
start_time = time()
print("Prediction computations started...")
clf.fit(X_train, y_train)
train_time = time() - start_time
pred = clf.predict(X_test)
prediction_time = time() - train_time - start_time
print("...Finished")
print("Training time = {}s".format(round(train_time)))
print("Prediction time = {}s".format(round(prediction_time // 1)))
return pred
def report(results, n_top=3, compared_to=10):
"""Print the parameters of the best grid search cross-validation results
and plot their accuracy compared to another accuracy score.
Parameters
----------
results: sklearn grid search cv_results_
grid search cross-validation results
n_top: int
the number of best results to plot
compared_to: int
the nth best results to compare the best results with
Returns
-------
out : None
Output some prints and a plot
"""
means = []
stds = []
for i in range(1, n_top + 1):
candidates = np.flatnonzero(results['rank_test_score'] == i)
for candidate in candidates:
mean = results['mean_test_score'][candidate]
std = results['std_test_score'][candidate]
means.append(mean)
stds.append(std)
print("Model with rank: {}".format(i))
print("Mean validation score: {0:.4f} (std: {1:.4f})".format(mean, std))
print("Parameters: {}".format(results['params'][candidate]))
min_ = np.min(results['mean_test_score'][results['rank_test_score'] == (compared_to)])
print('\n{0:}\'th score = {1:.4f}'.format(compared_to, min_))
means = np.array(means) - min_
plt.title("Top {0} best scores (compared to the {1}'th score = {2:.3f})".format(n_top, compared_to, min_))
plt.bar(range(n_top), means, yerr=stds, align="center")
plt.xticks(range(n_top), range(1, n_top + 1))
plt.xlabel("n'th best scores")
plt.ylabel("score - {}'th score".format(compared_to))
plt.show()
ratios = [0.8, 0.1] #Ratio is 0.8 for train and twice 0.1 for test and validation
X_train, X_test, X_validation, \
y_train, y_test, y_validation = split(news_data, news_target, ratios)
# use a full grid over max_depth and n_estimators parameters
param_grid = {
"max_depth": [3, 10, 20, None],
"n_estimators": np.linspace(3, 200, num=5, dtype=int)
#"max_features": [1, 3, 10],
#"min_samples_split": [2, 3, 10],
#"min_samples_leaf": [1, 3, 10],
#"bootstrap": [True, False],
#"criterion": ["gini", "entropy"]
}
# run grid search
grid_search = GridSearchCV(RandomForestClassifier(), param_grid=param_grid)
grid_search.fit(X_validation, y_validation)
None #No output cell
report(grid_search.cv_results_, n_top=5, compared_to=10)
rank_chosen = 1 #Position of the parameters we choose
best_params = grid_search.cv_results_['params'][np.flatnonzero(grid_search.cv_results_['rank_test_score'] == rank_chosen)[0]]
random_forest_clf = RandomForestClassifier(**best_params)
pred = predict(random_forest_clf, X_train, y_train, X_test)
#Choose the average type
average_type = "weighted"
#Get the different scores of the predicion computed above
accuracy = metrics.accuracy_score(y_test, pred)
precision = metrics.precision_score(y_test, pred, average=average_type)
recall = metrics.recall_score(y_test, pred, average=average_type)
f1_score = metrics.f1_score(y_test, pred, average=average_type)
print("accuracy = {:.4f}".format(accuracy))
print("precision = {:.4f}".format(precision))
print("recall = {:.4f}".format(recall))
print("f1_score = {:.4f}".format(f1_score))
classes = range(len(news_target_names))
def sum_by_class(arr):
return np.array([np.sum(arr == i) for i in classes])
test_sum_by_class = sum_by_class(y_test)
val_sum_by_class = sum_by_class(y_validation)
train_sum_by_class = sum_by_class(y_train)
p1 = plt.bar(classes, test_sum_by_class)
p2 = plt.bar(classes, val_sum_by_class, bottom=test_sum_by_class)
p3 = plt.bar(classes, train_sum_by_class, bottom=test_sum_by_class + val_sum_by_class)
plt.xticks(classes, news_target_names, rotation='vertical')
plt.tick_params(axis='x', labelsize=15)
plt.legend((p1[0], p2[0], p3[0]), ('test', 'validation', 'train'))
plt.show()
import itertools
# A function to plot the confusion matrix, taken from http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html#sphx-glr-auto-examples-model-selection-plot-confusion-matrix-py
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=90)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
cnf_matrix = metrics.confusion_matrix(y_test, pred)
# Plot non-normalized confusion matrix
plt.figure(figsize=(25, 15))
plot_confusion_matrix(cnf_matrix, classes=news_target_names, title='Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure(figsize=(25, 15))
plot_confusion_matrix(cnf_matrix, classes=news_target_names, normalize=True, title='Normalized confusion matrix')
importances = random_forest_clf.feature_importances_
std = np.std([tree.feature_importances_ for tree in random_forest_clf.estimators_], axis=0)
#Sort the feature by importance
indices = np.argsort(importances)[::-1]
print("Total number of features = {}".format(len(indices)))
# Only most important ones (out of thousands)
num_best = 20
best_indices = indices[:num_best]
best_importances = importances[best_indices]
best_std = std[best_indices]
# Plot the feature importances
plt.figure()
plt.title("20 best feature importances")
plt.bar(range(num_best), best_importances, yerr=best_std, align="center")
plt.xticks(range(num_best), np.array(feature_names)[best_indices], rotation='vertical')
plt.tick_params(axis='x', labelsize=15)
plt.xlim([-1, num_best])
plt.xlabel("Feature indices")
plt.ylabel("Feature names")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Motivations
Step2: A naive analysis would claim that there are no clear differences between the two groups and thus would conclude that the "Job Training Program" (JTP) is useless. And if a difference exists, people in the treatment group have a revenue that is 10% smaller, hence the treatment would be worse than no treatment at all.
Step3: Observations
Step4: As we can see, there are 2.3 times as many samples in the control group. Because of this, we can be picky and select only the part of the control group that corresponds to the samples in the treatment group. To do so, we will match two samples together, one from each group, according to their propensity scores, and then only keep and compare the matched samples.
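The one-to-one matching idea can be illustrated library-free (the notebook itself relies on `scipy.optimize.linear_sum_assignment`); this is a brute-force sketch over hypothetical toy propensity scores, not the notebook's actual implementation:

```python
from itertools import permutations

def best_matching(treated, control):
    # Try every assignment of distinct control units to treated units and
    # keep the one minimizing the total absolute propensity-score distance.
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(len(control)), len(treated)):
        cost = sum(abs(t - control[j]) for t, j in zip(treated, perm))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost

# Toy propensity scores (assumed values, for illustration only)
treated = [0.8, 0.3]
control = [0.25, 0.9, 0.5]
pairs, cost = best_matching(treated, control)
```

For realistic group sizes the brute force is intractable, which is why the notebook uses the Hungarian-algorithm solver instead.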
Step5: 1.4) Balancing the dataset via matching
Step6: Now, let's compare the difference in the distribution of each feature in the two groups, as done earlier in part 1.2
Step7: Observations
Step8: We can now see that the mean in the treatment group is slightly higher than in the control group, where it was slightly below before. Also, the maximum, median and quartiles are all bigger than their counterparts in the control group. This is completely different information from what we had before, but let's improve it even more.
Step9: 1.6) A less naive analysis
Step10: Observations
Step11: The difference in the salaries we perceived in part 1.4 increased, but not significantly.
Step12: Question 2
Step13: 2.1) Loading, TF-IDF and Splitting
Step17: Utility functions
Step18: Data splitting
Step19: 2.2) Random Forest
Step20: After having computed an estimation of our model with many different parameters we choose the best parameters (comparing their mean score and std)
Step21: Let's save the parameters which give the best result inside a variable
Step22: Random forest classification
Step23: As one can see, neither precision, recall nor f1_score adds much information. This is because there are many classes (20) which are uniformly distributed
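One way to see why these scores barely add information here: for single-label multi-class data, micro-averaged precision and recall collapse to accuracy exactly (and with near-uniform classes the weighted variants used above stay close to it). A stdlib sketch on hypothetical labels:

```python
# Toy multi-class labels (assumed values; the notebook uses sklearn.metrics)
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 1]

correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)

# Micro averaging pools predictions across classes, so the pooled true
# positives equal the number of correct predictions and
# precision == recall == f1 == accuracy.
micro_precision = correct / len(y_pred)
```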
Step25: The plot above shows that every class is well represented in the test, training and validation sets.
Step26: What the confusion matrices show is that we did a pretty good job at assigning the categories, except that we categorized quite a lot of things in religion.christian instead of religion.misc, which is understandable because both categories are closely related. Also, atheism is closely related to religion, hence the above-average value for this category, but it is still a small value. The last part where we could have done better is with the topics about technology (pc.hardware, mac.hardware, etc.), which are again topics that are very closely related. But overall our classifier can categorize a news item correctly, and if not, it classifies it in a category closely related to the correct one.
|
10,638
|
<ASSISTANT_TASK:>
Python Code:
import seaborn as sns
import metapack as mp
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import display
from publicdata.chis import *
%matplotlib inline
sns.set_context('notebook')
# Opening a source package presumes you are working with the notebook in the source package,
# https://github.com/sandiegodata-projects/chis.git
pkg = mp.jupyter.open_source_package()
pkg
df = pkg.reference('adult_2017').dataframe()
t = df.pivot_table(values='rakedw0', columns='diabetes', index=df.index)
t2 = t.sum().round(-3)
t2
t2.sum()
(t2/t2.sum()*100).round(1)
t = df[['diabetes','rakedw0']].set_index('diabetes',append=True).unstack()
t2 = t.sum().round(-3)
diabetes_yes = t2.unstack().loc['rakedw0','YES']
diabetes_no = t2.unstack().loc['rakedw0','NO']
diabetes_yes, diabetes_no
weight_cols = [c for c in df.columns if 'raked' in c]
t = df[['diabetes']+weight_cols] # Get the column of interest, and all of the raked weights
t = t.set_index('diabetes',append=True) # Move the column of interest into the index
t = t.unstack() # Unstack the column of interest, so both values are now in multi-level columns
t = t.sum() # Sum all of the weights for each of the raked weght set and "YES"/"NO"
t = t.unstack() # Now we have sums for each of the replicates, for each of the variable values.
t = t.sub(t.loc['rakedw0']).iloc[1:] # Subtract off the median estimate from each of the replicates
t = (t**2).sum() # sum of squares
ci_95 = np.sqrt(t)*1.96 # sqrt to get stddev, and 1.96 to get 95% CI
((diabetes_yes-ci_95.loc['YES'])/29_456_000*100).round(1), ((diabetes_yes+ci_95.loc['YES'])/29_456_000*100).round(1)
((diabetes_no-ci_95.loc['NO'])/29_456_000*100).round(1), ((diabetes_no+ci_95.loc['NO'])/29_456_000*100).round(1)
def chis_estimate(df, column, ci=True, pct=True, rse=False):
"""Calculate estimates for CHIS variables, with variances, as 95% CI, from the replicate weights"""
weight_cols = [c for c in df.columns if 'raked' in c]
t = df[[column]+weight_cols] # Get the column of interest, and all of the raked weights
t = t.set_index(column,append=True) # Move the column of interest into the index
t = t.unstack() # Unstack the column of interest, so both values are now in multi-level columns
t = t.sum() # Sum all of the weights for each of the raked weight set and "YES"/"NO"
t = t.unstack() # Now we have sums for each of the replicates, for each of the variable values.
est = t.iloc[0].to_frame() # Replicate weight 0 is the estimate
est.columns = [column]
total = est.sum()[column]
t = t.sub(t.loc['rakedw0']).iloc[1:] # Subtract off the median estimate from each of the replicates
t = (t**2).sum() # sum of squares
se = np.sqrt(t) # sqrt to get stddev,
ci_95 = se*1.96 # and 1.96 to get 95% CI
if ci:
est[column+'_95_l'] = est[column] - ci_95
est[column+'_95_h'] = est[column] + ci_95
else:
est[column+'_se'] = se
if pct:
est[column+'_pct'] = (est[column]/total*100).round(1)
if ci:
est[column+'_pct_l'] = (est[column+'_95_l']/total*100).round(1)
est[column+'_pct_h'] = (est[column+'_95_h']/total*100).round(1)
if rse:
est[column+'_rse'] = (se/est[column]*100).round(1)
est.rename(columns={column:column+'_count'}, inplace=True)
return est
chis_estimate(df, 'diabetes', ci=False, pct=False)
# This validates with the whole population for 2017, from the AskCHIS web application
chis_estimate(df, 'ag1')
# This validates with the latino subset for 2017, from the AskCHIS web application
chis_estimate(df[df.racedf_p1=='LATINO'], 'ag1')
def chis_segment_estimate(df, column, segment_columns):
"""Return aggregated CHIS data, segmented on one or more other variables."""
if not isinstance(segment_columns, (list,tuple)):
segment_columns = [segment_columns]
odf = None
for index,row in df[segment_columns].drop_duplicates().iterrows():
query = ' and '.join([ "{} == '{}'".format(c,v) for c,v in zip(segment_columns, list(row))])
x = chis_estimate(df.query(query), column, ci=True, pct=True, rse=True)
x.columns.names = ['measure']
x = x.unstack()
for col,val in zip(segment_columns, list(row)):
x = pd.concat([x], keys=[val], names=[col])
if odf is None:
odf = x
else:
odf = pd.concat([odf, x])
odf = odf.to_frame()
odf.columns = ['value']
return odf
chis_segment_estimate(df, 'diabetes', ['racedf_p1', 'ur_ihs']).head(20)
t = chis_segment_estimate(df, 'diabetes', ['racedf_p1', 'ur_ihs'])
t.unstack('measure').head()
t = chis_segment_estimate(df, 'diabetes', ['racedf_p1', 'ur_ihs'])
idx = pd.IndexSlice # Convenience redefinition.
# The IndexSlices should have one term ( seperated by ',') for each of the levels in the index.
# We have one `IndexSlice` for rows, and one for columns. Note that the ``row_indexer`` has 4 terms.
row_indexer = idx[:,:,('diabetes_pct','diabetes_rse'),'YES']
col_indexer = idx[:]
# Now we can select with the two indexers.
t = t.loc[row_indexer,col_indexer]
# Rotate the measures out of rows into columns
t = t.unstack('measure')
# The columns are multi-level, but there is only one value for the first level,
# so it is useless.
t.columns = t.columns.droplevel()
# Only use estimates wtih RSE < 30%
t = t[t.diabetes_rse < 30]
# We don't nee the RSE colum any more.
t = t.drop(columns='diabetes_rse')
# Move the Rural/Urban into columns
t = t.unstack(0)
t
x = chis_segment_estimate(df, 'diabetes', ['racedf_p1', 'am3'])
row_indexer = idx[('YES','NO'),:,('diabetes_pct','diabetes_rse'),'YES']
col_indexer = idx[:]
t = x.loc[row_indexer,col_indexer].unstack('measure')
t.columns = t.columns.droplevel()
t = t[t.diabetes_rse < 30].drop(columns='diabetes_rse')
t
x = chis_segment_estimate(df, 'diabetes', ['racedf_p1', 'am3'])
row_indexer = idx[:,:,('diabetes_pct','diabetes_rse'),'YES']
col_indexer = idx[:]
t = x.loc[row_indexer,col_indexer].unstack('measure')
#t.index = t.index.droplevel('diabetes')
t.columns = t.columns.droplevel()
t = t[t.diabetes_rse < 30].drop(columns='diabetes_rse')
t.unstack(0)
chis_segment_estimate(df, 'diabetes', 'am3')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Estimates Using Pivot
Step2: Summing across responses yields the total population, which we can use to calculate percentages.
Step3: Estimates Using Unstack
Step4: Calculating Variance
Step5: The final percentage ranges match those from AskCHIS.
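The replicate-weight variance procedure above boils down to a few lines of arithmetic; a stdlib-only sketch with hypothetical replicate totals (the rakedw0 total plays the role of the point estimate):

```python
import math

# Hypothetical totals: rakedw0 gives the point estimate, rakedw1..n the replicates
point_estimate = 1000.0
replicate_totals = [990.0, 1012.0, 1005.0, 998.0]

# Sum of squared deviations of each replicate total from the point estimate
ss = sum((r - point_estimate) ** 2 for r in replicate_totals)
se = math.sqrt(ss)          # standard error
ci_95 = 1.96 * se           # half-width of the 95% confidence interval
low, high = point_estimate - ci_95, point_estimate + ci_95
```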
Step7: Functions
Step9: Segmenting Results
Step10: The dataframe returned by this function has a multi-level index, which include all of the unique values from the segmentation columns, a level for measures, and the values from the target column. For instance
Step11: You can "pivot" a level out of the row into the columns with unstack(). Here we move the measures out of the row index into columns.
Step12: Complex selections can be made with .loc.
|
10,639
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from qutip import *
delta = 0.0 * 2 * np.pi
epsilon = 0.5 * 2 * np.pi
gamma = 0.25
times = np.linspace(0, 10, 100)
H = delta/2 * sigmax() + epsilon/2 * sigmaz()
H
psi0 = (2 * basis(2, 0) + basis(2, 1)).unit()
c_ops = [np.sqrt(gamma) * sigmam()]
a_ops = [sigmax()]
e_ops = [sigmax(), sigmay(), sigmaz()]
result_me = mesolve(H, psi0, times, c_ops, e_ops)
result_brme = brmesolve(H, psi0, times, a_ops, e_ops, spectra_cb=[lambda w : gamma * (w > 0)])
plot_expectation_values([result_me, result_brme]);
b = Bloch()
b.add_points(result_me.expect, meth='l')
b.add_points(result_brme.expect, meth='l')
b.make_sphere()
N = 10
w0 = 1.0 * 2 * np.pi
g = 0.05 * w0
kappa = 0.15
times = np.linspace(0, 25, 1000)
a = destroy(N)
H = w0 * a.dag() * a + g * (a + a.dag())
# start in a superposition state
psi0 = ket2dm((basis(N, 4) + basis(N, 2) + basis(N,0)).unit())
c_ops = [np.sqrt(kappa) * a]
a_ops = [[a + a.dag(),lambda w : kappa * (w > 0)]]
e_ops = [a.dag() * a, a + a.dag()]
result_me = mesolve(H, psi0, times, c_ops, e_ops)
result_brme = brmesolve(H, psi0, times, a_ops, e_ops)
plot_expectation_values([result_me, result_brme]);
times = np.linspace(0, 25, 250)
n_th = 1.5
c_ops = [np.sqrt(kappa * (n_th + 1)) * a, np.sqrt(kappa * n_th) * a.dag()]
result_me = mesolve(H, psi0, times, c_ops, e_ops)
w_th = w0/np.log(1 + 1/n_th)
def S_w(w):
if w >= 0:
return (n_th + 1) * kappa
else:
return (n_th + 1) * kappa * np.exp(w / w_th)
a_ops = [[a + a.dag(),S_w]]
result_brme = brmesolve(H, psi0, times, a_ops, e_ops)
plot_expectation_values([result_me, result_brme]);
result_me = mesolve(H, psi0, times, c_ops, [])
result_brme = brmesolve(H, psi0, times, a_ops, [])
n_me = expect(a.dag() * a, result_me.states)
n_brme = expect(a.dag() * a, result_brme.states)
fig, ax = plt.subplots()
ax.plot(times, n_me, label='me')
ax.plot(times, n_brme, label='brme')
ax.legend()
ax.set_xlabel("t");
N = 10
a = tensor(destroy(N), identity(2))
sm = tensor(identity(N), destroy(2))
psi0 = ket2dm(tensor(basis(N, 1), basis(2, 0)))
e_ops = [a.dag() * a, sm.dag() * sm]
w0 = 1.0 * 2 * np.pi
g = 0.05 * 2 * np.pi
kappa = 0.05
times = np.linspace(0, 5 * 2 * np.pi / g, 1000)
a_ops = [[(a + a.dag()),lambda w : kappa*(w > 0)]]
c_ops = [np.sqrt(kappa) * a]
H = w0 * a.dag() * a + w0 * sm.dag() * sm + g * (a + a.dag()) * (sm + sm.dag())
result_me = mesolve(H, psi0, times, c_ops, e_ops)
result_brme = brmesolve(H, psi0, times, a_ops, e_ops)
plot_expectation_values([result_me, result_brme]);
w0 = 1.0 * 2 * np.pi
g = 0.75 * 2 * np.pi
kappa = 0.05
times = np.linspace(0, 5 * 2 * np.pi / g, 1000)
c_ops = [np.sqrt(kappa) * a]
H = w0 * a.dag() * a + w0 * sm.dag() * sm + g * (a + a.dag()) * (sm + sm.dag())
result_me = mesolve(H, psi0, times, c_ops, e_ops)
result_brme = brmesolve(H, psi0, times, a_ops, e_ops)
plot_expectation_values([result_me, result_brme]);
from qutip.ipynbtools import version_table
version_table()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Two-level system
Step2: Harmonic oscillator
Step3: Zero temperature
Step4: Finite temperature
Step5: Storing states instead of expectation values
Step6: Atom-Cavity
Step7: Weak coupling
Step8: In the weak coupling regime there is no significant difference between the Lindblad master equation and the Bloch-Redfield master equation.
Step9: In the strong coupling regime there are some corrections to the Lindblad master equation that is due to the fact system eigenstates are hybridized states with both atomic and cavity contributions.
|
10,640
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
test_df = pd.read_csv('test_20180320.csv')
import json
headers_df = pd.io.json.json_normalize(test_df['Headers'].apply(json.loads))
merged_test_df = pd.merge(test_df, headers_df, left_index=True, right_index=True)
import datetime
merged_test_df['Timestamp'] = pd.to_datetime(test_df.Timestamp, unit='s')
merged_test_df.index = pd.to_datetime(test_df.Timestamp, unit='s')
merged_test_df = merged_test_df.drop(['user-agent','Headers','Timestamp'], 1)
content_length_df = pd.DataFrame(merged_test_df['Content-Length'].value_counts())
top_content = content_length_df.head(11)
import matplotlib
import matplotlib.pyplot as plt
t = content_length_df['Content-Length']
o = content_length_df.index
fig, ax = plt.subplots()
ax.scatter(t, o, marker='.')
ax.set(xlabel='occurrences', ylabel='bytes')
plt.show()
five_ip_df = merged_test_df['X-Forwarded-For'].value_counts() == 5
five_ip_arr = []
for i in range(len(five_ip_df)):
if five_ip_df.values[i]:
five_ip_arr.append(five_ip_df.index[i])
thousand_arr = []
for i in range(len(five_ip_arr)):
temp_df = merged_test_df.loc[merged_test_df['X-Forwarded-For'] == five_ip_arr[i]].sort_index()
for j in range(len(temp_df)-1):
temp_time = (temp_df.index[j+1] - temp_df.index[j]).total_seconds()
if temp_time % 1000 == 0: # If this IP address has two records at some multiple of 1000 seconds apart
thousand_arr.append(five_ip_arr[i])
ip_nets = merged_test_df['IP'].str.extract(r'(\d{,3}.\d{,3}.\d{,3})')
ip_nets.columns = ['network']
ip_nets['network'].value_counts().head()
forward_nets = merged_test_df['X-Forwarded-For'].str.extract(r'(\d{,3}.\d{,3}.\d{,3})')
forward_nets.columns = ['network']
forward_nets['network'].value_counts().head()
overall_by_minute = merged_test_df['IP'].resample('T').count()
top_ip_by_minute = merged_test_df.loc[merged_test_df['IP'] == '225.19.49.85']['IP'].resample('T').count()
auth_by_minute = merged_test_df.loc[merged_test_df['path'] == '/auth']['IP'].resample('T').count()
zh_by_minute = merged_test_df.loc[merged_test_df['Accept-Language'].str.contains('zh', na = False)]['IP'].resample('T').count()
mozillia_by_minute = merged_test_df.loc[merged_test_df['User-Agent'].str.contains('Mozillia', na = False)]['IP'].resample('T').count()
tc_by_minute = merged_test_df.loc[merged_test_df["Content-Length"].isin(top_content.index)]['IP'].resample('T').count()
thousand_by_minute = merged_test_df.loc[merged_test_df["X-Forwarded-For"].isin(thousand_arr)]['IP'].resample('T').count()
graph_df = pd.concat([overall_by_minute, top_ip_by_minute, auth_by_minute, zh_by_minute, mozillia_by_minute, tc_by_minute, thousand_by_minute], axis=1)
graph_df.columns = ['overall', 'top_ip', 'auth', 'zh', 'mozillia', 'top_content', 'thousand']
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import numpy as np
hours = mdates.HourLocator()
Fmt = mdates.DateFormatter('%Y')
t = graph_df.index
a = graph_df['overall']
b = graph_df['top_ip']
c = graph_df['auth']
d = graph_df['zh']
e = graph_df['mozillia']
f = graph_df['top_content']
g = graph_df['thousand']
fig, ax = plt.subplots(figsize=(20, 10))
ax.plot(t, a, label='overall')
ax.plot(t, b, label='top_ip')
ax.plot(t, c, label='auth')
ax.plot(t, d, label='zh')
ax.plot(t, e, label='mozillia')
ax.plot(t, f, label='top_content')
ax.plot(t, g, label='thousand')
ax.set(xlabel='time (m)', ylabel='traffic')
ax.grid()
ax.xaxis.set_major_locator(hours)
ax.format_xdata = mdates.DateFormatter('%Y')
fig.autofmt_xdate()
plt.legend(loc=1)
plt.show()
fig, ax = plt.subplots(figsize=(20, 10))
ax.plot(t, a, label='overall')
ax.plot(t, b, label='top_ip')
ax.plot(t, c, label='auth')
ax.plot(t, d, label='zh')
ax.plot(t, e, label='mozillia')
ax.plot(t, f, label='top_content')
ax.plot(t, g, label='thousand')
ax.set(xlabel='time (m)', ylabel='traffic')
ax.grid()
ax.xaxis.set_major_locator(hours)
ax.format_xdata = mdates.DateFormatter('%Y')
fig.autofmt_xdate()
left = '2015-02-17 20:30:00'
right = '2015-02-18 00:30:00'
bottom = 0
top = 250
ax.set_xlim(left, right)
ax.set_ylim(bottom, top)
plt.legend(loc=1)
plt.show()
import urllib.request
import json
from pandas.io.json import json_normalize
with urllib.request.urlopen("https://cool-proxy.net/proxies.json") as url:
data = json.loads(url.read().decode())
proxy_df = json_normalize(data)
final_list = proxy_df[['ip','port']]
final_list.to_csv("output_ips.csv", encoding='utf-8', index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then import json and use json_normalize to convert the Headers data into columns of the csv. Merge this back with the original data.
Step2: Convert the time from unix epoch to human-readable form and set the index to the Timestamp.
Step3: Just for additional clarity for exploration, drop user-agent since it is the same as User-Agent, drop Headers since it has been parsed, and drop Timestamp because it is the same as the index.
Step4: What are the columns about?
Step5: I will use top_content later to graph against overall traffic.
Step6: In the chart above, we can see that there are several Content-Lengths which have unusually high rates of occurrence, all grouped in a range between 150-160 bytes.
Step7: Since IP and X-Forwarded-For appear to stay consistent in most 5-instance patterns I have observed, I will just use X-Forwarded-For to avoid the complication of the one outlier in IP.
Step8: Checking Subnets
Step9: Below I use resample and count to create a by-minute count of requests and then do the same for several subsets filtered by various clues to malicious behavior I have discovered while exploring the dataset.
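`resample('T')` buckets timestamps into one-minute bins and counts the rows in each; a stdlib equivalent on hypothetical epoch timestamps, for intuition only:

```python
from collections import Counter
from datetime import datetime, timezone

# Hypothetical Unix timestamps (stand-ins for the Timestamp column)
timestamps = [1424211000, 1424211010, 1424211065, 1424211130]

def minute_bucket(ts):
    # Truncate a Unix timestamp to the start of its minute
    dt = datetime.fromtimestamp(ts, tz=timezone.utc)
    return dt.replace(second=0, microsecond=0)

per_minute = Counter(minute_bucket(ts) for ts in timestamps)
```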
Step10: There are a number of observations to be made from this graph, and lots of suggestions about how the data might be further explored.
|
10,641
|
<ASSISTANT_TASK:>
Python Code:
# Set up feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.sql_advanced.ex3 import *
print("Setup Complete")
from google.cloud import bigquery
# Create a "Client" object
client = bigquery.Client()
# Construct a reference to the "github_repos" dataset
dataset_ref = client.dataset("github_repos", project="bigquery-public-data")
# API request - fetch the dataset
dataset = client.get_dataset(dataset_ref)
# Construct a reference to the "sample_commits" table
table_ref = dataset_ref.table("sample_commits")
# API request - fetch the table
sample_commits_table = client.get_table(table_ref)
# Preview the first five lines of the table
client.list_rows(sample_commits_table, max_results=5).to_dataframe()
# Print information on all the columns in the table
sample_commits_table.schema
# Write a query to find the answer
max_commits_query = ____
# Check your answer
q_1.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_1.hint()
#_COMMENT_IF(PROD)_
q_1.solution()
#%%RM_IF(PROD)%%
max_commits_query = """
SELECT committer.name AS committer_name, COUNT(*) AS num_commits
FROM `bigquery-public-data.github_repos.sample_commits`
WHERE committer.date >= '2016-01-01' AND committer.date < '2017-01-01'
GROUP BY committer_name
ORDER BY num_commits DESC
"""
q_1.check()
# Construct a reference to the "languages" table
table_ref = dataset_ref.table("languages")
# API request - fetch the table
languages_table = client.get_table(table_ref)
# Preview the first five lines of the table
client.list_rows(languages_table, max_results=5).to_dataframe()
# Print information on all the columns in the table
languages_table.schema
# Fill in the blank
num_rows = ____
# Check your answer
q_2.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_2.hint()
#_COMMENT_IF(PROD)_
q_2.solution()
#%%RM_IF(PROD)%%
num_rows = 6
q_2.check()
# Write a query to find the answer
pop_lang_query = ____
# Check your answer
q_3.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_3.hint()
#_COMMENT_IF(PROD)_
q_3.solution()
#%%RM_IF(PROD)%%
pop_lang_query = """
SELECT l.name as language_name, COUNT(*) as num_repos
FROM `bigquery-public-data.github_repos.languages`,
    UNNEST(language) AS l
GROUP BY language_name
ORDER BY num_repos DESC
"""
q_3.check()
# Your code here
all_langs_query = ____
# Check your answer
q_4.check()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_4.hint()
#_COMMENT_IF(PROD)_
q_4.solution()
#%%RM_IF(PROD)%%
all_langs_query = """
SELECT l.name, l.bytes
FROM `bigquery-public-data.github_repos.languages`,
    UNNEST(language) as l
WHERE repo_name = 'polyrabbit/polyglot'
ORDER BY l.bytes DESC
"""
q_4.check()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercises
Step2: Run the next code cell to print the table schema.
Step5: Write a query to find the individuals with the most commits in this table in 2016. Your query should return a table with two columns
Step6: 2) Look at languages!
Step7: Each row of the languages table corresponds to a different repository.
Step8: Assume for the moment that you have access to a table called sample_languages that contains only a very small subset of the rows from the languages table
Step11: 3) What's the most popular programming language?
Step14: 4) Which languages are used in the repository with the most languages?
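BigQuery's `UNNEST` flattens a repeated field into one row per element; the same GROUP BY / COUNT logic can be mimicked in plain Python over hypothetical repo records, just to see the semantics:

```python
from collections import Counter

# Miniature stand-in for the languages table (assumed records, for illustration)
repos = [
    {"repo_name": "r1", "language": [{"name": "Python"}, {"name": "C"}]},
    {"repo_name": "r2", "language": [{"name": "Python"}]},
    {"repo_name": "r3", "language": [{"name": "C"}, {"name": "Go"}]},
]

# UNNEST(language): one (repo, language) row per list element,
# then GROUP BY name / COUNT(*) / ORDER BY count DESC
num_repos = Counter(l["name"] for repo in repos for l in repo["language"])
ranking = num_repos.most_common()
```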
|
10,642
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import cobra
import cobra.test
# "ecoli" and "salmonella" are also valid arguments
model = cobra.test.create_test_model("textbook")
print(len(model.reactions))
print(len(model.metabolites))
print(len(model.genes))
model
model.reactions[29]
model.metabolites.get_by_id("atp_c")
model.reactions.EX_glc__D_e.bounds
pgi = model.reactions.get_by_id("PGI")
pgi
print(pgi.name)
print(pgi.reaction)
print(pgi.lower_bound, "< pgi <", pgi.upper_bound)
print(pgi.reversibility)
pgi.check_mass_balance()
pgi.add_metabolites({model.metabolites.get_by_id("h_c"): -1})
pgi.reaction
pgi.check_mass_balance()
pgi.subtract_metabolites({model.metabolites.get_by_id("h_c"): -1})
print(pgi.reaction)
print(pgi.check_mass_balance())
pgi.reaction = "g6p_c --> f6p_c + h_c + green_eggs + ham"
pgi.reaction
pgi.reaction = "g6p_c <=> f6p_c"
pgi.reaction
atp = model.metabolites.get_by_id("atp_c")
atp
print(atp.name)
print(atp.compartment)
atp.charge
print(atp.formula)
len(atp.reactions)
model.metabolites.get_by_id("g6p_c").reactions
gpr = pgi.gene_reaction_rule
gpr
pgi.genes
pgi_gene = model.genes.get_by_id("b4025")
pgi_gene
pgi_gene.reactions
pgi.gene_reaction_rule = "(spam or eggs)"
pgi.genes
pgi_gene.reactions
model.genes.get_by_id("spam")
cobra.manipulation.delete_model_genes(
model, ["spam"], cumulative_deletions=True)
print("after 1 KO: %4d < flux_PGI < %4d" % (pgi.lower_bound, pgi.upper_bound))
cobra.manipulation.delete_model_genes(
model, ["eggs"], cumulative_deletions=True)
print("after 2 KO: %4d < flux_PGI < %4d" % (pgi.lower_bound, pgi.upper_bound))
cobra.manipulation.undelete_model_genes(model)
print(pgi.lower_bound, "< pgi <", pgi.upper_bound)
model = cobra.test.create_test_model('textbook')
for reaction in model.reactions[:5]:
with model as model:
reaction.knock_out()
model.optimize()
print('%s blocked (bounds: %s), new growth rate %f' %
(reaction.id, str(reaction.bounds), model.objective.value))
[reaction.bounds for reaction in model.reactions[:5]]
print('original objective: ', model.objective.expression)
with model:
model.objective = 'ATPM'
print('print objective in first context:', model.objective.expression)
with model:
model.objective = 'ACALD'
print('print objective in second context:', model.objective.expression)
print('objective after exiting second context:',
model.objective.expression)
print('back to original objective:', model.objective.expression)
with model as inner:
    inner.reactions.PFK.knock_out()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The reactions, metabolites, and genes attributes of the cobrapy model are a special type of list called a cobra.DictList, and each one is made up of cobra.Reaction, cobra.Metabolite and cobra.Gene objects respectively.
Step2: When using Jupyter notebook this type of information is rendered as a table.
Step3: Just like a regular list, objects in the DictList can be retrieved by index. For example, to get the 30th reaction in the model (at index 29 because of 0-indexing)
Step4: Additionally, items can be retrieved by their id using the DictList.get_by_id() function. For example, to get the cytosolic atp metabolite object (the id is "atp_c"), we can do the following
Step5: As an added bonus, users with an interactive shell such as IPython will be able to tab-complete to list elements inside a list. While this is not recommended behavior for most code because of the possibility for characters like "-" inside ids, this is very useful while in an interactive prompt
Step6: Reactions
Step7: We can view the full name and reaction catalyzed as strings
Step8: We can also view reaction upper and lower bounds. Because the pgi.lower_bound < 0, and pgi.upper_bound > 0, pgi is reversible.
Step9: We can also ensure the reaction is mass balanced. This function will return elements which violate mass balance. If it comes back empty, then the reaction is mass balanced.
Step10: In order to add a metabolite, we pass in a dict with the metabolite object and its coefficient
Step11: The reaction is no longer mass balanced
Step12: We can remove the metabolite, and the reaction will be balanced once again.
Step13: It is also possible to build the reaction from a string. However, care must be taken when doing this to ensure reaction id's match those in the model. The direction of the arrow is also used to update the upper and lower bounds.
Step14: Metabolites
Step15: We can print out the metabolite name and compartment (cytosol in this case) directly as string.
Step16: We can see that ATP is a charged molecule in our model.
Step17: We can see the chemical formula for the metabolite as well.
Step18: The reactions attribute gives a frozenset of all reactions using the given metabolite. We can use this to count the number of reactions which use atp.
Step19: A metabolite like glucose 6-phosphate will participate in fewer reactions.
Step20: Genes
Step21: Corresponding gene objects also exist. These objects are tracked by the reactions itself, as well as by the model
Step22: Each gene keeps track of the reactions it catalyzes
Step23: Altering the gene_reaction_rule will create new gene objects if necessary and update all relationships.
Step24: Newly created genes are also added to the model
Step25: The delete_model_genes function will evaluate the GPR and set the upper and lower bounds to 0 if the reaction is knocked out. This function can preserve existing deletions or reset them using the cumulative_deletions flag.
Step26: The undelete_model_genes can be used to reset a gene deletion
Step27: Making changes reversibly using models as contexts
Step28: If we look at those knocked reactions, see that their bounds have all been reverted.
Step29: Nested contexts are also supported
Step30: Most methods that modify the model are supported like this including adding and removing reactions and metabolites and setting the objective. Supported methods and functions mention this in the corresponding documentation.
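The DictList behavior described above — ordinary list indexing plus lookup by id — is easy to picture with a small stand-alone sketch (this is not cobrapy's implementation, just the idea it exposes):

```python
class DictList(list):
    """A list whose elements can also be fetched by their .id attribute."""
    def get_by_id(self, id_):
        for item in self:
            if item.id == id_:
                return item
        raise KeyError(id_)

class Metabolite:
    def __init__(self, id_):
        self.id = id_

mets = DictList(Metabolite(i) for i in ["atp_c", "g6p_c", "f6p_c"])
print(mets[0].id)                  # index access, like model.metabolites[0]
print(mets.get_by_id("g6p_c").id)  # id access, like get_by_id("atp_c")
```

cobrapy's real DictList also backs the id lookup with a dictionary for speed, but the interface is the same.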
|
10,643
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
%matplotlib inline
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.decomposition import PCA
data = pd.read_csv('abalone.data', names=['Sex', 'Length', 'Diameter', 'Height', 'Whole weight', 'Shucked weight', 'Viscera weight', 'Shell weight', 'Rings'])
data.head()
data = pd.get_dummies(data)
data.head()
data.describe()
corr = data.corr()
fig, ax = plt.subplots(figsize=(18,10))
sns.heatmap(corr)
corr
fig, ((ax1, ax2), (ax3, ax4),(ax5, ax6),(ax7,ax8)) = plt.subplots(4, 2, figsize = (15,10), sharex=False)
axs = [ax1,ax2,ax3,ax4,ax5,ax6,ax7,ax8]
plt.tight_layout()
for n in range(0, 8):
axs[n].hist(data[data.columns[n]], bins=30)
axs[n].set_title(data.columns[n], fontsize=10)
plt.figure(figsize=(18, 10))
plt.hist(data['Rings'], bins=30)
plt.title("Rings", fontsize=16)
plt.show()
X_train, X_test, y_train, y_test = train_test_split(data.drop(columns=['Rings']), data['Rings'], test_size=.2, random_state=17)
sc = StandardScaler().fit(X_train)
X_train, X_test = sc.transform(X_train), sc.transform(X_test)
def approx(y_pred, y_true):
predictions = list(zip(y_pred, y_true))
return [len(list(filter(lambda a: abs(a[0] - a[1]) <= d, predictions))) / len(predictions) for d in [0.5, 1, 2]]
def score(model):
model.fit(X_train, y_train)
print('Train score: {}'.format(approx(model.predict(X_train), y_train)))
print('Test score: {}'.format(approx(model.predict(X_test), y_test)))
def grid_search(model, params):
gs = GridSearchCV(model, params)
return gs.fit(X_train, y_train)
score(KNeighborsClassifier(29))
score(SVC(kernel='linear'))
import graphviz
from sklearn.tree import export_graphviz
dt = DecisionTreeClassifier(max_depth=5)
score(dt)
dot_data = export_graphviz(dt, out_file=None,
feature_names=data.drop(columns=['Rings']).columns,
class_names=[str(i + 1) for i in range(29)],
filled=True, rounded=True,
special_characters=True)
graph = graphviz.Source(dot_data)
graph
score(RandomForestClassifier(max_depth=4, n_estimators=83, max_features=1))
score(MLPClassifier(alpha=2))
score(AdaBoostClassifier())
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
score(LinearRegression())
score(SVR(C=250, gamma=0.01))
score(SVR(kernel='poly', C=100, degree=4))
score(DecisionTreeRegressor(max_depth=6, criterion="mse", min_samples_leaf=20))
score(MLPRegressor(alpha=1e-2))
import urllib.request
import tempfile
import tensorflow as tf
FLAGS = None
LEARNING_RATE = 0.001
tf.logging.set_verbosity(tf.logging.INFO)
def maybe_download(train_data=None, test_data=None, predict_data=None):
    """Maybe downloads training data and returns train and test file names."""
if train_data:
train_file_name = train_data
else:
train_file = tempfile.NamedTemporaryFile(delete=False)
urllib.request.urlretrieve(
"http://download.tensorflow.org/data/abalone_train.csv",
train_file.name)
train_file_name = train_file.name
train_file.close()
print("Training data is downloaded to %s" % train_file_name)
if test_data:
test_file_name = test_data
else:
test_file = tempfile.NamedTemporaryFile(delete=False)
urllib.request.urlretrieve(
"http://download.tensorflow.org/data/abalone_test.csv", test_file.name)
test_file_name = test_file.name
test_file.close()
print("Test data is downloaded to %s" % test_file_name)
if predict_data:
predict_file_name = predict_data
else:
predict_file = tempfile.NamedTemporaryFile(delete=False)
urllib.request.urlretrieve(
"http://download.tensorflow.org/data/abalone_predict.csv",
predict_file.name)
predict_file_name = predict_file.name
predict_file.close()
print("Prediction data is downloaded to %s" % predict_file_name)
return train_file_name, test_file_name, predict_file_name
def model_fn(features, labels, mode, params):
first_hidden_layer = tf.layers.dense(features["x"], 10, activation=tf.nn.relu)
second_hidden_layer = tf.layers.dense(
first_hidden_layer, 10, activation=tf.nn.relu)
output_layer = tf.layers.dense(second_hidden_layer, 1)
predictions = tf.reshape(output_layer, [-1])
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(
mode=mode,
predictions={"ages": predictions})
loss = tf.losses.mean_squared_error(labels, predictions)
optimizer = tf.train.GradientDescentOptimizer(
learning_rate=params["learning_rate"])
train_op = optimizer.minimize(
loss=loss, global_step=tf.train.get_global_step())
eval_metric_ops = {
"rmse": tf.metrics.root_mean_squared_error(
tf.cast(labels, tf.float64), predictions)
}
return tf.estimator.EstimatorSpec(
mode=mode,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops)
abalone_train, abalone_test, abalone_predict = maybe_download()
training_set = tf.contrib.learn.datasets.base.load_csv_without_header(
filename=abalone_train, target_dtype=np.int, features_dtype=np.float64)
test_set = tf.contrib.learn.datasets.base.load_csv_without_header(
filename=abalone_test, target_dtype=np.int, features_dtype=np.float64)
prediction_set = tf.contrib.learn.datasets.base.load_csv_without_header(
filename=abalone_predict, target_dtype=np.int, features_dtype=np.float64)
model_params = {"learning_rate": LEARNING_RATE}
nn = tf.estimator.Estimator(model_fn=model_fn, params=model_params)
train_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": np.array(training_set.data)},
y=np.array(training_set.target),
num_epochs=None,
shuffle=True)
nn.train(input_fn=train_input_fn, steps=5000)
test_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": np.array(test_set.data)},
y=np.array(test_set.target),
num_epochs=1,
shuffle=False)
ev = nn.evaluate(input_fn=test_input_fn)
print("Loss: %s" % ev["loss"])
print("Root Mean Squared Error: %s" % ev["rmse"])
predict_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": prediction_set.data},
num_epochs=1,
shuffle=False)
predictions = nn.predict(input_fn=predict_input_fn)
for i, p in enumerate(predictions):
print("Prediction %s: %s" % (i + 1, p["ages"]))
t_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": test_set.data},
num_epochs=1,
shuffle=False)
t_pred = nn.predict(input_fn=t_fn)
t_pred = list(map(lambda x: x['ages'], t_pred))
approx(t_pred, test_set.target)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's convert categorical feature 'Sex' to numerical via one-hot encoding
Step2: Analysis
Step3: Classification
Step4: K-Neighbors
Step5: SVM + linear kernel
Step6: Decision tree
Step7: Random forest
Step8: Multi-layer perceptron
Step9: AdaBoost
Step10: Regression
Step11: Linear regression
Step12: SVM + RBF kernel
Step13: SVM + polynomial kernel
Step14: Decision tree
Step15: Multi-layer perceptron
Step17: TensorFlow
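The scoring used throughout this notebook — the fraction of predictions within ±0.5, ±1 and ±2 rings of the true age — can be checked by hand on a tiny example (the numbers below are invented for illustration):

```python
def approx(y_pred, y_true, tolerances=(0.5, 1, 2)):
    # Fraction of predictions whose absolute error is within each tolerance.
    pairs = list(zip(y_pred, y_true))
    return [sum(abs(p - t) <= d for p, t in pairs) / len(pairs)
            for d in tolerances]

# Errors are 0.4, 2.0 and 0.4 rings, so 2/3 fall within +-0.5 and +-1,
# and all three fall within +-2.
print(approx([10, 12, 9.4], [10.4, 14, 9]))
```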
|
10,644
|
<ASSISTANT_TASK:>
Python Code:
import mne
import numpy as np
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
data_path = sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
evokeds = mne.read_evokeds(meg_path / 'sample_audvis-ave.fif')
left_auditory = evokeds[0].apply_baseline()
fwd = mne.read_forward_solution(
meg_path / 'sample_audvis-meg-eeg-oct-6-fwd.fif')
mne.convert_forward_solution(fwd, surf_ori=True, copy=False)
noise_cov = mne.read_cov(meg_path / 'sample_audvis-cov.fif')
subject = 'sample'
subjects_dir = data_path / 'subjects'
trans_fname = meg_path / 'sample_audvis_raw-trans.fif'
lh = fwd['src'][0] # Visualize the left hemisphere
verts = lh['rr'] # The vertices of the source space
tris = lh['tris'] # Groups of three vertices that form triangles
dip_pos = lh['rr'][lh['vertno']] # The position of the dipoles
dip_ori = lh['nn'][lh['vertno']]
dip_len = len(dip_pos)
dip_times = [0]
white = (1.0, 1.0, 1.0) # RGB values for a white color
actual_amp = np.ones(dip_len) # misc amp to create Dipole instance
actual_gof = np.ones(dip_len) # misc GOF to create Dipole instance
dipoles = mne.Dipole(dip_times, dip_pos, actual_amp, dip_ori, actual_gof)
trans = mne.read_trans(trans_fname)
fig = mne.viz.create_3d_figure(size=(600, 400), bgcolor=white)
coord_frame = 'mri'
# Plot the cortex
mne.viz.plot_alignment(
subject=subject, subjects_dir=subjects_dir, trans=trans, surfaces='white',
coord_frame=coord_frame, fig=fig)
# Mark the position of the dipoles with small red dots
mne.viz.plot_dipole_locations(
dipoles=dipoles, trans=trans, mode='sphere', subject=subject,
subjects_dir=subjects_dir, coord_frame=coord_frame, scale=7e-4, fig=fig)
mne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.25)
fig = mne.viz.create_3d_figure(size=(600, 400))
# Plot the cortex
mne.viz.plot_alignment(
subject=subject, subjects_dir=subjects_dir, trans=trans,
surfaces='white', coord_frame='head', fig=fig)
# Show the dipoles as arrows pointing along the surface normal
mne.viz.plot_dipole_locations(
dipoles=dipoles, trans=trans, mode='arrow', subject=subject,
subjects_dir=subjects_dir, coord_frame='head', scale=7e-4, fig=fig)
mne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.1)
# Compute the source estimate for the left auditory condition in the sample
# dataset.
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=True)
stc = apply_inverse(left_auditory, inv, pick_ori=None)
# Visualize it at the moment of peak activity.
_, time_max = stc.get_peak(hemi='lh')
brain_fixed = stc.plot(surface='white', subjects_dir=subjects_dir,
initial_time=time_max, time_unit='s', size=(600, 400))
mne.viz.set_3d_view(figure=brain_fixed, focalpoint=(0., 0., 50))
fig = mne.viz.create_3d_figure(size=(600, 400))
# Plot the cortex
mne.viz.plot_alignment(
subject=subject, subjects_dir=subjects_dir, trans=trans,
surfaces='white', coord_frame='head', fig=fig)
# Show the three dipoles defined at each location in the source space
mne.viz.plot_alignment(
subject=subject, subjects_dir=subjects_dir, trans=trans, fwd=fwd,
surfaces='white', coord_frame='head', fig=fig)
mne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.1)
# Make an inverse operator with loose dipole orientations
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,
loose=1.0)
# Compute the source estimate, indicate that we want a vector solution
stc = apply_inverse(left_auditory, inv, pick_ori='vector')
# Visualize it at the moment of peak activity.
_, time_max = stc.magnitude().get_peak(hemi='lh')
brain_mag = stc.plot(subjects_dir=subjects_dir, initial_time=time_max,
time_unit='s', size=(600, 400), overlay_alpha=0)
mne.viz.set_3d_view(figure=brain_mag, focalpoint=(0., 0., 50))
# Set loose to 0.2, the default value
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,
loose=0.2)
stc = apply_inverse(left_auditory, inv, pick_ori='vector')
# Visualize it at the moment of peak activity.
_, time_max = stc.magnitude().get_peak(hemi='lh')
brain_loose = stc.plot(subjects_dir=subjects_dir, initial_time=time_max,
time_unit='s', size=(600, 400), overlay_alpha=0)
mne.viz.set_3d_view(figure=brain_loose, focalpoint=(0., 0., 50))
# Only retain vector magnitudes
stc = apply_inverse(left_auditory, inv, pick_ori=None)
# Visualize it at the moment of peak activity
_, time_max = stc.get_peak(hemi='lh')
brain = stc.plot(surface='white', subjects_dir=subjects_dir,
initial_time=time_max, time_unit='s', size=(600, 400))
mne.viz.set_3d_view(figure=brain, focalpoint=(0., 0., 50))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The source space
Step2: Fixed dipole orientations
Step3: Restricting the dipole orientations in this manner leads to the following
Step4: The direction of the estimated current is now restricted to two directions
Step5: When computing the source estimate, the activity at each of the three dipoles
Step6: Limiting orientations, but not fixing them
Step7: Discarding dipole orientation information
|
10,645
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
class Plan:
def __init__(self): pass
# Plan 1 = Cigna HDHP/HSA
p1 = Plan()
p1.family_deductible = 4000.00 # Same deductible for both family and individual
p1.individual_deductible = 4000.00
p1.family_oopmax = 6000.00 # Same out-of-pocket max for family and individual
p1.individual_oopmax = 6000.00
p1.premium_monthly = 46*2 # $46 * 2 times/month
p1.hsa_contribution = 2400.00
p1.coinsurance_rate = 0.1
# Plan 2 = Cigna PPO $1000
p2 = Plan()
p2.family_deductible = 2000.00
p2.individual_deductible = 1000.00
p2.family_oopmax = 8000.00
p2.individual_oopmax = 4000.00
p2.premium_monthly = 46*2 # $46 * 2 times/month
p2.hsa_contribution = 0.0
p2.coinsurance_rate = 0.2
# Plan 3 = Cigna PPO $500
p3 = Plan()
p3.family_deductible = 1000.00
p3.individual_deductible = 500.00
p3.family_oopmax = 7000.00
p3.individual_oopmax = 3500.00
p3.premium_monthly = 103*2 # $103 * 2 times/month
p3.hsa_contribution = 0.0
p3.coinsurance_rate = 0.1
# For the purposes of this estimation, we are assuming the deductible
# is always larger than the HSA contribution amount
def apply_deductible_and_hsa(cost, deductible, hsa):
cost_to_you = 0
cost_remaining = 0
# Apply HSA
deductible_minus_hsa = deductible - hsa
if cost <= hsa:
cost_to_you = 0
cost_remaining = 0
elif cost <= deductible:
cost_to_you = cost - hsa
cost_remaining = 0
elif cost > deductible:
cost_to_you = deductible_minus_hsa
cost_remaining = cost - deductible
return (cost_to_you, cost_remaining)
def apply_coinsurance(cost, coinsurance_rate):
return cost * coinsurance_rate
def apply_oopmax(cost, oopmax):
if cost >= oopmax:
return oopmax
else:
return cost
def setup_graph(title='', x_label='', y_label='', fig_size=None):
fig = plt.figure()
if fig_size != None:
fig.set_size_inches(fig_size[0], fig_size[1])
ax = fig.add_subplot(111)
ax.set_title(title)
ax.set_xlabel(x_label)
ax.set_ylabel(y_label)
def individual_cost(plan, gross_cost):
(cost_to_you, cost_remaining) = apply_deductible_and_hsa(gross_cost,
plan.individual_deductible,
plan.hsa_contribution)
cost_to_you += apply_coinsurance(cost_remaining, plan.coinsurance_rate)
cost_to_you = apply_oopmax(cost_to_you, plan.individual_oopmax)
# Apply yearly premiums - note that the out-of-pocket max doesn't include
# the premiums; thus, we apply them after applying out-of-pocket max.
cost_to_you += (plan.premium_monthly * 12)
return cost_to_you
def family_cost(plan, gross_cost):
(cost_to_you, cost_remaining) = apply_deductible_and_hsa(gross_cost,
plan.family_deductible,
plan.hsa_contribution)
cost_to_you += apply_coinsurance(cost_remaining, plan.coinsurance_rate)
cost_to_you = apply_oopmax(cost_to_you, plan.family_oopmax)
# Apply yearly premiums - note that the out-of-pocket max doesn't include
# the premiums; thus, we apply them after applying out-of-pocket max.
cost_to_you += (plan.premium_monthly * 12)
return cost_to_you
# Should be the monthly premium times 12 (to make up the yearly premium).
family_cost(p1, 0)
p1.premium_monthly * 12.0
family_cost(p2, 0)
p2.premium_monthly * 12.0
family_cost(p3, 0)
p3.premium_monthly * 12.0
# Should be monthly premium times 12
family_cost(p1, 2400) == p1.premium_monthly * 12
(p1.premium_monthly * 12) + \
(p1.family_deductible - p1.hsa_contribution) + \
(6000 - p1.family_deductible) * p1.coinsurance_rate
family_cost(p1, 6000)
# Calculate costs
gross_costs = range(0, 70000)
p1_costs = [family_cost(p1, cost) for cost in gross_costs]
p2_costs = [family_cost(p2, cost) for cost in gross_costs]
p3_costs = [family_cost(p3, cost) for cost in gross_costs]
# Do graph
setup_graph(title='Family costs', x_label='Gross cost', y_label='Cost to you', fig_size=(12,7))
ax = plt.subplot(1,1,1)
p1_graph, = ax.plot(gross_costs, p1_costs, label="Cigna HDHP/HSA")
p2_graph, = ax.plot(gross_costs, p2_costs, label="Cigna PPO $1000")
p3_graph, = ax.plot(gross_costs, p3_costs, label="Cigna PPO $500")
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels, loc='upper left')
plt.show()
# Calculate costs
gross_costs = range(0, 50000)
p1_costs = [individual_cost(p1, cost) for cost in gross_costs]
p2_costs = [individual_cost(p2, cost) for cost in gross_costs]
p3_costs = [individual_cost(p3, cost) for cost in gross_costs]
# Do graph
setup_graph(title='Individual costs', x_label='Gross cost', y_label='Cost to you', fig_size=(12,7))
ax = plt.subplot(1,1,1)
p1_graph, = ax.plot(gross_costs, p1_costs, label="Cigna HDHP/HSA")
p2_graph, = ax.plot(gross_costs, p2_costs, label="Cigna PPO $1000")
p3_graph, = ax.plot(gross_costs, p3_costs, label="Cigna PPO $500")
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels, loc='upper left')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Helper functions
Step2: Plan cost functions
Step3: Sanity Tests
Step4: Cost less than HSA
Step5: Cost greater than HSA and deductible
Step6: Family Cost graph
Step7: Individual Cost
|
10,646
|
<ASSISTANT_TASK:>
Python Code:
def maximum(x, y):
if x > y:
return x
else:
return y
assert maximum(3, 3) == 3
assert maximum(1, 2) == 2
assert maximum(3, 2) == 3
def max_of_three(x, y, z):
    if x >= y and x >= z:
        return x
    elif y >= z:
        return y
    else:
        return z
assert max_of_three(1, 2, 3) == 3
assert max_of_three(1, 1, 2) == 2
assert max_of_three(2, 1 , .5) == 2
assert max_of_three(0, 0, 0) == 0
def length(obj):
len = 0
for _ in obj:
len += 1
return len
assert length([1, 2, 3]) == 3
assert length('this is some string') == 19
def is_vowel(char):
return char in 'aeiou'
assert is_vowel('t') == False
assert is_vowel('a') == True
def accumulate(obj):
res = 0
for num in obj:
res += num
return res
def multiply(obj):
res = 1
for num in obj:
res *= num
return res
assert accumulate([1, 2, 3, 4]) == 10
assert multiply([1, 2, 3, 4]) == 24
from operator import add, mul
def calc(obj, func):
res = None
if func == add:
res = 0
if func == mul:
res = 1
for num in obj:
res = func(res, num)
return res
print(calc([1, 2, 3, 4], mul))
print(calc([1, 2, 3, 4], add))
def reverse(s):
    return s[::-1]
assert reverse('I am testing') == 'gnitset ma I'
def is_palindrome(s):
return s == reverse(s)
assert is_palindrome('radar') == True
assert is_palindrome('sonar') == False
def is_member(x, a):
for v in x:
if v == a:
return True
return False
assert is_member([1, 2, 3], 4) == False
assert is_member([1, 2, 3], 2) == True
def histogram(obj):
for n in obj:
print('*' * n, '\n')
histogram([4, 9, 7])
def filter_long_words(words, n):
return [word for word in words if len(word) >= n]
assert len(filter_long_words('this is some sentence'.split(), 3)) == 3
def is_pangram(sentence):
alphabet = set('a b c d e f g h i j k l m n o p q r s t u v w x y z'.split())
for char in sentence:
try:
alphabet.remove(char)
except KeyError:
pass
if len(alphabet) == 0:
return True
else:
return False
assert is_pangram('foo') == False
assert is_pangram('The quick brown fox jumps over the lazy dog') == True
def translate(eng):
dictionary = {
"may": "möge",
"the": "die",
"force": "macht",
"be": "sein",
"with": "mit",
"you": "dir"
}
ger = []
for word in eng:
if word in dictionary:
ger.append(dictionary[word])
else:
ger.append(word)
return ger
assert translate("may the force be with you".split()) == ['möge', 'die', 'macht', 'sein', 'mit', 'dir']
def rot13(msg):
key = {'a':'n', 'b':'o', 'c':'p', 'd':'q', 'e':'r', 'f':'s', 'g':'t', 'h':'u',
'i':'v', 'j':'w', 'k':'x', 'l':'y', 'm':'z', 'n':'a', 'o':'b', 'p':'c',
'q':'d', 'r':'e', 's':'f', 't':'g', 'u':'h', 'v':'i', 'w':'j', 'x':'k',
'y':'l', 'z':'m', 'A':'N', 'B':'O', 'C':'P', 'D':'Q', 'E':'R', 'F':'S',
'G':'T', 'H':'U', 'I':'V', 'J':'W', 'K':'X', 'L':'Y', 'M':'Z', 'N':'A',
'O':'B', 'P':'C', 'Q':'D', 'R':'E', 'S':'F', 'T':'G', 'U':'H', 'V':'I',
'W':'J', 'X':'K', 'Y':'L', 'Z':'M'}
res = []
for char in msg:
if char in key:
res.append(key[char])
else:
res.append(char)
return ''.join(res)
text = 'this is some text'
assert rot13(rot13(text)) == text
from collections import defaultdict
import string
def char_freq_table(filename):
char_counter = defaultdict(int)
with open(filename) as fh:
text = fh.read()
for character in text:
char_counter[character] += 1
return char_counter
frequencies = char_freq_table('material/jedi.txt')
with open('material/jedi_frequencies.txt', 'w') as fh:
for k, v in frequencies.items():
if k in string.printable.replace('\n', '') :
fh.writelines('| {} || {} |\n'.format(k, v))
with open('material/jedi_frequencies.txt') as fh:
print(fh.read())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Define a function max_of_three that takes three numbers as arguments and returns the largest of them.
Step2: 3. Define a function length that computes the length of a given list or string. (It is true that Python has the len() function built in, but writing it yourself is nevertheless a good exercise.)
Step3: 4. Write a function is_vowel that takes a character (i.e. a string of length 1) and returns True if it is a vowel, False otherwise.
Step4: 5. Define a function accumulate and a function multiply that sums and multiplies (respectively) all the numbers in a list of numbers. For example, sum([1, 2, 3, 4]) should return 10, and multiply([1, 2, 3, 4]) should return 24.
Step5: A more elegant and generic solution is given hereafter. It uses a functional approach, as the function that is to be calculated is passed to the function
Step6: 6. Define a function reverse that computes the reversal of a string. For example, reverse("I am testing") should return the string "gnitset ma I".
Step7: 7. Define a function is_palindrome that recognizes palindromes (i.e. words that look the same written backwards). For example, is_palindrome("radar") should return True.
Step8: 8. Write a function is_member that takes a value (i.e. a number, string, etc) x and a list of values a, and returns True if x is a member of a, False otherwise. (Note that this is exactly what the in operator does, but for the sake of the exercise you should pretend Python did not have this operator.)
Step9: 9. Define a procedure histogram that takes a list of integers and prints a histogram to the screen. For example, histogram([4, 9, 7]) should print the following
Step10: 10. Write a function filter_long_words that takes a list of words and an integer n and returns the list of words that are longer than n.
Step11: 11. A pangram is a sentence that contains all the letters of the English alphabet at least once, for example
Step12: 12. Represent a small bilingual lexicon as a Python dictionary in the following fashion {"may"
Step13: 13. In cryptography, a Caesar cipher is a very simple encryption techniques in which each letter in the plain text is replaced by a letter some fixed number of positions down the alphabet. For example, with a shift of 3, A would be replaced by D, B would become E, and so on. The method is named after Julius Caesar, who used it to communicate with his generals. ROT-13 ("rotate by 13 places") is a widely used example of a Caesar cipher where the shift is 13. In Python, the key for ROT-13 may be represented by means of the following dictionary
Step14: 14. Write a procedure char_freq_table that accepts the file name material/jedi.txt as argument, builds a frequency listing of the characters contained in the file, and prints a sorted and nicely formatted character frequency table to the screen.
|
10,647
|
<ASSISTANT_TASK:>
Python Code:
import tohu
from tohu.v4.primitive_generators import *
from tohu.v4.dispatch_generators import *
from tohu.v4.utils import print_generated_sequence
print(f'Tohu version: {tohu.__version__}')
g = Constant('quux')
print_generated_sequence(g, num=10, seed=12345)
g1 = Boolean()
g2 = Boolean(p=0.8)
print_generated_sequence(g1, num=20, seed=12345)
print_generated_sequence(g2, num=20, seed=99999)
g = Integer(low=100, high=200)
print_generated_sequence(g, num=10, seed=12345)
g = Float(low=2.3, high=4.2)
print_generated_sequence(g, num=10, sep='\n', fmt='.12f', seed=12345)
g = HashDigest(length=6)
print_generated_sequence(g, num=10, seed=12345)
g = HashDigest(length=6, uppercase=False)
print_generated_sequence(g, num=10, seed=12345)
g = HashDigest(length=10, as_bytes=True)
print_generated_sequence(g, num=5, seed=12345, sep='\n')
g1 = NumpyRandomGenerator(method="normal", loc=3.0, scale=5.0)
g2 = NumpyRandomGenerator(method="poisson", lam=30)
g3 = NumpyRandomGenerator(method="exponential", scale=0.3)
g1.reset(seed=12345); print_generated_sequence(g1, num=4)
g2.reset(seed=12345); print_generated_sequence(g2, num=15)
g3.reset(seed=12345); print_generated_sequence(g3, num=4)
g = FakerGenerator(method='name')
print_generated_sequence(g, num=8, seed=12345)
g = FakerGenerator(method='address')
print_generated_sequence(g, num=8, seed=12345, sep='\n---\n')
seq = ['a', 'b', 'c', 'd', 'e']
g = IterateOver(seq)
g.reset()
print([x for x in g])
print([x for x in g])
g.reset()
print([x for x in g])
some_items = ['aa', 'bb', 'cc', 'dd', 'ee']
g = SelectOne(some_items)
print_generated_sequence(g, num=30, seed=12345)
g = SelectOne(some_items, p=[0.1, 0.05, 0.7, 0.03, 0.12])
print_generated_sequence(g, num=30, seed=99999)
g = Timestamp(start='1998-03-01 00:02:00', end='1998-03-01 00:02:15')
print_generated_sequence(g, num=10, sep='\n', seed=99999)
g = Timestamp(start='2018-02-14', end='2018-02-18')
print_generated_sequence(g, num=5, sep='\n', seed=12345)
g = Timestamp(date='2018-01-01')
print_generated_sequence(g, num=5, sep='\n', seed=12345)
g.reset(seed=12345)
[next(g), next(g), next(g)]
h = Timestamp(date='2018-01-01').strftime('%-d %b %Y, %H:%M (%a)')
h.reset(seed=12345)
[next(h), next(h), next(h)]
g = CharString(length=15)
print_generated_sequence(g, num=5, seed=12345)
print_generated_sequence(g, num=5, seed=99999)
g = CharString(length=12, charset="ABCDEFG")
print_generated_sequence(g, num=5, sep='\n', seed=12345)
g1 = CharString(length=12, charset="<lowercase>")
g2 = CharString(length=12, charset="<alphanumeric_uppercase>")
print_generated_sequence(g1, num=5, sep='\n', seed=12345); print()
print_generated_sequence(g2, num=5, sep='\n', seed=12345)
g = DigitString(length=15)
print_generated_sequence(g, num=5, seed=12345)
print_generated_sequence(g, num=5, seed=99999)
g = Sequential(prefix='Foo_', digits=3)
g.reset()
print_generated_sequence(g, num=5)
print_generated_sequence(g, num=5)
print()
g.reset()
print_generated_sequence(g, num=5)
g.reset(seed=12345); print_generated_sequence(g, num=5)
g.reset(seed=99999); print_generated_sequence(g, num=5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Constant
Step2: Boolean
Step3: Integer
Step4: Float
Step5: HashDigest
Step6: HashDigest hex strings (lowercase)
Step7: HashDigest byte strings
Step8: NumpyRandomGenerator
Step9: FakerGenerator
Step10: Example
Step11: IterateOver
Step12: SelectOne
Step13: By default, all possible values are chosen with equal probability, but this can be changed by passing a distribution as the parameter p.
Step14: We can see that the item 'cc' has the highest chance of being selected (70%), followed by 'ee' and 'aa' (12% and 10%, respectively).
Step15: If start or end are dates of the form YYYY-MM-DD (without the exact HH:MM:SS part), they are interpreted as spanning those full days
Step16: For convenience, one can also pass a single date, which will produce timestamps during this particular date.
Step17: Note that the generated items are datetime objects (even though they appear as strings when printed above).
Step18: We can use the .strftime() method to create another generator which returns timestamps as strings instead of datetime objects.
Step19: CharString
Step20: It is possible to explicitly specify the character set.
Step21: There are also a few pre-defined character sets.
Step22: DigitString
Step23: Sequential
Step24: Calling reset() on the generator makes the numbering start from 1 again.
Step25: Note that the method Sequential.reset() supports the seed argument for consistency with other generators, but its value is ignored - the generator is simply reset to its initial value. This is illustrated here
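The reset-ignores-seed behaviour described above can be illustrated with a minimal stand-in class (a sketch only, not the real tohu Sequential implementation; all names here are hypothetical):

```python
import itertools

class MiniSequential:
    """Toy stand-in for the behaviour described above (not tohu's Sequential)."""
    def __init__(self, prefix, digits):
        self.prefix, self.digits = prefix, digits
        self.reset()

    def reset(self, seed=None):
        # seed is accepted for API symmetry with other generators but ignored
        self._counter = itertools.count(start=1)
        return self

    def __next__(self):
        return f"{self.prefix}{next(self._counter):0{self.digits}d}"

g = MiniSequential(prefix="Foo_", digits=3)
first = [next(g), next(g)]
g.reset(seed=99999)                 # seed ignored: numbering restarts at 1
after_reset = [next(g), next(g)]
print(first, after_reset)           # ['Foo_001', 'Foo_002'] ['Foo_001', 'Foo_002']
```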
|
10,648
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
from os.path import join
from pylab import rcParams
import matplotlib.pyplot as plt
rcParams['figure.figsize'] = (13, 6)
plt.style.use('ggplot')
#import nilmtk
from nilmtk import DataSet, TimeFrame, MeterGroup, HDFDataStore
from nilmtk.disaggregate.hart_85 import Hart85
from nilmtk.disaggregate import CombinatorialOptimisation
from nilmtk.utils import print_dict, show_versions
from nilmtk.metrics import f1_score
#import seaborn as sns
#sns.set_palette("Set3", n_colors=12)
import warnings
warnings.filterwarnings("ignore") #suppress warnings, comment out if warnings required
#uncomment if required
#show_versions()
data_dir = '/Users/GJWood/nilm_gjw_data/HDF5/'
gjw = DataSet(join(data_dir, 'nilm_gjw_data.hdf5'))
print('loaded ' + str(len(gjw.buildings)) + ' buildings')
building_number=1
gjw.set_window('2015-07-12 00:00:00', '2015-07-16 00:00:00')
elec = gjw.buildings[building_number].elec
mains = elec.mains()
house = elec['fridge'] #only one meter so any selection will do
df = next(house.load()) #load the first chunk of data into a dataframe (Python 3: next() instead of .next())
df.info() #check that the data is what we want (optional)
#note the data has two columns and a time index
plotdata = df.ix['2015-07-12 00:00:00': '2015-07-16 00:00:00']
plotdata.plot()
plt.title("Raw Mains Usage")
plt.ylabel("Power (W)")
plt.xlabel("Time");
plt.scatter(plotdata[('power','active')],plotdata[('power','reactive')])
plt.title("Raw Mains Usage Signature Space")
plt.ylabel("Reactive Power (VAR)")
plt.xlabel("Active Power (W)");
h = Hart85()
h.train(mains,cols=[('power','active'),('power','reactive')],min_tolerance=100,noise_level=70,buffer_size=20,state_threshold=15)
plt.scatter(h.steady_states[('active average')],h.steady_states[('reactive average')])
plt.scatter(h.centroids[('power','active')],h.centroids[('power','reactive')],marker='x',c=(1.0, 0.0, 0.0))
plt.legend(['Steady states','Centroids'],loc=4)
plt.title("Training steady states Signature space")
plt.ylabel("Reactive average (VAR)")
plt.xlabel("Active average (W)");
h.steady_states.head()
h.steady_states.tail()
h.centroids
h.model
ax = mains.plot()
h.steady_states['active average'].plot(style='o', ax = ax);
plt.ylabel("Power (W)")
plt.xlabel("Time");
#plt.show()
h.pair_df.head()
pair_shape_df = pd.DataFrame(columns=['Height','Duration'])
pair_shape_df['Height']= (h.pair_df['T1 Active'].abs()+h.pair_df['T2 Active'].abs())/2
pair_shape_df['Duration']= pd.to_timedelta(h.pair_df['T2 Time']-h.pair_df['T1 Time'],unit='s').dt.seconds
pair_shape_df.head()
fig = plt.figure(figsize=(13,6))
ax = fig.add_subplot(1, 1, 1)
ax.set_yscale('log')
ax.scatter(pair_shape_df['Height'],pair_shape_df['Duration'])
#plt.plot((x1, x2), (y1, y2), 'k-')
ax.plot((h.centroids[('power','active')],
h.centroids[('power','active')]),
(h.centroids[('power','active')]*0,
h.centroids[('power','active')]*0+10000)
,marker='x',c=(0.0, 0.0, 0.0))
#ax.axvline(h.centroids[('power','active')], color='k', linestyle='--')
plt.legend(['Transitions','Centroids'],loc=1)
plt.title("Paired event - Signature Space")
plt.ylabel("Log Duration (sec)")
plt.xlabel("Transition (W)");
gjw.set_window('2015-07-13 00:00:00','2015-07-14 00:00:00')
elec = gjw.buildings[building_number].elec
mains = elec.mains()
mains.plot()
ax = mains.plot()
h.steady_states['active average'].plot(style='o', ax = ax);
plt.ylabel("Power (W)")
plt.xlabel("Time");
disag_filename = join(data_dir, 'disag_gjw_hart.hdf5')
output = HDFDataStore(disag_filename, 'w')
h.disaggregate(mains,output,sample_period=1)
output.close()
ax = mains.plot()
h.steady_states['active average'].plot(style='o', ax = ax);
plt.ylabel("Power (W)")
plt.xlabel("Time");
disag_hart = DataSet(disag_filename)
disag_hart
disag_hart_elec = disag_hart.buildings[building_number].elec
disag_hart_elec
disag_hart_elec.mains()
h.centroids
h.model
h.steady_states
from nilmtk.metrics import f1_score
f1_hart = f1_score(disag_hart_elec, test_elec) # note: test_elec (the ground-truth MeterGroup) must be defined beforehand
f1_hart.index = disag_hart_elec.get_labels(f1_hart.index)
f1_hart.plot(kind='barh')
plt.ylabel('appliance');
plt.xlabel('f-score');
plt.title("Hart");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Show versions for any diagnostics
Step2: Load dataset
Step3: Period of interest 4 days during holiday
Step4: Training
Step5: Set two days for Disaggregation period of interest
Step6: Disaggregate using Hart (Active data only)
|
10,649
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
def rfam_uri(family_id):
return 'http://rfam.xfam.org/family/%s/alignment?acc=%s&format=fastau&download=0'%(family_id,family_id)
def rfam_uri(family_id):
return '%s.fa'%(family_id)
rfam_id = 'RF00005' #tRNA
rfam_id = 'RF00871' #microRNA mir-689
rfam_id = 'RF02275' #Hammerhead_HH9
import re
from eden import util
def fasta_to_fasta( input ):
for m in re.finditer(r"^(>[^\n]+)\n+([^>]+)",'\n'.join(util.read( input )), re.MULTILINE):
if m:
header, seq = m.groups()
seq = re.sub('\n','',seq)
yield header
yield seq
iterable = fasta_to_fasta(rfam_uri(rfam_id))
[line for line in iterable]
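The header/sequence regex used by fasta_to_fasta can also be exercised on an in-memory FASTA string, without the eden helper (a standalone sketch with made-up sequences):

```python
import re

fasta = ">seq1 desc\nACGU\nGGCC\n>seq2\nAUGC\n"   # hypothetical records
records = []
for m in re.finditer(r"^(>[^\n]+)\n+([^>]+)", fasta, re.MULTILINE):
    header, seq = m.groups()
    records.append((header, seq.replace("\n", "")))   # join wrapped sequence lines
print(records)  # [('>seq1 desc', 'ACGUGGCC'), ('>seq2', 'AUGC')]
```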
import networkx as nx
def sequence_dotbracket_to_graph(seq_info, seq_struct):
G = nx.Graph()
lifo = list()
for i,(c,b) in enumerate( zip(seq_info, seq_struct) ):
G.add_node(i, label = c)
if i > 0:
#add backbone edges
G.add_edge(i,i-1, label='-')
if b == '(':
lifo.append(i)
if b == ')':
#when a closing bracket is found, add a basepair edge with the corresponding opening bracket
j = lifo.pop()
G.add_edge(i,j, label='=')
return G
import subprocess as sp
def pre_process(input):
lines = fasta_to_fasta(input)
for line in lines:
#get a header+sequence
header = line
seq = lines.next()
#invoke RNAfold
cmd = 'echo "%s" | RNAfold --noPS' % seq
out = sp.check_output(cmd, shell = True)
#parse the output
text = out.strip().split('\n')
seq_info = text[0]
seq_struct = text[1].split()[0]
#make a graph
G = sequence_dotbracket_to_graph(seq_info, seq_struct)
G.graph['id'] = header
yield G
from eden.util.display import draw_graph
import itertools
graphs = pre_process(rfam_uri(rfam_id))
for graph in itertools.islice(graphs,2):
draw_graph(graph, size=11, node_size=200, node_border=False)
from eden.graph import Vectorizer
vectorizer = Vectorizer( complexity=2 )
def describe(X):
print 'Instances: %d ; Features: %d with an avg of %d features per instance' % (X.shape[0], X.shape[1], X.getnnz()/X.shape[0])
graphs = pre_process(rfam_uri(rfam_id))
Xp = vectorizer.transform( graphs )
describe(Xp)
import random
def shuffle_fasta( input , times=1, order=1):
iterable = fasta_to_fasta(input)
for line in iterable:
#get pairs of header-sequence
header = line
seq = iterable.next()
#shuffle
for i in range(times):
            #split the sequence in substrings of length 'order'
kmers = [ seq[i:i+order] for i in range(0,len(seq),order) ]
#shuffle and join
random.shuffle(kmers)
seq_out = ''.join(kmers)
yield header
yield seq_out
iterable = shuffle_fasta( rfam_uri(rfam_id), times=2, order=2)
graphs = pre_process( iterable )
Xn = vectorizer.transform( graphs )
describe(Xn)
import numpy as np
from scipy.sparse import vstack
yp = [1] * Xp.shape[0]
yn = [-1] * Xn.shape[0]
y = np.array(yp + yn)
X = vstack( [Xp,Xn] , format = "csr")
from sklearn.linear_model import SGDClassifier
estimator = SGDClassifier(class_weight='auto', shuffle = True )
estimator.fit(X,y)
from sklearn import cross_validation
print 'Predictive performance:'
#assess the generalization capacity of the model via a k-fold cross validation
for scoring in ['accuracy','precision', 'recall', 'f1', 'average_precision', 'roc_auc']:
scores = cross_validation.cross_val_score( estimator, X, y, cv = 3, scoring = scoring )
print( '%20s: %.3f +- %.3f' % ( scoring, np.mean( scores ), np.std( scores ) ) )
graphs = pre_process(rfam_uri(rfam_id))
Xp = vectorizer.transform( graphs )
predictions = estimator.decision_function(Xp)
iterable = fasta_to_fasta(rfam_uri(rfam_id))
headers = [line for line in itertools.islice(iterable, 0, None, 2)]
for prediction,header in sorted(zip(predictions, headers), reverse=True):
print "Score: %0.3f %s " % (prediction, header)
%%time
from sklearn.cluster import KMeans
kmeans = KMeans( n_clusters=4 )
predictions = kmeans.fit_predict(Xp)
#get the headers of the FASTA files
iterable = fasta_to_fasta(rfam_uri(rfam_id))
headers = [line for line in itertools.islice(iterable, 0, None, 2)]
for prediction,header in sorted(zip(predictions, headers)):
print "cluster: %d %s " % (prediction, header)
graphs = itertools.islice( pre_process( rfam_uri( rfam_id ) ), 50 )
from eden.util.display import dendrogram
dendrogram( graphs, vectorizer )
%%time
graphs = itertools.islice( pre_process( rfam_uri( rfam_id ) ), 450 )
from eden.util.display import embed2D
embed2D( graphs, vectorizer )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: set an interface for data acquisition, either from file or from remote
Step2: write a parser for FASTA format
Step3: write a converter that takes a parsed fasta file as input, calls RNAfold, parses the output and builds a graph
Step4: display the graphs
Step5: Setup the vectorizer object
Step6: process the fasta, build the graphs, transform them with the vectorizer into sparse vectors
Step7: create a custom fasta parser that shuffles the sequences; these will be used as negative examples
Step8: build a data matrix by stacking the data matrix for positive and for negative instances, build a target vector with +1 for positive ids and -1 for negative ids
Step9: fit a binary classifier from the scikit ML library
Step10: evaluate the quality of the classifier using the cross-validation technique
Step11: use the decision_function of the classifier to get an indication on its confidence for each instance
|
10,650
|
<ASSISTANT_TASK:>
Python Code:
# enable showing matplotlib image inline
%matplotlib inline
# autoreload module
%load_ext autoreload
%autoreload 2
# load local package
import sys
import os
sys.path.append(os.path.join(os.getcwd(), "../../../")) # load project root
from sklearn import preprocessing
import make_model as maker
dataset = maker.load_data()
header = dataset["header"][1:] # exclude label column
y = dataset["y"]
X = dataset["X"]
scaler = preprocessing.StandardScaler().fit(X) # regularization
X_R = scaler.transform(X)
print(y.shape)
print(X_R.shape)
print(header)
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
get_headers = lambda s: [i_h[1] for i_h in enumerate(header) if s[i_h[0]]]
selector = SelectKBest(f_classif, k=10).fit(X_R, y)
selected = selector.get_support()
kbests = sorted(zip(get_headers(selected), selector.scores_[selected]), key=lambda h_s: h_s[1], reverse=True)
print(kbests)
import make_model as maker
header_index = lambda hs: [i_h[0] for i_h in enumerate(header) if i_h[1] in hs]
columns = header_index(["smile avg", "pose>pitch min"])
print([header[c] for c in columns])
model = maker.make_model(y, X, columns, save_model=True)
print(model)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Steps
Step1: Make Training Data
Step2: We use an SVM as the model for classifying meetings, and use it to search for effective features.
Step3: From here we select a few features and build the model. Since the data set is small, we keep the model as simple as possible and narrow it down to two features. The smile-related features were limited to one; for the remaining feature, after the examination below we decided to use pose>pitch min.
|
10,651
|
<ASSISTANT_TASK:>
Python Code:
from pyha.components.fsk_modulator import FSKModulator
samples_per_symbol = 64 # this is so high to make stuff plottable
symbols = [1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0]
data = []
for x in symbols:
data.extend([x] * samples_per_symbol)
plt.plot(data)
plt.title('tx data')
plt.show()
fs = 300e3
deviation = 70e3
mod = FSKModulator(deviation, fs)
tx_signal = mod.model_main(data)
# awgn channel (add some noise)
tx_signal = tx_signal + np.random.normal(scale=np.sqrt(0.5), size=len(tx_signal))
Pxx, freqs, bins, im = plt.specgram(tx_signal, Fs=fs, NFFT=64, noverlap=0)
plt.show()
gain = fs / (2 * np.pi * deviation)
demod = np.angle(tx_signal[1:] * np.conjugate(tx_signal[:-1]))
rx_demod = gain * demod
plt.plot(rx_demod)
# plt.plot(data)
plt.show()
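The discriminator above (gain times the angle of the one-sample conjugate product) can be sanity-checked on a pure tone at the deviation frequency; with the fs and deviation values assumed here it should recover a constant level of 1.0:

```python
import numpy as np

fs, deviation = 300e3, 70e3
t = np.arange(1024) / fs
tone = np.exp(2j * np.pi * deviation * t)     # constant '+1' symbol at +deviation Hz

gain = fs / (2 * np.pi * deviation)
demod = gain * np.angle(tone[1:] * np.conjugate(tone[:-1]))
print(np.allclose(demod, 1.0))                # True: the symbol level is recovered
```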
from pyha.components.moving_average import MovingAverage
mf = MovingAverage(window_len=samples_per_symbol)
rx_matched = mf.model_main(rx_demod)
plt.plot(rx_matched)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Demodulator
Step2: Matched filtering
|
10,652
|
<ASSISTANT_TASK:>
Python Code:
df = pd.read_csv('https://archive.org/download/scikit-chem_example_files/iris.csv',
header=None); df
smi = skchem.read_smiles('https://archive.org/download/scikit-chem_example_files/example.smi')
[method for method in skchem.io.__dict__ if method.startswith('read_')]
pd.DataFrame.from_smiles('https://archive.org/download/scikit-chem_example_files/example.smi')
from io import StringIO
sio = StringIO()
df.to_csv(sio)
sio.seek(0)
print(sio.read())
sio = StringIO()
smi.iloc[:2].to_sdf(sio) # don't write too many!
sio.seek(0)
print(sio.read())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Analogously with scikit-chem
Step2: Currently available
Step3: scikit-chem also adds convenience methods onto pandas.DataFrame objects.
Step4: Writing files
|
10,653
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import cv2
import matplotlib.pyplot as plt
import utils
import tensorflow as tf
%matplotlib inline
a = tf.constant(5.)
b = tf.constant(1.2345)
with tf.Session() as sess:
print('a={0}, b={1}'.format(sess.run(a), sess.run(b)))
print('a+b={0}'.format(sess.run(a+b)))
print('a-b={0}'.format(sess.run(a-b)))
print('a*b={0}'.format(sess.run(a*b)))
print('a/b={0}'.format(sess.run(a/b)))
a = tf.placeholder(dtype=tf.float32)
b = tf.placeholder(dtype=tf.float32)
op_add = tf.add(a, b)
op_sub = tf.subtract(a, b)
op_mul = tf.multiply(a, b)
op_div = tf.divide(a, b)
with tf.Session() as sess:
print('a+b={0}'.format(sess.run(op_add, feed_dict={a: 1.0, b:0.5})))
print('a-b={0}'.format(sess.run(op_sub, feed_dict={a: 1.0, b:0.5})))
print('a*b={0}'.format(sess.run(op_mul, feed_dict={a: 1.0, b:0.5})))
print('a/b={0}'.format(sess.run(op_div, feed_dict={a: 1.0, b:0.5})))
a = np.array([[1.0, 0.5]], dtype=np.float32)
b = np.array([[0.2, 0.3]], dtype=np.float32)
mat1 = tf.constant(a)
mat2 = tf.constant(b)
mat_mul1 = tf.matmul(mat1, mat2, transpose_b=True)
mat_mul2 = tf.matmul(mat1, mat2, transpose_a=True)
mat_add = tf.add(mat1, mat2)
mat_sub = tf.subtract(mat1, mat2)
mat_elemmul = tf.multiply(mat1, mat2)
with tf.Session() as sess:
dot_product1 = sess.run(mat_mul1)
dot_product2 = sess.run(mat_mul2)
add_ret = sess.run(mat_add)
sub_ret = sess.run(mat_sub)
elm_ret = sess.run(mat_elemmul)
print('1), result is {0}, shape is {1}'.format(dot_product1, dot_product1.shape))
print('2), result is {0}, shape is {1}'.format(dot_product2, dot_product2.shape))
print('3), mat_a + mat_b: result is {0}, shape is {1}'.format(add_ret, add_ret.shape))
print('4), mat_a - mat_b: result is {0}, shape is {1}'.format(sub_ret, sub_ret.shape))
print('5), mat_a .* mat_b: result is {0}, shape is {1}'.format(elm_ret, elm_ret.shape))
sample_cnt = 100
train_X = np.linspace(-3.0, 3.0, num=sample_cnt, dtype=np.float32).reshape((sample_cnt, 1))
train_y = train_X * 0.375 + 1.1
print(train_X.shape)
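Since the training data is a noiseless line, the values the optimizer should converge to can be checked in closed form with ordinary least squares (numpy only, independent of the TensorFlow graph; the variable names here are new):

```python
import numpy as np

X_ls = np.linspace(-3.0, 3.0, 100).reshape(-1, 1)
y_ls = X_ls * 0.375 + 1.1
A = np.hstack([X_ls, np.ones_like(X_ls)])            # design matrix [X, 1]
(w_ls, b_ls), *_ = np.linalg.lstsq(A, y_ls, rcond=None)
print(round(float(w_ls), 3), round(float(b_ls), 3))  # 0.375 1.1
```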
X = tf.placeholder(dtype=tf.float32)
y = tf.placeholder(dtype=tf.float32)
W = tf.Variable(tf.random_normal((1,)), name='weights')
b = tf.Variable(tf.random_normal((1,)), name='bias')
# Linear function
h = tf.add(tf.multiply(X, W), b)
# MSE cost function
diff = h - y
cost = tf.reduce_sum(tf.multiply(diff, diff)) / (2 * sample_cnt)
# GD optimizer
lr = 0.01
ad = tf.train.AdamOptimizer(learning_rate=lr).minimize(cost)
#gd = tf.train.GradientDescentOptimizer(learning_rate=lr).minimize(cost)
# initializer
init = tf.global_variables_initializer()
with tf.Session() as sess:
# first init all variables
sess.run(init)
# batch training
for epoch in range(0, 1000):
sess.run(ad, feed_dict={X: train_X, y: train_y})
if (epoch + 1) % 100 == 0:
cur_cost = sess.run(cost, feed_dict={X: train_X, y: train_y})
print('epoch: {0}, cost: {1}, W: {2}, b: {3}'.format(epoch + 1, cur_cost, sess.run(W), sess.run(b)))
# finish
final_cost = sess.run(cost, feed_dict={X: train_X, y: train_y})
print('training finished!')
print('final cost: {0}, W: {1}, b: {2}'.format(final_cost, sess.run(W), sess.run(b)))
# then plot some curves
predictions = sess.run(h, feed_dict={X: train_X})
plt.plot(train_X, train_y, 'r+', label='training')
plt.plot(train_X, predictions, 'b--', label='fitted')
plt.grid(True)
plt.legend()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Build a model by compiling a computation graph with tensorflow
Step2: 2), basic ops within a tf session
Step3: 3), define some placeholder which is actually a tensor container
Step4: 4), then we define some tensor operations
Step5: 5), launch these operations
Step6: 6), let's see how to do matrix operations
Step7: Let's play with linear regression
Step8: Implement with tensorflow
Step9: 2), model parameters
Step10: 3), build model
Step11: 4), training
|
10,654
|
<ASSISTANT_TASK:>
Python Code:
def longestGP(A , N ) :
if(N < 2 ) :
return N
length = 1
common_ratio = 1
maxlength = 1
for i in range(N - 1 ) :
if(A[i + 1 ] % A[i ] == 0 ) :
if(A[i + 1 ] // A[i ] == common_ratio ) :
length = length + 1
maxlength = max(maxlength , length )
else :
common_ratio = A[i + 1 ] // A[i ]
length = 2
else :
maxlength = max(maxlength , length )
length = 1
maxlength = max(maxlength , length )
return maxlength
arr =[1 , 2 , 4 , 7 , 14 , 28 , 56 , 89 ]
N = len(arr )
print(longestGP(arr , N ) )
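A compact variant of the same scan (a sketch that resets the ratio on every break, otherwise matching the logic above) makes it easy to check a few extra cases:

```python
def longest_gp(a):
    best = cur = 1
    ratio = None
    for prev, nxt in zip(a, a[1:]):
        if prev != 0 and nxt % prev == 0 and nxt // prev == ratio:
            cur += 1                      # GP continues with the same ratio
        elif prev != 0 and nxt % prev == 0:
            ratio, cur = nxt // prev, 2   # any divisible pair starts a GP of length 2
        else:
            ratio, cur = None, 1          # GP broken
        best = max(best, cur)
    return best

print(longest_gp([1, 2, 4, 7, 14, 28, 56, 89]))  # 4
```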
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
10,655
|
<ASSISTANT_TASK:>
Python Code:
# load default libraries
import os, sys
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# make sure gdal is correctly installed
from osgeo import gdal
import gc
%matplotlib inline
def raster2array(rasterfn):
raster = gdal.Open(rasterfn)
band = raster.GetRasterBand(1)
return band.ReadAsArray()
g_array = raster2array('global_cumul_impact_2013_all_layers.tif')
g_array_f = g_array.flatten()
(g_array_f == 0).sum()
print('The total number of non-zero values in the raw raster dataset:', g_array_f.size - (g_array_f==0).sum())
## in fact the following should be used for testing equality of float dtypes. Because the result remains\
## the same thus the simpler option is used.
## (np.isclose(g_array_f, 0.0)).sum()
## the percentile function applied to the sliced array, i.e., those with values greater than 0
quantiles = [np.percentile(g_array_f[~(g_array_f == 0)], quantile) for quantile in [1,3,5,10]]
quantiles
print('\n'.join(['Threshold cut-off value: '+ str(threshold) for threshold in quantiles]))
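The mask-zeros-then-percentile pattern used above can be checked on a small toy array (hypothetical values, using np.percentile's default linear interpolation):

```python
import numpy as np

vals = np.array([0., 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.])
nonzero = vals[~(vals == 0)]              # drop the zero (land/nodata) cells first
thresholds = [np.percentile(nonzero, q) for q in [1, 3, 5, 10]]
print(thresholds)
```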
# calculate cell-size in sqkm2
cell_size = 934.478*934.478/1000000
print(cell_size)
# the OBJECTID - ras_val table. This is a very big table and will take a long time.
input_data = pd.read_csv('result.csv')
# print fields
input_data.columns
input_data.ras_val.min()
# the attribute table containing information about province etc.
input_attr = pd.read_csv('attr.csv')
# print fileds
input_attr.columns
# total count of pixels per OBJECTID, i.e. base
result_count = input_data.groupby('OBJECTID').count().reset_index()
# filter result only in the top 1, 3, 5, 10 percentile (of least impacted marine areas)
result_1, result_3, result_5, result_10 = \
[input_data[input_data.ras_val <= threshold].groupby('OBJECTID').count().reset_index() for threshold in quantiles]
# join base to the attribute
attr_merge = pd.merge(input_attr, result_count, on = 'OBJECTID')
# join result to the above table
attr_merge_10 = pd.merge(attr_merge, result_10, how = 'left', on ='OBJECTID', suffixes = ('_base', '_result'))
# fill ras_val_result's NaN with 0, province and realms with None. This should happen earlier
attr_merge_10['ras_val_result'].fillna(0, inplace=True)
attr_merge_10['PROVINCE'].fillna('None', inplace=True)
attr_merge_10['PROVINCE_P'].fillna('None', inplace=True)
# apply an aggregate function to each sub dataframe, as a result of grouping
def apply_func(group):
overlap = group['ras_val_result'].sum()*cell_size # in sqkm
base = group['ras_val_base'].sum()*cell_size
per = overlap/base
# can have multiple columns as a result, if returened as pd.series
return pd.Series([overlap, per, base], index=['less_than_threshold', 'per_ltt', 'base'])
# code reuse: threshold
def calculate_wilderness_marine(threshold, groups):
    """threshold: cut-off value below which a pixel counts as wilderness;
    groups: a python list of attribute fields, such as ['PROVINCE', 'PROVINCE_P']"""
# filtered input data according to threshold merge
input_data_filtered = input_data[input_data.ras_val <= threshold].groupby('OBJECTID').count().reset_index()
# base merge
base_merge = pd.merge(input_attr, result_count, on = 'OBJECTID')
# merge the two above
result = pd.merge(base_merge, input_data_filtered, how='left', on='OBJECTID', suffixes=('_base', '_result'))
# solve no data issue
result['ras_val_result'].fillna(0, inplace=True)
result['PROVINCE'].fillna('None', inplace=True)
result['PROVINCE_P'].fillna('None', inplace=True)
return result.groupby(groups).apply(apply_func).reset_index()
# use 10% as threshold
calculate_wilderness_marine(quantiles[-1], ['PROVINCE', 'PROVINCE_P', 'category']).head(20)
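The left-merge plus fillna pattern that drives calculate_wilderness_marine can be seen in isolation on two toy frames (hypothetical IDs and pixel counts):

```python
import pandas as pd

base = pd.DataFrame({"OBJECTID": [1, 2, 3], "ras_val": [10, 20, 30]})   # all pixels
wild = pd.DataFrame({"OBJECTID": [1, 3], "ras_val": [2, 3]})            # wilderness pixels
m = pd.merge(base, wild, how="left", on="OBJECTID", suffixes=("_base", "_result"))
m["ras_val_result"] = m["ras_val_result"].fillna(0)   # polygons with no wild pixels -> 0
m["per"] = m["ras_val_result"] / m["ras_val_base"]
print(m["per"].tolist())   # [0.2, 0.0, 0.1]
```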
# check data integrity
input_data.OBJECTID.unique().size
# no zeros in the result data
input_data.ras_val.size
# it should not have 0, which indicates nodata in the raster data as it has been removed during the spatial analysis
input_data.ras_val.min()
# percentage of EEZ water in relation to the entire ocean
input_data.ras_val.size/g_array_f[~(g_array_f==0)].size
# all input_data are non-zero (zero indicates land and nodata)
input_data[~(input_data.ras_val == 0)].ras_val.count() == input_data.ras_val.count()
# get threshold for 10%
new_threshold = np.percentile(input_data.ras_val, 10)
old_threshold = np.percentile(g_array_f[~(g_array_f == 0)], 10)
# export wilderness distribution by province or other groupings
calculate_wilderness_marine(new_threshold, ['PROVINCE']).to_csv('export_meow_province.csv')
calculate_wilderness_marine(new_threshold, ['PROVINCE', 'PROVINCE_P', 'category']).to_csv('export_province_full.csv')
import seaborn as sns
# small multiples: distribution of percentage of less than threshold (ltt)
g = sns.FacetGrid(calculate_wilderness_marine(new_threshold, ['PROVINCE', 'PROVINCE_P', 'category']), col="category")
g.map(plt.hist, 'per_ltt', bins=50, log=True)
# MEOW province (200m and 200 nautical combined)
sns.distplot(calculate_wilderness_marine(new_threshold, ['PROVINCE']).per_ltt)
# pelagic province
sns.distplot(calculate_wilderness_marine(new_threshold, ['PROVINCE_P']).per_ltt)
# load data
wh47 = pd.read_csv('wh47.csv')
wh_attr = pd.read_csv('wh_attr.csv')
print(wh47.columns, wh_attr.columns)
# check thresholds, use new threshold
print('Old threshold: {0}\nNew threshold: {1}'.format(old_threshold, new_threshold))
# get WH statics
wh_n_base = (wh47.groupby('wdpaid').ras_val.count()*cell_size).reset_index() # all marine area
wh_n = (wh47[wh47.ras_val<new_threshold].groupby('wdpaid').ras_val.count()*cell_size).reset_index() # marine wild
# merge in order to calculate percentage (% of marine wilderness in marine area of WH sites)
a = pd.merge(wh_n_base, wh_n, on='wdpaid', suffixes=('_all', '_wild'))
a = pd.merge(wh_attr, a, how='inner', on='wdpaid')
a['per'] = a.ras_val_wild/a.ras_val_all
# export save
a.to_csv('export_wh_wilderness.csv')
# distribution of WH wilderness percentage
sns.distplot(a.per)
sns.distplot(a.ras_val_wild)
del a
input_attr.columns, wh_attr.columns
int_wh = pd.read_csv('wh_base_intersect.csv')
int_wh_attr = pd.read_csv('wh_base_intersect_attr.csv')
int_wh.columns, int_wh_attr.columns
int_wh_attr[['wdpaid', 'en_name', 'gis_area', 'PROVINCE_P', 'PROVINCE', 'category']].to_csv('wh_biogeo_intersect.csv')
# filter pixels that meet the new threshold (from EEZ)
int_wh_filter = int_wh[int_wh.ras_val < new_threshold]
# group value based on OBJECTID
int_wh_filter_group = int_wh_filter.groupby('OBJECTID_12').count().reset_index()
# attr join
int_result = pd.merge(int_wh_attr, int_wh_filter_group, on='OBJECTID_12')
# % wilderness area inside each PA within EEZ
int_result.groupby(['wdpaid', 'en_name']).ras_val.sum()*cell_size
# calculate total WH marine area, no filter applied
# group value based on OBJECTID
int_wh_group = int_wh.groupby('OBJECTID_12').count().reset_index()
# base
G_base = (pd.merge(int_wh_attr, int_wh_group, on='OBJECTID_12').groupby(['wdpaid', 'en_name']).ras_val.sum()*cell_size).reset_index()
G_wh = (int_result.groupby(['wdpaid', 'en_name']).ras_val.sum()*cell_size).reset_index()
G_base.columns, G_wh.columns
G_result = pd.merge(G_base, G_wh, how='left', on=('wdpaid', 'en_name'))
G_result.fillna(0, inplace=True)
G_result.columns = ['wdpaid', 'en_name', 'marine_area', 'marine_wild_area']
G_result['per'] = G_result.marine_wild_area/G_result.marine_area
# G_result.to_csv('export_wh_per_.csv')
G_result
wh47.columns, int_wh.columns
wh47_int = pd.merge(int_wh, int_wh_attr, on='OBJECTID_12')
wh47_int.columns
# compare differences from the two methods
a = wh47.groupby('wdpaid').ras_val.count().reset_index()
b = wh47_int.groupby('wdpaid').ras_val.count().reset_index()
c = pd.merge(a, b, on='wdpaid', suffixes=('_wh', '_int'))
c['per'] = abs(c.ras_val_wh - c.ras_val_int)/c.ras_val_wh
# c
del a, b, c
# the data to be used
## wh intersection
# filter pixels that meet the new threshold (from EEZ)
int_wh_filter = int_wh[int_wh.ras_val < new_threshold]
# group value based on OBJECTID
int_wh_filter_group = int_wh_filter.groupby('OBJECTID_12').count().reset_index()
# attr join
int_result = pd.merge(int_wh_attr, int_wh_filter_group, on='OBJECTID_12')
int_result.columns
# get unique WDPAIDs for each province
int_result.groupby('PROVINCE').wdpaid.unique()
# get province MEOW (200m + 200nm)
province = calculate_wilderness_marine(new_threshold, ['PROVINCE'])
# provinces with WH sites, nunique() return unique number of WDPAIDs
province_wh_number = pd.merge(province, int_result.groupby('PROVINCE').wdpaid.nunique().reset_index(), on='PROVINCE', how='left')
# WH area that are wilderness area within provinces
province_wh_wilderness = (int_result.groupby('PROVINCE').ras_val.sum() * cell_size).reset_index()
# get province attributes and join
a = pd.merge(province, province_wh_wilderness, on='PROVINCE', how = 'left')
# fill all NAs with 0
a.fillna(0,inplace=True)
# calculate percentage of province wilderness covered by WH
a['per_wilderness_covered_by_WH'] = a.ras_val/a.less_than_threshold
a.columns = ['PROVINCE', 'wilderness_area', 'per_wilderness_area', 'total_area', 'wh_wilderness_area', a.columns[-1]]
# ======== now get number of WH sites per Province into one single dataframe ==========
## num of WH sites
b = int_result.groupby('PROVINCE').wdpaid.nunique().reset_index()
b.columns = ['PROVINCE', 'num_wh']
## merge
a = pd.merge(a, b, how='left', on='PROVINCE')
a.fillna(0, inplace=True)
# a.sort_values('num_wh')
a.to_csv('export_gap_meow_province.csv')
# clear temp variable in case of polluting the global name space
del a
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get quantiles from the input raster data (global threshold from raw data)
Step2: The number of non-zero values is notably different from esri's calculation, which stands at 414,347,791, roughly 300,000 fewer non-zero values than calculated here. This suggests esri may be using a bigger tolerance value, i.e. a larger magnitude below which a cell is regarded as zero.
Step3: <a href='#data'>
Step4: Here I created four result tables containing only pixels that meet the criteria as specified by different thresholds
Step6: The next step will be to join the input_attr table with the filtered pixel values. Replace the result_10 table if another threshold is used.
Step7: Once all tables are joined (full attributes with pixel values), attributes can be used to specify groupings
Step8: Further aggregation could be applied here, if needed.
Step9: Use the new threshold (based on EEZ) and the function defined in the previous section to output lists of
Step10: The distribution map of wilderness within EEZ using new threshold
Step11: From the graphs, it is obvious that most provinces/pelagic provinces have a very low percentage of marine wilderness area inside them.
Step12: Gap analysis
Step13: Contrary to expectations, wilderness in WH sites calculated from the intersection differs slightly from that obtained by directly using the WH boundary to cut out the marine cumulative impact data. This is due to boundary mismatches: the intersection of WH and EEZ (with biogeography attrs) removed all land area, where the marine pressure layer may have mapped pixels (see the highlighted pixels below, in Galapagos)
Step14: At first glance there are considerable percentage differences between the two methods of calculating marine areas within WH sites; however, at the site scale, apart from Wrangel Island, the differences are quite negligible.
Step15: The above does not say anything about wilderness; by linking it with province-level wilderness values it could potentially identify priority provinces, but it does not address the question of how much wilderness is covered by WH sites. A well 'represented' province may have little of its vast wilderness enjoying WH status, and thus may still present a gap from the point of view of marine wilderness.
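That representation gap can be made concrete with a toy calculation (the areas below are invented for illustration): a province can host WH sites yet have only a sliver of its wilderness under WH status.

```python
# Invented areas (km^2) for two hypothetical provinces.
wilderness_area = {"A": 1000.0, "B": 250.0}
wh_wilderness_area = {"A": 10.0, "B": 200.0}   # wilderness falling inside WH sites

coverage_pct = {p: 100.0 * wh_wilderness_area[p] / wilderness_area[p]
                for p in wilderness_area}
# Province A contains WH sites, yet 99% of its wilderness lies outside them,
# so it still presents a gap from the wilderness point of view.
```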
|
10,656
|
<ASSISTANT_TASK:>
Python Code:
#Import the necessary modules and perform the necessary tests
import scipy as sc
import scipy.integrate  # odeint lives in the integrate subpackage; import it explicitly
import pylab as gr
sc.test("all",verbose=0)
%matplotlib inline
def SIR(U,t,p):
x,y,z=U
yNew= p["alpha"] * y * x
zNew= p["beta"] * y
dx = -yNew
dy = yNew - zNew
dz = zNew
return dx, dy, dz
p={"alpha": 0.15, "beta":0.1, "timeStop":300.0, "timeStep":0.01 }
p["Ro"]=p["alpha"]/p["beta"]
p["sampTimes"]= sc.arange(0,p["timeStop"],p["timeStep"])
N= 1e4; i0= 1e1; r0=0; s0=N-i0-r0
x0=s0/N; y0=i0/N; z0=r0/N;
p["ic"]=[x0,y0,z0]
print("N=%g with initial conditions (S,I,R)=(%g,%g,%g)"%(N,s0,i0,r0))
print("Initial conditions: ", p["ic"])
print("B(0)=%g"%(p["ic"][0]*p["Ro"]))
# Numerical integration
xyz= sc.integrate.odeint(SIR, p["ic"], p["sampTimes"], args=(p,)).transpose()
# Calculate the outbreak indicator
B= xyz[0]*p["alpha"]/p["beta"]
# Figure
fig=gr.figure(figsize=(11,5))
gr.ioff()
rows=1; cols=2
ax=list()
for n in sc.arange(rows*cols):
ax.append(fig.add_subplot(rows,cols,n+1))
ax[0].plot(p["sampTimes"], xyz[0], 'k', label=r"$(t,x(t))$")
ax[0].plot(p["sampTimes"], xyz[1], 'g', lw=3, label=r"$(t,y(t))$")
ax[0].plot(p["sampTimes"], xyz[2], 'b', label=r"$(t,z(t))$")
ax[0].plot(p["sampTimes"], B, 'r', label=r"$(t,B(t))$")
ax[0].plot([0, p["timeStop"]], [1,1], 'k--', alpha=0.4)
ax[1].plot(xyz[0], xyz[1], 'g', lw=3, label=r"$(x(t),y(t))$")
ax[1].plot(xyz[0], xyz[2], 'b', label=r"$(x(t),z(t))$")
ax[1].plot(xyz[0], B, 'r', label=r"$(x(t),B(t))$")
ax[1].plot([0, 1], [1,1], 'k--', alpha=0.4)
ax[0].legend(); ax[1].legend(loc="upper left")
gr.ion(); gr.draw()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set up a Python function that specifies the dynamics
Step2: The function SIR above takes three arguments, $U$, $t$, and $p$ that represent the states of the system, the time and the parameters, respectively.
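The function can be sanity-checked on its own before integrating: since individuals only move between compartments, the three derivatives must sum to zero. A standalone sketch (the right-hand side is restated here so the snippet is self-contained; parameter values mirror the notebook's):

```python
# Minimal restatement of the SIR right-hand side for a standalone check.
def SIR(U, t, p):
    x, y, z = U
    new_inf = p["alpha"] * y * x   # new infections
    new_rec = p["beta"] * y        # new recoveries
    return -new_inf, new_inf - new_rec, new_rec

p = {"alpha": 0.15, "beta": 0.1}
dx, dy, dz = SIR((0.999, 0.001, 0.0), 0.0, p)
total = dx + dy + dz   # conservation of population: should be ~0
```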
Step3: Integrate numerically and plot the results
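One quantity worth reading off before plotting is the outbreak indicator B(t) = x(t)·α/β computed in the code above: the infected fraction grows only while B > 1, so its initial value already tells you whether an outbreak occurs. A sketch with the same parameters:

```python
# Outbreak threshold check: dy/dt > 0 at t=0 iff x0 * alpha / beta > 1.
alpha, beta = 0.15, 0.1
N, i0 = 1e4, 1e1
x0 = (N - i0) / N          # initial susceptible fraction
B0 = x0 * alpha / beta     # initial value of the outbreak indicator
outbreak = B0 > 1.0        # True here: an epidemic takes off
```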
|
10,657
|
<ASSISTANT_TASK:>
Python Code:
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
class LeNet(nn.Module):
def __init__(self):
super(LeNet, self).__init__()
self.conv1 = nn.Conv2d(1, 6, 5)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16*5*5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
out = F.relu(self.conv1(x))
out = F.max_pool2d(out, 2)
out = F.relu(self.conv2(out))
out = F.max_pool2d(out, 2)
out = out.view(out.size(0), -1)
out = F.relu(self.fc1(out))
out = F.relu(self.fc2(out))
out = self.fc3(out)
return out
pytorch_network = LeNet()
# The most useful, just print the network
print(pytorch_network)
# Also useful: will only print those layers with params
from nn_transfer import util  # helper from the nn-transfer package used below
state_dict = pytorch_network.state_dict()
print(util.state_dict_layer_names(state_dict))
import keras
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
K.set_image_data_format('channels_first')
def lenet_keras():
model = Sequential()
model.add(Conv2D(6, kernel_size=(5, 5),
activation='relu',
input_shape=(1,32,32),
name='conv1'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(16, (5, 5), activation='relu', name='conv2'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(120, activation='relu', name='fc1'))
model.add(Dense(84, activation='relu', name='fc2'))
model.add(Dense(10, activation=None, name='fc3'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta())
return model
keras_network = lenet_keras()
from nn_transfer import transfer  # the nn-transfer conversion package
transfer.keras_to_pytorch(keras_network, pytorch_network)
# Create dummy data
data = torch.rand(6,1,32,32)
data_keras = data.numpy()
data_pytorch = Variable(data, requires_grad=False)
# Do a forward pass in both frameworks
keras_pred = keras_network.predict(data_keras)
pytorch_pred = pytorch_network(data_pytorch).data.numpy()
assert keras_pred.shape == pytorch_pred.shape
import matplotlib.pyplot as plt  # needed for the plots below

plt.axis('Off')
plt.imshow(keras_pred)
plt.show()
plt.axis('Off')
plt.imshow(pytorch_pred)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 2
Step2: Step 3
Step3: Step 4
Step4: Done!
|
10,658
|
<ASSISTANT_TASK:>
Python Code:
print("Student 1 answers question 1.")
print("Student 2 answers question 1.")
print("Student 3 answers question 1.")
print("Student 1 answers question 2.")
print("Student 3 answers question 2.")
print("Student 4 answers question 2.")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Question 2
|
10,659
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division, print_function, unicode_literals
[10.5, 5.2, 3.25, 7.0]
import numpy as np
video = np.array([10.5, 5.2, 3.25, 7.0])
video
video.size
video[2] # 3rd element
%matplotlib inline
import matplotlib.pyplot as plt
u = np.array([2, 5])
v = np.array([3, 1])
x_coords, y_coords = zip(u, v)
plt.scatter(x_coords, y_coords, color=["r","b"])
plt.axis([0, 9, 0, 6])
plt.grid()
plt.show()
def plot_vector2d(vector2d, origin=[0, 0], **options):
return plt.arrow(origin[0], origin[1], vector2d[0], vector2d[1],
head_width=0.2, head_length=0.3, length_includes_head=True,
**options)
plot_vector2d(u, color="r")
plot_vector2d(v, color="b")
plt.axis([0, 9, 0, 6])
plt.grid()
plt.show()
a = np.array([1, 2, 8])
b = np.array([5, 6, 3])
from mpl_toolkits.mplot3d import Axes3D
subplot3d = plt.subplot(111, projection='3d')
x_coords, y_coords, z_coords = zip(a,b)
subplot3d.scatter(x_coords, y_coords, z_coords)
subplot3d.set_zlim3d([0, 9])
plt.show()
def plot_vectors3d(ax, vectors3d, z0, **options):
for v in vectors3d:
x, y, z = v
ax.plot([x,x], [y,y], [z0, z], color="gray", linestyle='dotted', marker=".")
x_coords, y_coords, z_coords = zip(*vectors3d)
ax.scatter(x_coords, y_coords, z_coords, **options)
subplot3d = plt.subplot(111, projection='3d')
subplot3d.set_zlim([0, 9])
plot_vectors3d(subplot3d, [a,b], 0, color=("r","b"))
plt.show()
def vector_norm(vector):
squares = [element**2 for element in vector]
return sum(squares)**0.5
print("||", u, "|| =")
vector_norm(u)
import numpy.linalg as LA
LA.norm(u)
radius = LA.norm(u)
plt.gca().add_artist(plt.Circle((0,0), radius, color="#DDDDDD"))
plot_vector2d(u, color="red")
plt.axis([0, 8.7, 0, 6])
plt.grid()
plt.show()
print(" ", u)
print("+", v)
print("-"*10)
u + v
plot_vector2d(u, color="r")
plot_vector2d(v, color="b")
plot_vector2d(v, origin=u, color="b", linestyle="dotted")
plot_vector2d(u, origin=v, color="r", linestyle="dotted")
plot_vector2d(u+v, color="g")
plt.axis([0, 9, 0, 7])
plt.text(0.7, 3, "u", color="r", fontsize=18)
plt.text(4, 3, "u", color="r", fontsize=18)
plt.text(1.8, 0.2, "v", color="b", fontsize=18)
plt.text(3.1, 5.6, "v", color="b", fontsize=18)
plt.text(2.4, 2.5, "u+v", color="g", fontsize=18)
plt.grid()
plt.show()
t1 = np.array([2, 0.25])
t2 = np.array([2.5, 3.5])
t3 = np.array([1, 2])
x_coords, y_coords = zip(t1, t2, t3, t1)
plt.plot(x_coords, y_coords, "c--", x_coords, y_coords, "co")
plot_vector2d(v, t1, color="r", linestyle=":")
plot_vector2d(v, t2, color="r", linestyle=":")
plot_vector2d(v, t3, color="r", linestyle=":")
t1b = t1 + v
t2b = t2 + v
t3b = t3 + v
x_coords_b, y_coords_b = zip(t1b, t2b, t3b, t1b)
plt.plot(x_coords_b, y_coords_b, "b-", x_coords_b, y_coords_b, "bo")
plt.text(4, 4.2, "v", color="r", fontsize=18)
plt.text(3, 2.3, "v", color="r", fontsize=18)
plt.text(3.5, 0.4, "v", color="r", fontsize=18)
plt.axis([0, 6, 0, 5])
plt.grid()
plt.show()
print("1.5 *", u, "=")
1.5 * u
k = 2.5
t1c = k * t1
t2c = k * t2
t3c = k * t3
plt.plot(x_coords, y_coords, "c--", x_coords, y_coords, "co")
plot_vector2d(t1, color="r")
plot_vector2d(t2, color="r")
plot_vector2d(t3, color="r")
x_coords_c, y_coords_c = zip(t1c, t2c, t3c, t1c)
plt.plot(x_coords_c, y_coords_c, "b-", x_coords_c, y_coords_c, "bo")
plot_vector2d(k * t1, color="b", linestyle=":")
plot_vector2d(k * t2, color="b", linestyle=":")
plot_vector2d(k * t3, color="b", linestyle=":")
plt.axis([0, 9, 0, 9])
plt.grid()
plt.show()
plt.gca().add_artist(plt.Circle((0,0),1,color='c'))
plt.plot(0, 0, "ko")
plot_vector2d(v / LA.norm(v), color="k")
plot_vector2d(v, color="b", linestyle=":")
plt.text(0.3, 0.3, "$\hat{u}$", color="k", fontsize=18)
plt.text(1.5, 0.7, "$u$", color="b", fontsize=18)
plt.axis([-1.5, 5.5, -1.5, 3.5])
plt.grid()
plt.show()
def dot_product(v1, v2):
return sum(v1i * v2i for v1i, v2i in zip(v1, v2))
dot_product(u, v)
np.dot(u,v)
u.dot(v)
print(" ",u)
print("* ",v, "(NOT a dot product)")
print("-"*10)
u * v
def vector_angle(u, v):
cos_theta = u.dot(v) / LA.norm(u) / LA.norm(v)
return np.arccos(np.clip(cos_theta, -1, 1))
theta = vector_angle(u, v)
print("Angle =", theta, "radians")
print(" =", theta * 180 / np.pi, "degrees")
u_normalized = u / LA.norm(u)
proj = v.dot(u_normalized) * u_normalized
plot_vector2d(u, color="r")
plot_vector2d(v, color="b")
plot_vector2d(proj, color="k", linestyle=":")
plt.plot(proj[0], proj[1], "ko")
plt.plot([proj[0], v[0]], [proj[1], v[1]], "b:")
plt.text(1, 2, "$proj_u v$", color="k", fontsize=18)
plt.text(1.8, 0.2, "$v$", color="b", fontsize=18)
plt.text(0.8, 3, "$u$", color="r", fontsize=18)
plt.axis([0, 8, 0, 5.5])
plt.grid()
plt.show()
[
[10, 20, 30],
[40, 50, 60]
]
A = np.array([
[10,20,30],
[40,50,60]
])
A
A.shape
A.size
A[1,2] # 2nd row, 3rd column
A[1, :] # 2nd row vector (as a 1D array)
A[:, 2] # 3rd column vector (as a 1D array)
A[1:2, :] # rows 2 to 3 (excluded): this returns row 2 as a one-row matrix
A[:, 2:3] # columns 3 to 4 (excluded): this returns column 3 as a one-column matrix
np.diag([4, 5, 6])
D = np.array([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
])
np.diag(D)
np.eye(3)
B = np.array([[1,2,3], [4, 5, 6]])
B
A
A + B
B + A
C = np.array([[100,200,300], [400, 500, 600]])
A + (B + C)
(A + B) + C
2 * A
A * 2
2 * (3 * A)
(2 * 3) * A
2 * (A + B)
2 * A + 2 * B
D = np.array([
[ 2, 3, 5, 7],
[11, 13, 17, 19],
[23, 29, 31, 37]
])
E = A.dot(D)
E
40*5 + 50*17 + 60*31
E[1,2] # row 2, column 3
try:
D.dot(A)
except ValueError as e:
print("ValueError:", e)
F = np.array([
[5,2],
[4,1],
[9,3]
])
A.dot(F)
F.dot(A)
G = np.array([
[8, 7, 4, 2, 5],
[2, 5, 1, 0, 5],
[9, 11, 17, 21, 0],
[0, 1, 0, 1, 2]])
A.dot(D).dot(G) # (AB)G
A.dot(D.dot(G)) # A(BG)
(A + B).dot(D)
A.dot(D) + B.dot(D)
A.dot(np.eye(3))
np.eye(2).dot(A)
A * B # NOT a matrix multiplication
import sys
print("Python version: {}.{}.{}".format(*sys.version_info))
print("Numpy version:", np.version.version)
# Uncomment the following line if your Python version is ≥3.5
# and your NumPy version is ≥1.10:
#A @ D
A
A.T
A.T.T
(A + B).T
A.T + B.T
(A.dot(D)).T
D.T.dot(A.T)
D.dot(D.T)
u
u.T
u_row = np.array([u])
u_row
u[np.newaxis, :]
u[np.newaxis]
u[None]
u_row.T
u[:, np.newaxis]
P = np.array([
[3.0, 4.0, 1.0, 4.6],
[0.2, 3.5, 2.0, 0.5]
])
x_coords_P, y_coords_P = P
plt.scatter(x_coords_P, y_coords_P)
plt.axis([0, 5, 0, 4])
plt.show()
plt.plot(x_coords_P, y_coords_P, "bo")
plt.plot(x_coords_P, y_coords_P, "b--")
plt.axis([0, 5, 0, 4])
plt.grid()
plt.show()
from matplotlib.patches import Polygon
plt.gca().add_artist(Polygon(P.T))
plt.axis([0, 5, 0, 4])
plt.grid()
plt.show()
H = np.array([
[ 0.5, -0.2, 0.2, -0.1],
[ 0.4, 0.4, 1.5, 0.6]
])
P_moved = P + H
plt.gca().add_artist(Polygon(P.T, alpha=0.2))
plt.gca().add_artist(Polygon(P_moved.T, alpha=0.3, color="r"))
for vector, origin in zip(H.T, P.T):
plot_vector2d(vector, origin=origin)
plt.text(2.2, 1.8, "$P$", color="b", fontsize=18)
plt.text(2.0, 3.2, "$P+H$", color="r", fontsize=18)
plt.text(2.5, 0.5, "$H_{*,1}$", color="k", fontsize=18)
plt.text(4.1, 3.5, "$H_{*,2}$", color="k", fontsize=18)
plt.text(0.4, 2.6, "$H_{*,3}$", color="k", fontsize=18)
plt.text(4.4, 0.2, "$H_{*,4}$", color="k", fontsize=18)
plt.axis([0, 5, 0, 4])
plt.grid()
plt.show()
H2 = np.array([
[-0.5, -0.5, -0.5, -0.5],
[ 0.4, 0.4, 0.4, 0.4]
])
P_translated = P + H2
plt.gca().add_artist(Polygon(P.T, alpha=0.2))
plt.gca().add_artist(Polygon(P_translated.T, alpha=0.3, color="r"))
for vector, origin in zip(H2.T, P.T):
plot_vector2d(vector, origin=origin)
plt.axis([0, 5, 0, 4])
plt.grid()
plt.show()
P + [[-0.5], [0.4]] # same as P + H2, thanks to NumPy broadcasting
def plot_transformation(P_before, P_after, text_before, text_after, axis = [0, 5, 0, 4], arrows=False):
if arrows:
for vector_before, vector_after in zip(P_before.T, P_after.T):
plot_vector2d(vector_before, color="blue", linestyle="--")
plot_vector2d(vector_after, color="red", linestyle="-")
plt.gca().add_artist(Polygon(P_before.T, alpha=0.2))
plt.gca().add_artist(Polygon(P_after.T, alpha=0.3, color="r"))
plt.text(P_before[0].mean(), P_before[1].mean(), text_before, fontsize=18, color="blue")
plt.text(P_after[0].mean(), P_after[1].mean(), text_after, fontsize=18, color="red")
plt.axis(axis)
plt.grid()
P_rescaled = 0.60 * P
plot_transformation(P, P_rescaled, "$P$", "$0.6 P$", arrows=True)
plt.show()
U = np.array([[1, 0]])
U.dot(P)
def plot_projection(U, P):
U_P = U.dot(P)
axis_end = 100 * U
plot_vector2d(axis_end[0], color="black")
plt.gca().add_artist(Polygon(P.T, alpha=0.2))
for vector, proj_coordinate in zip(P.T, U_P.T):
proj_point = proj_coordinate * U
plt.plot(proj_point[0][0], proj_point[0][1], "ro")
plt.plot([vector[0], proj_point[0][0]], [vector[1], proj_point[0][1]], "r--")
plt.axis([0, 5, 0, 4])
plt.grid()
plt.show()
plot_projection(U, P)
angle30 = 30 * np.pi / 180 # angle in radians
U_30 = np.array([[np.cos(angle30), np.sin(angle30)]])
plot_projection(U_30, P)
angle120 = 120 * np.pi / 180
V = np.array([
[np.cos(angle30), np.sin(angle30)],
[np.cos(angle120), np.sin(angle120)]
])
V
V.dot(P)
P_rotated = V.dot(P)
plot_transformation(P, P_rotated, "$P$", "$VP$", [-2, 6, -2, 4], arrows=True)
plt.show()
F_shear = np.array([
[1, 1.5],
[0, 1]
])
plot_transformation(P, F_shear.dot(P), "$P$", "$F_{shear} P$",
axis=[0, 10, 0, 7])
plt.show()
Square = np.array([
[0, 0, 1, 1],
[0, 1, 1, 0]
])
plot_transformation(Square, F_shear.dot(Square), "$Square$", "$F_{shear} Square$",
axis=[0, 2.6, 0, 1.8])
plt.show()
F_squeeze = np.array([
[1.4, 0],
[0, 1/1.4]
])
plot_transformation(P, F_squeeze.dot(P), "$P$", "$F_{squeeze} P$",
axis=[0, 7, 0, 5])
plt.show()
plot_transformation(Square, F_squeeze.dot(Square), "$Square$", "$F_{squeeze} Square$",
axis=[0, 1.8, 0, 1.2])
plt.show()
F_reflect = np.array([
[1, 0],
[0, -1]
])
plot_transformation(P, F_reflect.dot(P), "$P$", "$F_{reflect} P$",
axis=[-2, 9, -4.5, 4.5])
plt.show()
F_inv_shear = np.array([
[1, -1.5],
[0, 1]
])
P_sheared = F_shear.dot(P)
P_unsheared = F_inv_shear.dot(P_sheared)
plot_transformation(P_sheared, P_unsheared, "$P_{sheared}$", "$P_{unsheared}$",
axis=[0, 10, 0, 7])
plt.plot(P[0], P[1], "b--")
plt.show()
F_inv_shear = LA.inv(F_shear)
F_inv_shear
plt.plot([0, 0, 1, 1, 0, 0.1, 0.1, 0, 0.1, 1.1, 1.0, 1.1, 1.1, 1.0, 1.1, 0.1],
[0, 1, 1, 0, 0, 0.1, 1.1, 1.0, 1.1, 1.1, 1.0, 1.1, 0.1, 0, 0.1, 0.1],
"r-")
plt.axis([-0.5, 2.1, -0.5, 1.5])
plt.show()
F_project = np.array([
[1, 0],
[0, 0]
])
plot_transformation(P, F_project.dot(P), "$P$", "$F_{project} \cdot P$",
axis=[0, 6, -1, 4])
plt.show()
try:
LA.inv(F_project)
except LA.LinAlgError as e:
print("LinAlgError:", e)
angle30 = 30 * np.pi / 180
F_project_30 = np.array([
[np.cos(angle30)**2, np.sin(2*angle30)/2],
[np.sin(2*angle30)/2, np.sin(angle30)**2]
])
plot_transformation(P, F_project_30.dot(P), "$P$", "$F_{project\_30} \cdot P$",
axis=[0, 6, -1, 4])
plt.show()
LA.inv(F_project_30)
F_shear.dot(LA.inv(F_shear))
LA.inv(LA.inv(F_shear))
F_involution = np.array([
[0, -2],
[-1/2, 0]
])
plot_transformation(P, F_involution.dot(P), "$P$", "$F_{involution} \cdot P$",
axis=[-8, 5, -4, 4])
plt.show()
F_reflect.dot(F_reflect.T)
M = np.array([
[1, 2, 3],
[4, 5, 6],
[7, 8, 0]
])
LA.det(M)
LA.det(F_project)
LA.det(F_project_30)
LA.det(F_shear)
F_scale = np.array([
[0.5, 0],
[0, 0.5]
])
plot_transformation(P, F_scale.dot(P), "$P$", "$F_{scale} \cdot P$",
axis=[0, 6, -1, 4])
plt.show()
LA.det(F_scale)
LA.det(F_reflect)
P_squeezed_then_sheared = F_shear.dot(F_squeeze.dot(P))
P_squeezed_then_sheared = (F_shear.dot(F_squeeze)).dot(P)
F_squeeze_then_shear = F_shear.dot(F_squeeze)
P_squeezed_then_sheared = F_squeeze_then_shear.dot(P)
LA.inv(F_shear.dot(F_squeeze)) == LA.inv(F_squeeze).dot(LA.inv(F_shear))
U, S_diag, V_T = LA.svd(F_shear) # note: in python 3 you can rename S_diag to Σ_diag
U
S_diag
S = np.diag(S_diag)
S
U.dot(np.diag(S_diag)).dot(V_T)
F_shear
plot_transformation(Square, V_T.dot(Square), "$Square$", "$V^T \cdot Square$",
axis=[-0.5, 3.5 , -1.5, 1.5])
plt.show()
plot_transformation(V_T.dot(Square), S.dot(V_T).dot(Square), "$V^T \cdot Square$", "$\Sigma \cdot V^T \cdot Square$",
axis=[-0.5, 3.5 , -1.5, 1.5])
plt.show()
plot_transformation(S.dot(V_T).dot(Square), U.dot(S).dot(V_T).dot(Square),"$\Sigma \cdot V^T \cdot Square$", "$U \cdot \Sigma \cdot V^T \cdot Square$",
axis=[-0.5, 3.5 , -1.5, 1.5])
plt.show()
eigenvalues, eigenvectors = LA.eig(F_squeeze)
eigenvalues # [λ0, λ1, …]
eigenvectors # [v0, v1, …]
eigenvalues2, eigenvectors2 = LA.eig(F_shear)
eigenvalues2 # [λ0, λ1, …]
eigenvectors2 # [v0, v1, …]
D = np.array([
[100, 200, 300],
[ 10, 20, 30],
[ 1, 2, 3],
])
np.trace(D)
np.trace(F_project)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Vectors
Step2: Since we plan to do quite a lot of scientific calculations, it is much better to use NumPy's ndarray, which provides a lot of convenient and optimized implementations of essential mathematical operations on vectors (for more details about NumPy, check out the NumPy tutorial). For example
Step3: The size of a vector can be obtained using the size attribute
Step4: The $i^{th}$ element (also called entry or item) of a vector $\textbf{v}$ is noted $\textbf{v}_i$.
Step5: Plotting vectors
Step6: 2D vectors
Step7: These vectors each have 2 elements, so they can easily be represented graphically on a 2D graph, for example as points
Step8: Vectors can also be represented as arrows. Let's create a small convenience function to draw nice arrows
Step9: Now let's draw the vectors u and v as arrows
Step10: 3D vectors
Step11: Now let's plot them using matplotlib's Axes3D
Step12: It is a bit hard to visualize exactly where in space these two points are, so let's add vertical lines. We'll create a small convenience function to plot a list of 3d vectors with vertical lines attached
Step13: Norm
Step14: However, it is much more efficient to use NumPy's norm function, available in the linalg (Linear Algebra) module
Step15: Let's plot a little diagram to confirm that the length of vector $\textbf{u}$ is indeed $\approx5.4$
Step16: Looks about right!
Step17: Let's look at what vector addition looks like graphically
Step18: Vector addition is commutative, meaning that $\textbf{u} + \textbf{v} = \textbf{v} + \textbf{u}$. You can see it on the previous image
Step19: Finally, subtracting a vector is like adding the opposite vector.
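A one-line check of that identity with NumPy (same u and v as above):

```python
import numpy as np

u = np.array([2, 5])
v = np.array([3, 1])

# Subtracting v is the same as adding its opposite, element by element.
diff = u - v
via_opposite = u + (-v)
```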
Step20: Graphically, scalar multiplication results in changing the scale of a figure, hence the name scalar. The distance from the origin (the point at coordinates equal to zero) is also multiplied by the scalar. For example, let's scale up by a factor of k = 2.5
Step21: As you might guess, dividing a vector by a scalar is equivalent to multiplying by its inverse
Step22: Dot product
Step23: But a much more efficient implementation is provided by NumPy with the dot function
Step24: Equivalently, you can use the dot method of ndarrays
Step25: Caution
Step26: Main properties
Step27: Note
Step28: Matrices
Step29: A much more efficient way is to use the NumPy library which provides optimized implementations of many matrix operations
Step30: By convention matrices generally have uppercase names, such as $A$.
Step31: Caution
Step32: Element indexing
Step33: The $i^{th}$ row vector is sometimes noted $M_i$ or $M_{i,}$, but again there is no standard notation so people often prefer to explicitly define their own names, for example
Step34: Similarly, the $j^{th}$ column vector is sometimes noted $M^j$ or $M_{,j}$, but there is no standard notation. We will use $M_{,j}$. For example, to access $A_{*,3}$ (ie. $A$'s 3rd column vector)
Step35: Note that the result is actually a one-dimensional NumPy array
Step36: Square, triangular, diagonal and identity matrices
Step37: If you pass a matrix to the diag function, it will happily extract the diagonal values
Step38: Finally, the identity matrix of size $n$, noted $I_n$, is a diagonal matrix of size $n \times n$ with $1$'s in the main diagonal, for example $I_3$
Step39: The identity matrix is often noted simply $I$ (instead of $I_n$) when its size is clear given the context. It is called the identity matrix because multiplying a matrix with it leaves the matrix unchanged as we will see below.
Step40: Addition is commutative, meaning that $A + B = B + A$
Step41: It is also associative, meaning that $A + (B + C) = (A + B) + C$
Step42: Scalar multiplication
Step43: Scalar multiplication is also defined on the right hand side, and gives the same result
Step44: This makes scalar multiplication commutative.
Step45: Finally, it is distributive over addition of matrices, meaning that $\lambda (Q + R) = \lambda Q + \lambda R$
Step46: Matrix multiplication
Step47: Let's check this result by looking at one element, just to be sure
Step48: Looks good! You can check the other elements until you get used to the algorithm.
Step49: This illustrates the fact that matrix multiplication is NOT commutative
Step50: On the other hand, matrix multiplication is associative, meaning that $Q(RS) = (QR)S$. Let's create a $4 \times 5$ matrix $G$ to illustrate this
Step51: It is also distributive over addition of matrices, meaning that $(Q + R)S = QS + RS$. For example
Step52: The product of a matrix $M$ by the identity matrix (of matching size) results in the same matrix $M$. More formally, if $M$ is an $m \times n$ matrix, then
Step53: Caution
Step54: The @ infix operator
Step55: Note
Step56: As you might expect, transposing a matrix twice returns the original matrix
Step57: Transposition is distributive over addition of matrices, meaning that $(Q + R)^T = Q^T + R^T$. For example
Step58: Moreover, $(Q \cdot R)^T = R^T \cdot Q^T$. Note that the order is reversed. For example
Step59: A symmetric matrix $M$ is defined as a matrix that is equal to its transpose
Step60: Converting 1D arrays to 2D arrays in NumPy
Step61: We want to convert $\textbf{u}$ into a row vector before transposing it. There are a few ways to do this
Step62: Notice the extra square brackets
Step63: This is quite explicit
Step64: This is equivalent, but a little less explicit.
Step65: This is the shortest version, but you probably want to avoid it because it is unclear. The reason it works is that np.newaxis is actually equal to None, so this is equivalent to the previous version.
Step66: Great! We now have a nice column vector.
Step67: Plotting a matrix
Step68: Of course we could also have stored the same 4 vectors as row vectors instead of column vectors, resulting in a $4 \times 2$ matrix (the transpose of $P$, in fact). It is really an arbitrary choice.
Step69: Or you can represent it as a polygon
Step70: Geometric applications of matrix operations
Step71: If we add a matrix full of identical vectors, we get a simple geometric translation
Step72: Although matrices can only be added together if they have the same size, NumPy allows adding a row vector or a column vector to a matrix
Step73: Scalar multiplication
Step74: Matrix multiplication – Projection onto an axis
Step75: Now let's look at the dot product $U \cdot P$
Step76: These are the horizontal coordinates of the vectors in $P$. In other words, we just projected $P$ onto the horizontal axis
Step77: We can actually project on any other axis by just replacing $U$ with any other unit vector. For example, let's project on the axis that is at a 30° angle above the horizontal axis
Step78: Good! Remember that the dot product of a unit vector and a matrix basically performs a projection on an axis and gives us the coordinates of the resulting points on that axis.
Step79: Let's look at the product $VP$
Step80: The first row is equal to $V_{1,} P$, which is the coordinates of the projection of $P$ onto the 30° axis, as we have seen above. The second row is $V_{2,} P$, which is the coordinates of the projection of $P$ onto the 120° axis. So basically we obtained the coordinates of $P$ after rotating the horizontal and vertical axes by 30° (or equivalently after rotating the polygon by -30° around the origin)! Let's plot $VP$ to see this
Step81: Matrix $V$ is called a rotation matrix.
Step82: Let's look at how this transformation affects the unit square
Step83: Now let's look at a squeeze mapping
Step84: The effect on the unit square is
Step85: Let's show a last one
Step86: Matrix inverse
Step87: We applied a shear mapping on $P$, just like we did before, but then we applied a second transformation to the result, and lo and behold this had the effect of coming back to the original $P$ (we plotted the original $P$'s outline to double check). The second transformation is the inverse of the first one.
Step88: Only square matrices can be inversed. This makes sense when you think about it
Step89: Looking at this image, it is impossible to tell whether this is the projection of a cube or the projection of a narrow rectangular object. Some information has been lost in the projection.
Step90: This transformation matrix performs a projection onto the horizontal axis. Our polygon gets entirely flattened out so some information is entirely lost and it is impossible to go back to the original polygon using a linear transformation. In other words, $F_{project}$ has no inverse. Such a square matrix that cannot be inversed is called a singular matrix (aka degenerate matrix). If we ask NumPy to calculate its inverse, it raises an exception
Step91: Here is another example of a singular matrix. This one performs a projection onto the axis at a 30° angle above the horizontal axis
Step92: But this time, due to floating point rounding errors, NumPy manages to calculate an inverse (notice how large the elements are, though)
Step93: As you might expect, the dot product of a matrix by its inverse results in the identity matrix
Step94: Another way to express this is that the inverse of the inverse of a matrix $M$ is $M$ itself
Step95: Also, the inverse of scaling by a factor of $\lambda$ is of course scaling by a factor or $\frac{1}{\lambda}$
Step96: Finally, a square matrix $H$ whose inverse is its own transpose is an orthogonal matrix
Step97: Determinant
Step98: One of the main uses of the determinant is to determine whether a square matrix can be inversed or not
Step99: That's right, $F_{project}$ is singular, as we saw earlier.
Step100: This determinant is suspiciously close to 0
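The near-zero determinant of the 30° projection matrix can be checked directly: analytically it is exactly 0 (the matrix has rank 1), and only floating-point rounding keeps the numerical result from vanishing. A self-contained sketch:

```python
import numpy as np

angle = np.pi / 6  # 30 degrees
F_project_30 = np.array([
    [np.cos(angle)**2,  np.sin(2*angle)/2],
    [np.sin(2*angle)/2, np.sin(angle)**2]
])
det = np.linalg.det(F_project_30)   # on the order of 1e-17 rather than exactly 0
```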
Step101: Perfect! This matrix can be inversed as we saw earlier. Wow, math really works!
Step102: We rescaled the polygon by a factor of 1/2 on both vertical and horizontal axes so the surface area of the resulting polygon is 1/4$^{th}$ of the original polygon. Let's compute the determinant and check that
Step103: Correct!
Step104: Composing linear transformations
Step105: Since the dot product is associative, the following code is equivalent
Step106: Note that the order of the transformations is the reverse of the dot product order.
Step107: From now on we can perform both transformations in just one dot product, which can lead to a very significant performance boost.
Step108: Singular Value Decomposition
Step109: Note that this is just a 1D array containing the diagonal values of Σ. To get the actual matrix Σ, we can use NumPy's diag function
Step110: Now let's check that $U \cdot \Sigma \cdot V^T$ is indeed equal to F_shear
Step111: It worked like a charm. Let's apply these transformations one by one (in reverse order) on the unit square to understand what's going on. First, let's apply the first rotation $V^T$
Step112: Now let's rescale along the vertical and horizontal axes using $\Sigma$
Step113: Finally, we apply the second rotation $U$
Step114: And we can see that the result is indeed a shear mapping of the original unit square.
Step115: Indeed the horizontal vectors are stretched by a factor of 1.4, and the vertical vectors are shrunk by a factor of 1/1.4=0.714…, so far so good. Let's look at the shear mapping matrix $F_{shear}$
Step116: Wait, what!? We expected just one unit eigenvector, not two. The second vector is almost equal to $\begin{pmatrix}-1 \ 0 \end{pmatrix}$, which is on the same line as the first vector $\begin{pmatrix}1 \ 0 \end{pmatrix}$. This is due to floating point errors. We can safely ignore vectors that are (almost) colinear (ie. on the same line).
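Colinearity of the two returned eigenvectors can be verified with a 2D cross product, which vanishes for parallel vectors (a sketch using the same shear matrix):

```python
import numpy as np

F_shear = np.array([[1.0, 1.5],
                    [0.0, 1.0]])
eigenvalues, eigenvectors = np.linalg.eig(F_shear)

v0, v1 = eigenvectors[:, 0], eigenvectors[:, 1]
cross = v0[0] * v1[1] - v0[1] * v1[0]   # ~0 means the vectors are (near-)colinear
```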
Step117: The trace does not have a simple geometric interpretation (in general), but it has a number of properties that make it useful in many areas
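One of those useful properties is invariance under cyclic permutation: tr(AB) = tr(BA) even when AB ≠ BA. A quick check:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [2, 3]])

tr_ab = np.trace(A @ B)
tr_ba = np.trace(B @ A)
# The two products differ, but their traces agree.
```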
|
10,660
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import openmc
import numpy as np
import matplotlib.pyplot as plt
# Define fuel and B4C materials
fuel = openmc.Material()
fuel.add_element('U', 1.0, enrichment=4.5)
fuel.add_nuclide('O16', 2.0)
fuel.set_density('g/cm3', 10.0)
b4c = openmc.Material()
b4c.add_element('B', 4.0)
b4c.add_nuclide('C0', 1.0)
b4c.set_density('g/cm3', 2.5)
# Define surfaces used to construct regions
zmin, zmax = -10., 10.
box = openmc.model.get_rectangular_prism(10., 10., boundary_type='reflective')
bottom = openmc.ZPlane(z0=zmin, boundary_type='vacuum')
boron_lower = openmc.ZPlane(z0=-0.5)
boron_upper = openmc.ZPlane(z0=0.5)
top = openmc.ZPlane(z0=zmax, boundary_type='vacuum')
# Create three cells and add them to geometry
fuel1 = openmc.Cell(fill=fuel, region=box & +bottom & -boron_lower)
absorber = openmc.Cell(fill=b4c, region=box & +boron_lower & -boron_upper)
fuel2 = openmc.Cell(fill=fuel, region=box & +boron_upper & -top)
geom = openmc.Geometry([fuel1, absorber, fuel2])
settings = openmc.Settings()
spatial_dist = openmc.stats.Box(*geom.bounding_box)
settings.source = openmc.Source(space=spatial_dist)
settings.batches = 210
settings.inactive = 10
settings.particles = 1000
# Create a flux tally
flux_tally = openmc.Tally()
flux_tally.scores = ['flux']
# Create a Legendre polynomial expansion filter and add to tally
order = 8
expand_filter = openmc.LegendreFilter(order)
flux_tally.filters.append(expand_filter)
tallies = openmc.Tallies([flux_tally])
model = openmc.model.Model(geometry=geom, settings=settings, tallies=tallies)
model.export_to_xml()
model.run(output=False)
with openmc.StatePoint('statepoint.210.h5') as sp:
df = sp.tallies[flux_tally.id].get_pandas_dataframe()
df
n = np.arange(order + 1)
a_n = (2*n + 1)/2 * df['mean']
phi = np.polynomial.Legendre(a_n/10, domain=(zmin, zmax))
z = np.linspace(zmin, zmax, 1000)
plt.plot(z, phi(z))
plt.xlabel('Z position [cm]')
plt.ylabel('Flux [n/src]')
np.trapz(phi(z), z)
import matplotlib.pyplot as plt
%matplotlib inline
import openmc
from openmc.examples import pwr_pin_cell
import numpy as np
from series.Zernike import Zernike
model = openmc.model.Model()
# Define materials.
fuel = openmc.Material(name='UO2 (2.4%)')
fuel.set_density('g/cm3', 10.29769)
fuel.add_nuclide("U234", 4.4843e-6)
fuel.add_nuclide("U235", 5.5815e-4)
fuel.add_nuclide("U238", 2.2408e-2)
fuel.add_nuclide("O16", 4.5829e-2)
clad = openmc.Material(name='Zircaloy')
clad.set_density('g/cm3', 6.55)
clad.add_nuclide("Zr90", 2.1827e-2)
clad.add_nuclide("Zr91", 4.7600e-3)
clad.add_nuclide("Zr92", 7.2758e-3)
clad.add_nuclide("Zr94", 7.3734e-3)
clad.add_nuclide("Zr96", 1.1879e-3)
hot_water = openmc.Material(name='Hot borated water')
hot_water.set_density('g/cm3', 0.740582)
hot_water.add_nuclide("H1", 4.9457e-2)
hot_water.add_nuclide("O16", 2.4672e-2)
hot_water.add_nuclide("B10", 8.0042e-6)
hot_water.add_nuclide("B11", 3.2218e-5)
hot_water.add_s_alpha_beta('c_H_in_H2O')
b4c = openmc.Material()
b4c.add_element('B', 4.0)
b4c.add_nuclide('C0', 1.0)
b4c.set_density('g/cm3', 2.5)
# Define the materials file.
model.materials = (fuel, b4c, hot_water)
# Instantiate ZCylinder surfaces
pitch = 1.26
fuel_or = openmc.ZCylinder(x0=0, y0=0, R=0.39218, name='Fuel OR')
absorber_or = openmc.ZCylinder(x0=0, y0=0, R=0.45720, name='Absorber OR')
left = openmc.XPlane(x0=-pitch/2, name='left', boundary_type='reflective')
right = openmc.XPlane(x0=pitch/2, name='right', boundary_type='reflective')
bottom = openmc.YPlane(y0=-pitch/2, name='bottom',
boundary_type='reflective')
top = openmc.YPlane(y0=pitch/2, name='top', boundary_type='reflective')
# Instantiate Cells
fuel_pin = openmc.Cell(name='Fuel', fill=fuel)
absorber = openmc.Cell(name='Absorber', fill=b4c)
water = openmc.Cell(name='Water', fill=hot_water)
# Use surface half-spaces to define regions
fuel_pin.region = -fuel_or
absorber.region = +fuel_or & -absorber_or
water.region = +absorber_or & +left & -right & +bottom & -top
# Create root universe
model.geometry.root_universe = openmc.Universe(0, name='root universe')
model.geometry.root_universe.add_cells([fuel_pin, absorber, water])
model.settings.batches = 210
model.settings.inactive = 10
model.settings.particles = 1000
max_zernike_order = 10
flux_tallies = [openmc.Tally() for i in range(max_zernike_order, -1, -1)]
fission_tallies = [openmc.Tally() for i in range(max_zernike_order, -1, -1)]
zernike_filters = [openmc.ZernikeFilter(n, r=.63) for n in range(max_zernike_order, -1, -1)]
for fiss_tally,flux_tally,tfilter in zip(fission_tallies,
flux_tallies,
zernike_filters):
fiss_tally.scores = ['fission']
flux_tally.scores = ['flux']
fiss_tally.filters = [tfilter]
flux_tally.filters = [tfilter]
mesh = openmc.Mesh()
mesh.dimension = [100, 100]
mesh.lower_left = [-0.63, -0.63]
mesh.upper_right = [0.63, 0.63]
# Create mesh filter for tally
mesh_filter = openmc.MeshFilter(mesh)
# Create mesh tally to score flux and fission rate
mesh_tally = openmc.Tally(name='mesh_flux')
mesh_tally.filters = [mesh_filter]
mesh_tally.scores = ['flux', 'fission']
model.tallies = flux_tallies + fission_tallies + [mesh_tally]
model.run(openmc_exec="/Users/lindad/projects/Okapi/openmc/build/bin/openmc")
with openmc.StatePoint('statepoint.210.h5') as sp:
output_flux_tallies = [sp.tallies[ftally.id].get_pandas_dataframe()
for ftally in flux_tallies]
output_fission_tallies = [sp.tallies[ftally.id].get_pandas_dataframe()
for ftally in fission_tallies]
output_mesh_tally = sp.get_tally(id=mesh_tally.id)
fission_mesh = output_mesh_tally.get_slice(scores=['fission'])
flux_mesh = output_mesh_tally.get_slice(scores=['flux'])
flux_mesh.mean.shape = (100, 100)
fission_mesh.mean.shape = (100, 100)
fig = plt.subplot(121)
fig.imshow(flux_mesh.mean)
fig2 = plt.subplot(122)
fig2.imshow(fission_mesh.mean)
flux_coeffs = [ftally['mean'] for ftally in output_flux_tallies]
fission_coeffs = [ftally['mean'] for ftally in output_fission_tallies]
flux_zernike_polys = [Zernike(order, 0, 0, .63)
for order in range(max_zernike_order, -1, -1)]
fission_zernike_polys = [Zernike(order, 0, 0, .63)
for order in range(max_zernike_order, -1, -1)]
for poly, coeffs in zip(fission_zernike_polys, fission_coeffs):
poly.coefficients = coeffs
for poly, coeffs in zip(flux_zernike_polys, flux_coeffs):
poly.coefficients = coeffs
x = np.linspace(-.63, .63, 1000)
flux_evals = [np.zeros((x.size)) for poly in flux_zernike_polys]
fission_evals = [np.zeros((x.size)) for poly in fission_zernike_polys]
for index, xval in enumerate(x):
for fission_eval, poly in zip(fission_evals, fission_zernike_polys):
fission_eval[index] = poly(xval, 0)
for flux_eval, poly in zip(flux_evals, flux_zernike_polys):
flux_eval[index] = poly(xval, 0)
plt.rcParams['figure.figsize'] = [10, 10]
for i in range(10, -1, -2):
plt.plot(x, flux_evals[max_zernike_order - i], label=str(i))
plt.legend()
plt.rcParams['figure.figsize'] = [10, 10]
for i in range(10, -1, -1):
plt.plot(x, fission_evals[max_zernike_order - i], label=str(i))
plt.legend()
for i in range(max_zernike_order, -1, -2):
plt.plot(x, fission_evals[max_zernike_order - i], label=str(i))
plt.legend()
for i in range(max_zernike_order - 1, -1, -2):
plt.plot(x, fission_evals[max_zernike_order - i], label=str(i))
plt.legend()
def nonsmooth_fission_shape(r, phi):
fuel_radius = 0.39218
outer_radius = .63
if r <= fuel_radius:
return cos(r * pi / (2 * fuel_radius))
else:
return 0
import matplotlib.pyplot as plt
%matplotlib inline
import openmc
from openmc.examples import pwr_pin_cell
import numpy as np
from series.Zernike import Zernike
from math import cos, pi
max_radius = .63
max_zernike_order = 10
fission_zernike_polys = [Zernike(order, 0, 0, max_radius)
for order in range(max_zernike_order, -1, -1)]
import sympy as sp
def sp_fission_shape():
phi = sp.symbols('phi')
r = sp.symbols('r')
return sp.cos(r * sp.pi / 2)
def advanced_fission_shape(r, phi):
fuel_radius = 0.39218
if r < fuel_radius:
return cos(r * pi / (2 * fuel_radius))
else:
return 0
def fission_shape(r, phi):
fuel_radius = max_radius
return cos(r * pi / (2 * fuel_radius))
def linear(r, phi):
return r
def sp_linear():
r = sp.symbols('r')
return r
def constant(r, phi):
return 1
def sp_constant():
return 1
model_function = advanced_fission_shape
model_sym_function = fission_shape
plot_intervals = 100
r_array = np.linspace(0, max_radius, 100)
true_soln = np.zeros((r_array.size))
for poly in fission_zernike_polys:
poly.generateCoefficients(model_function, num_rintervals=100,
num_aintervals=20)
num_poly_evals = [np.zeros((r_array.size)) for poly in fission_zernike_polys]
for index,r in enumerate(r_array):
for poly_eval, poly in zip(num_poly_evals, fission_zernike_polys):
poly_eval[index] = poly(r, 0)
true_soln[index] = model_function(r, 0)
plt.rcParams['figure.figsize'] = [10, 10]
for i in range(10, -1, -10):
plt.plot(r_array, num_poly_evals[max_zernike_order-i], label=str(i))
plt.plot(r_array, true_soln, '--', label='true')
plt.legend()
for poly in fission_zernike_polys:
poly.symbolicCoefficients(model_sym_function())
poly_evals = [np.zeros((r_array.size)) for poly in fission_zernike_polys]
for index,r in enumerate(r_array):
for poly_eval, poly in zip(poly_evals, fission_zernike_polys):
poly_eval[index] = poly.symbolicEval(r, 0)
true_soln[index] = model_function(r, 0)
plt.rcParams['figure.figsize'] = [10, 10]
for i in range(max_zernike_order, -1, -2):
plt.plot(r_array, poly_evals[max_zernike_order-i], label=str(i))
plt.plot(r_array, true_soln, '--', label='true')
plt.legend()
plt.ylim(.98, 1.02)
def trap_integrate_cylindrical(num_rintervals, num_aintervals, function):
integral = 0
dr = 1 / num_rintervals
dphi = 2 * pi / num_aintervals
num_rpoints = num_rintervals + 1
num_apoints = num_aintervals + 1
for i in range(num_rpoints):
for j in range(num_apoints):
w = 1
if i == 0 or i == num_rpoints - 1:
w *= .5
if j == 0 or j == num_apoints - 1:
w *= .5
integral += w * function(i * dr, j * dphi) * i * dr * dr * dphi
return 1 / pi * integral
def test(r, phi):
    return r
trap_integrate_cylindrical(1, 1, constant)
trap_integrate_cylindrical(10, 10, test)
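A quick sanity check of the trapezoidal disk integrator above — re-declared here so the snippet stands alone: with the 1/π normalization, f(r, φ) = 1 should integrate to exactly 1 on the unit disk, and f(r, φ) = r should come out close to 2/3.

```python
from math import pi

def trap_integrate_cylindrical(num_rintervals, num_aintervals, function):
    # Trapezoidal rule on the unit disk in (r, phi), normalized by 1/pi
    # (same scheme as the function above: endpoint weights are halved).
    integral = 0.0
    dr = 1.0 / num_rintervals
    dphi = 2 * pi / num_aintervals
    for i in range(num_rintervals + 1):
        for j in range(num_aintervals + 1):
            w = 1.0
            if i in (0, num_rintervals):
                w *= 0.5
            if j in (0, num_aintervals):
                w *= 0.5
            integral += w * function(i * dr, j * dphi) * i * dr * dr * dphi
    return integral / pi

r1 = trap_integrate_cylindrical(200, 64, lambda r, phi: 1.0)
r2 = trap_integrate_cylindrical(200, 64, lambda r, phi: r)
print(r1, r2)  # ~1.0 and ~2/3
```

The constant case is exact because the radial integrand is linear in r, so the trapezoidal rule incurs no error there.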
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To begin, let us first create a simple model. The model will be a slab of fuel material with reflective boundary conditions in the x- and y-directions and vacuum boundaries in the z-direction. However, to make the distribution slightly more interesting, we'll put some B<sub>4</sub>C in the middle of the slab.
Step2: For the starting source, we'll use a uniform distribution over the entire box geometry.
Step3: Defining the tally is relatively straightforward. One simply needs to list 'flux' as a score and then add an expansion filter. For this case, we will want to use the SpatialLegendreFilter class which multiplies tally scores by Legendre polynomials evaluated on normalized spatial positions along an axis.
Step4: The last thing we need to do is create a Tallies collection and export the entire model, which we'll do using the Model convenience class.
Step5: Running a simulation is now as simple as calling the run() method of Model.
Step6: Now that the run is finished, we need to load the results from the statepoint file.
Step7: We've used the get_pandas_dataframe() method that returns tally data as a Pandas dataframe. Let's see what the raw data looks like.
Step8: Since the expansion coefficients are given as
Step9: To plot the flux distribution, we can use the numpy.polynomial.Legendre class which represents a truncated Legendre polynomial series. Since we really want to plot $\phi(z)$ and not $\phi(z')$ we first need to perform a change of variables. Since
Step10: Let's plot it and see how our flux looks!
Step11: As you might expect, we get a rough cosine shape but with a flux depression in the middle due to the boron slab that we introduced. To get a more accurate distribution, we'd likely need to use a higher order expansion.
Step12: In addition to being able to tally Legendre moments, there are also functional expansion filters available for spherical harmonics (SphericalHarmonicsFilter) and Zernike polynomials over a unit disk (ZernikeFilter). A separate LegendreFilter class can also be used for determining Legendre scattering moments (i.e., an expansion of the scattering cosine, $\mu$).
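The coefficient formula in the steps above — $a_n = \frac{2n+1}{2}\int_{-1}^{1}\phi(z')P_n(z')\,dz'$ on the normalized interval — can be checked on a known function. This standalone NumPy sketch (not part of the OpenMC run) expands cos(πz'/2) in Legendre polynomials and reconstructs it:

```python
import numpy as np
from numpy.polynomial import legendre

order = 8
zp = np.linspace(-1.0, 1.0, 2001)   # normalized coordinate z'
f = np.cos(np.pi * zp / 2)

# a_n = (2n + 1)/2 * integral_{-1}^{1} f(z') P_n(z') dz'
a = np.empty(order + 1)
for n in range(order + 1):
    Pn = legendre.legval(zp, [0.0] * n + [1.0])   # P_n evaluated on the grid
    a[n] = (2 * n + 1) / 2 * np.trapz(f * Pn, zp)

recon = legendre.legval(zp, a)
print(np.max(np.abs(recon - f)))  # small reconstruction error
```

By symmetry the odd coefficients vanish, and an order-8 truncation already reproduces the cosine to well below plotting accuracy.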
|
10,661
|
<ASSISTANT_TASK:>
Python Code:
%tikz \draw (0,0) rectangle (1,1);
%%tikz --scale 2 --size 300,300 -f jpg
\draw (0,0) rectangle (1,1);
\filldraw (0.5,0.5) circle (.1);
%%tikz --scale 2 --size 300,300 -f svg
\draw (0,0) rectangle (1,1);
\filldraw (0.5,0.5) circle (.1);
%%tikz -s 400,400 -sc 1.2 -f png
\draw [style=help lines, step=2] (-1,-1) grid (+7,+7);
\draw [line width=0.5mm, fill=blue!40!white] (+2,+2) rectangle (+4,+4);
\draw [blue!60!white] ( 2, 2) node[anchor=north east] {$(i ,j )$};
\draw [blue!60!white] ( 4, 2) node[anchor=north west] {$(i+1,j )$};
\draw [blue!60!white] ( 4, 4) node[anchor=south west] {$(i+1,j+1)$};
\draw [blue!60!white] ( 2, 4) node[anchor=south east] {$(i ,j+1)$};
\filldraw [color=gray] (0,0) circle (.1);
\filldraw [color=gray] (0,2) circle (.1);
\filldraw [color=gray] (0,4) circle (.1);
\filldraw [color=gray] (0,6) circle (.1);
\filldraw [color=gray] (2,0) circle (.1);
\filldraw [color=black] (2,2) circle (.1);
\filldraw [color=black] (2,4) circle (.1);
\filldraw [color=gray] (2,6) circle (.1);
\filldraw [color=gray] (4,0) circle (.1);
\filldraw [color=black] (4,2) circle (.1);
\filldraw [color=black] (4,4) circle (.1);
\filldraw [color=gray] (4,6) circle (.1);
\filldraw [color=gray] (6,0) circle (.1);
\filldraw [color=gray] (6,2) circle (.1);
\filldraw [color=gray] (6,4) circle (.1);
\filldraw [color=gray] (6,6) circle (.1);
%%tikz -l arrows,matrix -f svg
\matrix (m) [matrix of math nodes, row sep=3em, column sep=4em] {
A & B \\
C & D \\
};
\path[-stealth, line width=.4mm]
(m-1-1) edge node [left ] {$ac$} (m-2-1)
(m-1-1) edge node [above] {$ab$} (m-1-2)
(m-1-2) edge node [right] {$bd$} (m-2-2)
(m-2-1) edge node [below] {$cd$} (m-2-2);
%%tikz -l arrows,matrix -f svg -svgsc 3
\matrix (m) [matrix of math nodes, row sep=3em, column sep=4em] {
A & B \\
C & D \\
};
\path[-stealth, line width=.4mm]
(m-1-1) edge node [left ] {$ac$} (m-2-1)
(m-1-1) edge node [above] {$ab$} (m-1-2)
(m-1-2) edge node [right] {$bd$} (m-2-2)
(m-2-1) edge node [below] {$cd$} (m-2-2);
%%tikz -f svg -s 800,500
\begin{scope}%[xshift=16cm, yshift=4cm]
\tkzInit[xmax=15, ymax=10]
\tkzAxeXY
\tkzGrid[sub,color=yellow]
%----------------------------------------------------------
% Defining coordinates
%----------------------------------------------------------
\tkzDefPoint(3,2){A}
\tkzDefPoint(12,5){B}
\tkzLabelPoints[above left](B,A)
\tkzDefPoint(11, 1){C}
% draw red dots at the center of edges
\tkzDrawPoints[size=10, color=red, fill=red](A,B)
%----------------------------------------------------------
% Drawing the lines and segments
%----------------------------------------------------------
\tkzDrawLine[color=red,line width=3pt, add=-1.1 and -.1 ](C,A)
\tkzDrawLine[color=red,line width=3pt, add=-1.1 and -.1 ](C,B)
\tkzDrawLine(A,B)
\tkzDrawLines[dashed](A,C B,C)
% drawing arcs for angles
\tkzMarkAngle[size=1](C,A,B)
\tkzLabelAngle[pos=1.2,circle](C,A,B){$\mathsf{\phi}$}
\tkzDefPointWith[linear,K=1.5](A,C)
\tkzGetPoint{D}
\tkzDefPointWith[linear,K=.75](B,C)
\tkzGetPoint{E}
\tkzMarkAngle[size=1](D,C,E)
\tkzLabelAngle[pos=1.2,circle](D,C,E){$\mathsf{\theta}$}
%----------------------------------------------------------
% Drawing normals
%----------------------------------------------------------
\tkzDefLine[perpendicular=through A, K=.1](C,A)
\tkzGetPoint{a1}
\tkzDefLine[perpendicular=through A, K=-.7](C,A)
\tkzGetPoint{a2}
\tkzDefLine[perpendicular=through B, K=1.5](C,B)
\tkzGetPoint{b1}
\tkzDefLine[perpendicular=through B, K=-.1](C,B)
\tkzGetPoint{b2}
\tkzInterLL(a1,a2)(b1,b2) \tkzGetPoint{H}
\tkzMarkRightAngle[size=.2](H,A,C)
\tkzMarkRightAngle[size=.2](H,B,C)
\tkzDrawLines[dashed](a1,a2 b1,b2)
%----------------------------------------------------------
% Drawing mediator and psi line
%----------------------------------------------------------
% \tkzDefLine[mediator](A,B) \tkzGetPoints{m1}{M}
\tkzDefMidPoint(A,B)
\tkzGetPoint{M}
\tkzDefLine[perpendicular=through M, K=.4](A,B)
\tkzGetPoint{m1}
% \tkzDefLine[perpendicular=through M, K=-.3](A,B)
% \tkzGetPoint{m2}
\tkzMarkRightAngle[size=.2](B,M,m1)
\tkzDrawLines[](m1,M M,H)
\tkzMarkAngle[size=1](m1,M,H)
\tkzLabelAngle[pos=1.8,circle](m1,M,H){$\mathsf{\psi}$}%= \phi -\theta/2
\pgfresetboundingbox
\path(0, 0) rectangle (13, 10);
\end{scope}
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: An example for scaling and exporting to a graphics format
Step2: An additional flag -svgsc allows scaling of the final image in the notebook view.
Step3: A more challenging example used in Figure 1 of this paper
|
10,662
|
<ASSISTANT_TASK:>
Python Code:
# Import some libraries that will be necessary for working with data and displaying plots
# To visualize plots in the notebook
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import scipy.io # To read matlab files
import pylab
X = np.array([0.15, 0.41, 0.53, 0.80, 0.89, 0.92, 0.95])
s = np.array([0.09, 0.16, 0.63, 0.44, 0.55, 0.82, 0.95])
# <SOL>
plt.figure()
plt.scatter(X, s)
plt.xlabel('x')
plt.ylabel('s')
plt.show()
# </SOL>
# wML = <FILL IN>
wML = np.sum(X*s) / np.sum(X*X)
print("The ML estimate is {}".format(wML))
sigma_eps = 1
K = len(s)
wGrid = np.arange(-0.5, 2, 0.01)
p = []
for w in wGrid:
d = s - X*w
# p.append(<FILL IN>)
p.append((1.0/(np.sqrt(2*np.pi)*sigma_eps))**K * np.exp(-np.dot(d, d) / (2*sigma_eps**2)))
# Compute the likelihood for the ML parameter wML
# d = <FILL IN>
d = s-X*wML
# pML = [<FILL IN>]
pML = [(1.0/(np.sqrt(2*np.pi)*sigma_eps))**K * np.exp(-np.dot(d, d) / (2*sigma_eps**2))]
# Plot the likelihood function and the optimal value
plt.figure()
plt.plot(wGrid, p)
plt.stem([wML], pML)
plt.xlabel('$w$')
plt.ylabel('Likelihood function')
plt.show()
xgrid = np.arange(0, 1.2, 0.01)
# sML = <FILL IN>
sML = wML * xgrid
plt.figure()
plt.scatter(X, s)
# plt.plot(<FILL IN>)
plt.plot(xgrid, sML)
plt.xlabel('x')
plt.ylabel('s')
plt.axis('tight')
plt.show()
X = np.array([0.15, 0.41, 0.53, 0.80, 0.89, 0.92, 0.95])
s = np.array([0.09, 0.16, 0.63, 0.44, 0.55, 0.82, 0.95])
sigma_eps = 0.3
# Compute the extended input matrix Z
nx = len(X)
# Z = <FILL IN>
Z = np.hstack((np.ones((nx, 1)), X[:,np.newaxis], X[:,np.newaxis]**2))
# Compute the ML estimate using linalg.lstsq from Numpy.
# wML = <FILL IN>
wML = np.linalg.lstsq(Z, s)[0]
print(wML)
K = len(s)
# Compute the likelihood for the ML parameter wML
# d = <FILL IN>
d = s - np.dot(Z, wML)
# LwML = [<FILL IN>]
LwML = - K/2*np.log(2*np.pi*sigma_eps**2) - np.dot(d, d) / (2*sigma_eps**2)
print(LwML)
xgrid = np.arange(0, 1.2, 0.01)
nx = len(xgrid)
# Compute the input matrix for the grid data in x
# Z = <FILL IN>
Z = np.hstack((np.ones((nx, 1)), xgrid[:,np.newaxis], xgrid[:,np.newaxis]**2))
# sML = <FILL IN>
sML = np.dot(Z, wML)
plt.figure()
plt.scatter(X, s)
# plt.plot(<FILL IN>)
plt.plot(xgrid, sML)
plt.xlabel('x')
plt.ylabel('s')
plt.axis('tight')
plt.show()
K = len(s)
wGrid = np.arange(0, 6, 0.01)
p = []
Px = np.prod(X)
xs = np.dot(X,s)
for w in wGrid:
# p.append(<FILL IN>)
p.append((w**K)*Px*np.exp(-w*xs))
plt.figure()
# plt.plot(<FILL IN>)
plt.plot(wGrid, p)
plt.xlabel('$w$')
plt.ylabel('Likelihood function')
plt.show()
# wML = <FILL IN>
wML = float(K) / xs
print(wML)
xgrid = np.arange(0.1, 1.2, 0.01)
# sML = <FILL IN>
sML = 1 / (wML * xgrid)
plt.figure()
plt.scatter(X, s)
# plt.plot(<FILL IN>)
plt.plot(xgrid, sML)
plt.xlabel('x')
plt.ylabel('s')
plt.axis('tight')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A quick note on the mathematical notation
Step2: 4.1. Represent a scatter plot of the data points
Step3: 4.2. Compute the ML estimate
Step4: 4.3. Plot the likelihood as a function of parameter $w$ along the interval $-0.5\le w \le 2$, verifying that the ML estimate takes the maximum value.
Step5: 4.4. Plot the prediction function on top of the data scatter plot
Step6: 2.1. Model assumptions
Step7: have been generated by the polynomial Gaussian model
Step8: 5.1. Compute the ML estimate.
Step9: 5.2. Compute the value of the log-likelihood function for ${\bf w}={\bf w}_\text{ML}$.
Step10: 5.3. Plot the prediction function over the data scatter plot
Step11: Exercise 6
Step12: 6.3. Determine the maximum likelihood coefficient, $w_\text{ML}$.
Step13: 6.5. Assuming $w = w_\text{ML}$, compute the prediction function based on the estimate $s_{MSE}$
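For exercise 6 above, the closed-form estimate $w_\text{ML} = K / \sum_k x^{(k)} s^{(k)}$ can be double-checked numerically: the log-likelihood $L(w) = K\log w + \sum_k \log x^{(k)} - w\sum_k x^{(k)} s^{(k)}$ should peak at that value. A small sketch with the same data (a cross-check, not part of the exercise solution):

```python
import numpy as np

X = np.array([0.15, 0.41, 0.53, 0.80, 0.89, 0.92, 0.95])
s = np.array([0.09, 0.16, 0.63, 0.44, 0.55, 0.82, 0.95])
K = len(s)

w_ml = K / np.dot(X, s)  # closed-form ML estimate

# Grid search over the log-likelihood as an independent cross-check.
w_grid = np.linspace(0.01, 10, 100000)
loglik = K * np.log(w_grid) + np.sum(np.log(X)) - w_grid * np.dot(X, s)
w_grid_ml = w_grid[np.argmax(loglik)]

print(w_ml, w_grid_ml)  # should agree to grid resolution
```

Both routes land on the same maximizer, confirming the derivative condition $K/w - \sum_k x^{(k)} s^{(k)} = 0$.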
|
10,663
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import scipy.io
import scipy.signal as signal
from matplotlib import pyplot as plt
from pyeeg import dfa as dfa
def readFilePerceptualCrossing(filename):
data = scipy.io.loadmat(filename)
size = len(data['dataSeries'])
series = [data['dataSeries'][i][0] for i in range(size)]
series = np.array(series)[:,:,0]
series = signal.decimate(series, 10, zero_phase=True)
series = np.diff(series)
oppType = [data['dataOpponentType'][i][0] for i in range(size)]
oppType = np.array(oppType)[:,0]
return [series, oppType]
# Read data
[vel_player , oppTypes] = readFilePerceptualCrossing('dataPC-player.mat')
[vel_opponent, oppTypes] = readFilePerceptualCrossing('dataPC-opponent.mat')
[vel_relative, oppTypes] = readFilePerceptualCrossing('dataPC-distance.mat')
indexOscill = [i for i, x in enumerate(oppTypes) if x=="Oscillatory"]
indexShadow = [i for i, x in enumerate(oppTypes) if x=="Shadow"]
indexHuman = [i for i, x in enumerate(oppTypes) if x=="Human"]
series = vel_player
# Plot figures
plt.figure(figsize=(16, 4), dpi=72)
indexExamples = [60, 7, 11];
for i,ex in enumerate(indexExamples):
x = series[ex,:]
ax = plt.subplot(1,3,(i+1))
plt.title(oppTypes[ex]+r" ($\mu$={:0.2f}".format(np.mean(x))+r", $\sigma^2$={:0.2f}".format(np.var(x))+")")
ax.set(xlabel="Time", ylabel="Velocity", )
plt.plot(x);
# Calculate the average velocity of each serie
vel_stats=np.std(vel_player,axis=1) # velocity of the player
#vel_stats=np.std(vel_opponent,axis=1) # velocity of the opponent
#vel_stats=np.std(vel_relative,axis=1) # relative velocity between player
# Plot figure
plt.figure(figsize=(16, 4), dpi=72)
dataBox = [vel_stats[indexOscill], vel_stats[indexShadow], vel_stats[indexHuman]]
plt.boxplot(dataBox);
plt.ylabel("Average velocity")
plt.xticks([1, 2, 3], ['Oscillatory', 'Shadow', 'Human']);
def plot_dfa_perceptual(x, precision, title, drawPlot):
ix = np.arange(np.log2(len(x)/4), 4, -precision)
n = np.round(2**ix)
[_, n, F] = dfa(x, L=n)
n = n/115 # Time (seconds) = samples / sample_frequency
indexes = (n>10**-0.5)&(n<10**0.5) # Time interval for calculating the slope
P = np.polyfit(np.log(n[indexes]),np.log(F[indexes]), 1)
beta = 2*P[0]-1 # beta=2*alpha-1
if drawPlot:
plt.title(title+r" ($\beta$ = {:0.2f})".format(beta))
plt.xlabel('n')
plt.ylabel('F(n)')
plt.loglog(n, F)
plt.loglog(n[indexes], np.power(n[indexes], P[0])*np.exp(P[1]), 'r')
return [beta, n, F]
# Plot figures
series = vel_player
plt.figure(figsize=(16, 4), dpi=72)
indexExamples = [60, 7, 11];
for i,ex in enumerate(indexExamples):
x = series[ex,:]
ax = plt.subplot(1,3,(i+1))
plot_dfa_perceptual(x, 0.1, oppTypes[ex], True);
# Calculate the average velocity of each serie
series = vel_player
betas = np.zeros(len(series));
for i in range(len(series)):
[beta,_,_] = plot_dfa_perceptual(series[i,:], 0.5, oppTypes[i], False)
betas[i] = beta
# Plot figures
plt.figure(figsize=(16, 4), dpi=72)
dataBox = [betas[indexOscill], betas[indexShadow], betas[indexHuman]]
plt.boxplot(dataBox);
plt.ylabel(r'$\beta$');
plt.xticks([1, 2, 3], ['Oscillatory', 'Shadow', 'Human']);
# Data
series = vel_relative
# Plot figures
plt.figure(figsize=(16, 4), dpi=72)
indexExamples = [60, 7, 11];
for i,ex in enumerate(indexExamples):
ax = plt.subplot(1,3,(i+1))
plot_dfa_perceptual(series[ex,:], 0.1, oppTypes[ex], True);
# Data
series = vel_relative
# Plot figures
plt.figure(figsize=(16, 4), dpi=72)
indexExamples = [0];
for i in range(len(series)):
[beta,_,_] = plot_dfa_perceptual(series[i,:], 0.5, oppTypes[i], False)
betas[i] = beta
dataBox = [betas[indexOscill], betas[indexShadow], betas[indexHuman]]
plt.boxplot(dataBox);
plt.ylabel(r'$\beta$');
plt.xticks([1, 2, 3], ['Oscillatory', 'Shadow', 'Human']);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can display the box-plot of the velocity to check if there are differences between groups.
Step2: 3. Fractal analysis
Step3: Now, we display the boxplot of the results to get a statistical overview. For the cases of the derivative of the player's position or the opponent's position, we cannot confirm a statistical difference between the distributions of β.
Step4: 4. Interaction measures
Step5: The boxplot displays statistical differences.
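The DFA exponent α used above (with β = 2α − 1) can be illustrated without pyeeg. This standalone sketch implements the basic DFA recipe — integrate the mean-removed series, detrend fixed-size windows, fit log F(n) versus log n — and checks that white noise yields α ≈ 0.5, i.e. β ≈ 0. Names here are illustrative, not taken from the analysis code above.

```python
import numpy as np

def dfa_alpha(x, scales):
    # Integrate the (mean-removed) series, linearly detrend windows of
    # each size n, and fit the scaling exponent alpha of F(n) ~ n**alpha.
    y = np.cumsum(x - np.mean(x))
    F = []
    for n in scales:
        n_seg = len(y) // n
        resid2 = []
        for i in range(n_seg):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            resid2.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(resid2)))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(0)
alpha = dfa_alpha(rng.standard_normal(4096), [16, 32, 64, 128, 256])
beta = 2 * alpha - 1
print(alpha, beta)  # alpha near 0.5, beta near 0 for white noise
```

Long-range-correlated signals would instead give α above 0.5 (β above 0), which is what separates the opponent conditions in the boxplots.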
|
10,664
|
<ASSISTANT_TASK:>
Python Code:
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print("Downloaded " + local)
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkstats2.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkplot.py")
import numpy as np
import random
import thinkstats2
import thinkplot
class HypothesisTest(object):
def __init__(self, data):
self.data = data
self.MakeModel()
self.actual = self.TestStatistic(data)
def PValue(self, iters=1000):
self.test_stats = [self.TestStatistic(self.RunModel())
for _ in range(iters)]
count = sum(1 for x in self.test_stats if x >= self.actual)
return count / iters
def TestStatistic(self, data):
raise UnimplementedMethodException()
def MakeModel(self):
pass
def RunModel(self):
raise UnimplementedMethodException()
class CoinTest(HypothesisTest):
def TestStatistic(self, data):
heads, tails = data
test_stat = abs(heads - tails)
return test_stat
def RunModel(self):
heads, tails = self.data
n = heads + tails
sample = [random.choice('HT') for _ in range(n)]
hist = thinkstats2.Hist(sample)
data = hist['H'], hist['T']
return data
ct = CoinTest((140, 110))
pvalue = ct.PValue()
pvalue
class DiffMeansPermute(thinkstats2.HypothesisTest):
def TestStatistic(self, data):
group1, group2 = data
test_stat = abs(group1.mean() - group2.mean())
return test_stat
def MakeModel(self):
group1, group2 = self.data
self.n, self.m = len(group1), len(group2)
self.pool = np.hstack((group1, group2))
def RunModel(self):
np.random.shuffle(self.pool)
data = self.pool[:self.n], self.pool[self.n:]
return data
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/nsfg.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/first.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dct")
download(
"https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dat.gz"
)
import first
live, firsts, others = first.MakeFrames()
data = firsts.prglngth.values, others.prglngth.values
ht = DiffMeansPermute(data)
pvalue = ht.PValue()
pvalue
ht.PlotCdf()
thinkplot.Config(xlabel='test statistic',
ylabel='CDF')
class DiffMeansOneSided(DiffMeansPermute):
def TestStatistic(self, data):
group1, group2 = data
test_stat = group1.mean() - group2.mean()
return test_stat
ht = DiffMeansOneSided(data)
pvalue = ht.PValue()
pvalue
class DiffStdPermute(DiffMeansPermute):
def TestStatistic(self, data):
group1, group2 = data
test_stat = group1.std() - group2.std()
return test_stat
ht = DiffStdPermute(data)
pvalue = ht.PValue()
pvalue
class CorrelationPermute(thinkstats2.HypothesisTest):
def TestStatistic(self, data):
xs, ys = data
test_stat = abs(thinkstats2.Corr(xs, ys))
return test_stat
def RunModel(self):
xs, ys = self.data
xs = np.random.permutation(xs)
return xs, ys
cleaned = live.dropna(subset=['agepreg', 'totalwgt_lb'])
data = cleaned.agepreg.values, cleaned.totalwgt_lb.values
ht = CorrelationPermute(data)
pvalue = ht.PValue()
pvalue
ht.actual, ht.MaxTestStat()
class DiceTest(thinkstats2.HypothesisTest):
def TestStatistic(self, data):
observed = data
n = sum(observed)
expected = np.ones(6) * n / 6
test_stat = sum(abs(observed - expected))
return test_stat
def RunModel(self):
n = sum(self.data)
values = [1, 2, 3, 4, 5, 6]
rolls = np.random.choice(values, n, replace=True)
hist = thinkstats2.Hist(rolls)
freqs = hist.Freqs(values)
return freqs
data = [8, 9, 19, 5, 8, 11]
dt = DiceTest(data)
pvalue = dt.PValue(iters=10000)
pvalue
class DiceChiTest(DiceTest):
def TestStatistic(self, data):
observed = data
n = sum(observed)
expected = np.ones(6) * n / 6
test_stat = sum((observed - expected)**2 / expected)
return test_stat
dt = DiceChiTest(data)
pvalue = dt.PValue(iters=10000)
pvalue
class PregLengthTest(thinkstats2.HypothesisTest):
def MakeModel(self):
firsts, others = self.data
self.n = len(firsts)
self.pool = np.hstack((firsts, others))
pmf = thinkstats2.Pmf(self.pool)
self.values = range(35, 44)
self.expected_probs = np.array(pmf.Probs(self.values))
def RunModel(self):
np.random.shuffle(self.pool)
data = self.pool[:self.n], self.pool[self.n:]
return data
def TestStatistic(self, data):
firsts, others = data
stat = self.ChiSquared(firsts) + self.ChiSquared(others)
return stat
def ChiSquared(self, lengths):
hist = thinkstats2.Hist(lengths)
observed = np.array(hist.Freqs(self.values))
expected = self.expected_probs * len(lengths)
stat = sum((observed - expected)**2 / expected)
return stat
data = firsts.prglngth.values, others.prglngth.values
ht = PregLengthTest(data)
p_value = ht.PValue()
print('p-value =', p_value)
print('actual =', ht.actual)
print('ts max =', ht.MaxTestStat())
def FalseNegRate(data, num_runs=1000):
    """Computes the chance of a false negative based on resampling.

    data: pair of sequences
    num_runs: how many experiments to simulate
    returns: float false negative rate
    """
group1, group2 = data
count = 0
for i in range(num_runs):
sample1 = thinkstats2.Resample(group1)
sample2 = thinkstats2.Resample(group2)
ht = DiffMeansPermute((sample1, sample2))
p_value = ht.PValue(iters=101)
if p_value > 0.05:
count += 1
return count / num_runs
neg_rate = FalseNegRate(data)
neg_rate
# Solution
def RunTests(live, iters=1000):
    """Runs the tests from Chapter 9 with a subset of the data.

    live: DataFrame
    iters: how many iterations to run
    """
n = len(live)
firsts = live[live.birthord == 1]
others = live[live.birthord != 1]
# compare pregnancy lengths
data = firsts.prglngth.values, others.prglngth.values
ht = DiffMeansPermute(data)
p1 = ht.PValue(iters=iters)
data = (firsts.totalwgt_lb.dropna().values,
others.totalwgt_lb.dropna().values)
ht = DiffMeansPermute(data)
p2 = ht.PValue(iters=iters)
# test correlation
live2 = live.dropna(subset=['agepreg', 'totalwgt_lb'])
data = live2.agepreg.values, live2.totalwgt_lb.values
ht = CorrelationPermute(data)
p3 = ht.PValue(iters=iters)
# compare pregnancy lengths (chi-squared)
data = firsts.prglngth.values, others.prglngth.values
ht = PregLengthTest(data)
p4 = ht.PValue(iters=iters)
print('%d\t%0.2f\t%0.2f\t%0.2f\t%0.2f' % (n, p1, p2, p3, p4))
# Solution
n = len(live)
for _ in range(7):
sample = thinkstats2.SampleRows(live, n)
RunTests(sample)
n //= 2
# Solution
# My results:
# test1: difference in mean pregnancy length
# test2: difference in mean birth weight
# test3: correlation of mother's age and birth weight
# test4: chi-square test of pregnancy length
# n test1 test2 test2 test4
# 9148 0.16 0.00 0.00 0.00
# 4574 0.10 0.01 0.00 0.00
# 2287 0.25 0.06 0.00 0.00
# 1143 0.24 0.03 0.39 0.03
# 571 0.81 0.00 0.04 0.04
# 285 0.57 0.41 0.48 0.83
# 142 0.45 0.08 0.60 0.04
# Conclusion: As expected, tests that are positive with large sample
# sizes become negative as we take away data. But the pattern is
# erratic, with some positive tests even at small sample sizes.
# Solution
class DiffMeansResample(DiffMeansPermute):
    """Tests a difference in means using resampling."""
def RunModel(self):
        """Run the model of the null hypothesis.

        returns: simulated data
        """
group1 = np.random.choice(self.pool, self.n, replace=True)
group2 = np.random.choice(self.pool, self.m, replace=True)
return group1, group2
# Solution
def RunResampleTest(firsts, others):
    """Tests differences in means by resampling.

    firsts: DataFrame
    others: DataFrame
    """
data = firsts.prglngth.values, others.prglngth.values
ht = DiffMeansResample(data)
p_value = ht.PValue(iters=10000)
print('\ndiff means resample preglength')
print('p-value =', p_value)
print('actual =', ht.actual)
print('ts max =', ht.MaxTestStat())
data = (firsts.totalwgt_lb.dropna().values,
others.totalwgt_lb.dropna().values)
ht = DiffMeansPermute(data)
p_value = ht.PValue(iters=10000)
print('\ndiff means resample birthweight')
print('p-value =', p_value)
print('actual =', ht.actual)
print('ts max =', ht.MaxTestStat())
# Solution
RunResampleTest(firsts, others)
# Solution
# Conclusions: Using resampling instead of permutation has very
# little effect on the results.
# The two models are based on slightly difference assumptions, and in
# this example there is no compelling reason to choose one or the other.
# But in general p-values depend on the choice of the null hypothesis;
# different models can yield very different results.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hypothesis testing
Step2: And here's an example that uses it to compute the p-value of an experiment where we toss a coin 250 times and get 140 heads.
Step3: The p-value turns out to be about 7%, which is considered on the border of statistical significance.
Step4: Permutation test
Step5: Here's an example where we test the observed difference in pregnancy length for first babies and others.
Step6: The p-value is about 17%, which means it is plausible that the observed difference is just the result of random sampling, and might not be generally true in the population.
Step7: Here's the distrubution of the test statistic (the difference in means) over many simulated samples
Step8: Under the null hypothesis, we often see differences bigger than the observed difference.
Step9: If the hypothesis under test is that first babies come late, the appropriate test statistic is the raw difference between first babies and others, rather than the absolute value of the difference. In that case, the p-value is smaller, because we are testing a more specific hypothesis.
Step10: But in this example, the result is still not statistically significant.
Step11: But that's not statistically significant either.
Step12: Here's an example testing the correlation between birth weight and mother's age.
Step13: The reported p-value is 0, which means that in 1000 trials we didn't see a correlation, under the null hypothesis, that exceeded the observed correlation. That means that the p-value is probably smaller than $1/1000$, but it is not actually 0.
Step14: Testing proportions
Step15: Here's an example using the data from the book
Step16: The observed deviance from the expected values is not statistically significant.
Step17: Using this test, we get a smaller p-value
Step18: Taking this result at face value, we might consider the data statistically significant, but considering the results of both tests, I would not draw any strong conclusions.
Step19: If we specifically test the deviations of first babies and others from the expected number of births in each week of pregnancy, the results are statistically significant with a very small p-value. But at this point we have run so many tests, we should not be surprised to find at least one that seems significant.
Step21: Power
Step23: In this example, the false negative rate is 70%, which means that the power of the test (probability of statistical significance if the actual difference is 0.078 weeks) is only 30%.
Step27: Exercise
|
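The permutation model used above can be sketched independently of the thinkstats2 HypothesisTest classes. This is a minimal, hypothetical re-implementation of the two-sided difference-in-means permutation test, not the book's code:

```python
import numpy as np

def diff_means_permute_pvalue(group1, group2, iters=1000, seed=0):
    """Two-sided p-value for the observed difference in means under the
    null hypothesis that both groups come from the same distribution."""
    rng = np.random.default_rng(seed)
    pool = np.concatenate([group1, group2])
    n = len(group1)
    actual = abs(np.mean(group1) - np.mean(group2))
    count = 0
    for _ in range(iters):
        rng.shuffle(pool)                          # permute the pooled values
        test_stat = abs(pool[:n].mean() - pool[n:].mean())
        if test_stat >= actual:
            count += 1
    return count / iters

# Two samples from the same distribution: the p-value should usually be large.
rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 100)
b = rng.normal(0.0, 1.0, 100)
p = diff_means_permute_pvalue(a, b)
```

Swapping `rng.shuffle` for `rng.choice(pool, size, replace=True)` turns this into the resampling model tested above.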
10,665
|
<ASSISTANT_TASK:>
Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
N = UNITS.newton
params = Params(v_init = 0 * m / s,
g = 9.8 * m/s**2,
M = 75 * kg, # mass of jumper
m_cord = 75 * kg, # mass of cord
area = 1 * m**2, # frontal area of jumper
rho = 1.2 * kg/m**3, # density of air
v_term = 60 * m / s, # terminal velocity of jumper
L = 25 * m, # length of cord
k = 40 * N / m) # spring constant of cord
def make_system(params):
Makes a System object for the given params.
params: Params object
returns: System object
M, m_cord = params.M, params.m_cord
g, rho, area = params.g, params.rho, params.area
v_init, v_term = params.v_init, params.v_term
# back out the coefficient of drag
C_d = 2 * M * g / (rho * area * v_term**2)
mu = m_cord / M
init = State(y=0*m, v=v_init)
t_end = 10 * s
return System(params, C_d=C_d, mu=mu,
init=init, t_end=t_end)
system = make_system(params)
def drag_force(v, system):
Computes drag force in the opposite direction of `v`.
v: velocity
returns: drag force in N
rho, C_d, area = system.rho, system.C_d, system.area
f_drag = -np.sign(v) * rho * v**2 * C_d * area / 2
return f_drag
drag_force(20 * m/s, system)
def cord_acc(y, v, system):
Computes the acceleration of the jumper due to the bungee cord.
y: height of the jumper
v: velocity of the jumper
returns: acceleration in m/s**2
L, mu = system.L, system.mu
a_cord = -v**2 / 2 / (2*L/mu + (L+y))
return a_cord
y = -20 * m
v = -20 * m/s
cord_acc(y, v, system)
def slope_func1(state, t, system):
Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing g, rho,
C_d, area, and mass
returns: derivatives of y and v
y, v = state
M, g = system.M, system.g
a_drag = drag_force(v, system) / M
a_cord = cord_acc(y, v, system)
dvdt = -g + a_cord + a_drag
return v, dvdt
slope_func1(system.init, 0, system)
def event_func(state, t, system):
Run until y=-L.
state: position, velocity
t: time
system: System object containing g, rho,
C_d, area, and mass
returns: difference between y and -L
y, v = state
return y + system.L
event_func(system.init, 0, system)
results, details = run_ode_solver(system, slope_func1, events=event_func)
details.message
t_final = get_last_label(results)
def plot_position(results, **options):
plot(results.y, **options)
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
min(results.y)
def plot_velocity(results):
plot(results.v, color='C1', label='v')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
plot_velocity(results)
min(results.v)
a = gradient(results.v)
plot(a)
decorate(xlabel='Time (s)',
ylabel='Acceleration (m/$s^2$)')
max_acceleration = max(abs(a)) * m/s**2 / params.g
def max_acceleration(system):
mu = system.mu
return 1 + mu * (4+mu) / 8
max_acceleration(system)
def sweep_m_cord(m_cord_array, params):
sweep = SweepSeries()
for m_cord in m_cord_array:
system = make_system(Params(params, m_cord=m_cord))
results, details = run_ode_solver(system, slope_func1, events=event_func)
min_velocity = min(results.v) * m/s
sweep[m_cord.magnitude] = min_velocity
return sweep
m_cord_array = linspace(1, 201, 21) * kg
sweep = sweep_m_cord(m_cord_array, params)
plot(sweep)
decorate(xlabel='Mass of cord (kg)',
ylabel='Fastest downward velocity (m/s)')
def spring_force(y, system):
Computes the force of the bungee cord on the jumper:
y: height of the jumper
Uses these variables from system:
y_attach: height of the attachment point
L: resting length of the cord
k: spring constant of the cord
returns: force in N
L, k = system.L, system.k
distance_fallen = -y
extension = distance_fallen - L
f_spring = k * extension
return f_spring
spring_force(-25*m, system)
spring_force(-26*m, system)
def slope_func2(state, t, system):
Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing g, rho,
C_d, area, and mass
returns: derivatives of y and v
y, v = state
M, g = system.M, system.g
a_drag = drag_force(v, system) / M
a_spring = spring_force(y, system) / M
dvdt = -g + a_drag + a_spring
return v, dvdt
system1 = make_system(params)
event_func.direction=-1
results1, details1 = run_ode_solver(system1, slope_func1, events=event_func)
print(details1.message)
t_final = get_last_label(results1)
init2 = results1.row[t_final]
system2 = System(system1, t_0=t_final, init=init2)
event_func.direction=+1
results2, details2 = run_ode_solver(system2, slope_func2, events=event_func)
print(details2.message)
t_final = get_last_label(results2)
plot_position(results1, label='Phase 1')
plot_position(results2, label='Phase 2')
min(results2.y)
def simulate_system2(params):
system1 = make_system(params)
event_func.direction=-1
results1, details1 = run_ode_solver(system1, slope_func1, events=event_func)
t_final = get_last_label(results1)
init2 = results1.row[t_final]
system2 = System(system1, t_0=t_final, init=init2)
results2, details2 = run_ode_solver(system2, slope_func2, events=event_func)
t_final = get_last_label(results2)
return TimeFrame(pd.concat([results1, results2]))
results = simulate_system2(params);
plot_position(results)
params_no_cord = Params(params, m_cord=1*kg)
results_no_cord = simulate_system2(params_no_cord);
plot_position(results, label='m_cord = 75 kg')
plot_position(results_no_cord, label='m_cord = 1 kg')
savefig('figs/jump.png')
min(results_no_cord.y)
diff = min(results.y) - min(results_no_cord.y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bungee jumping
Step3: Now here's a version of make_system that takes a Params object as a parameter.
Step4: Let's make a System
Step6: drag_force computes drag as a function of velocity
Step7: Here's drag force at 20 m/s.
Step9: The following function computes the acceleration of the jumper due to tension in the cord.
Step10: Here's acceleration due to tension in the cord if we're going 20 m/s after falling 20 m.
Step12: Now here's the slope function
Step13: As always, let's test the slope function with the initial params.
Step15: We'll need an event function to stop the simulation when we get to the end of the cord.
Step16: We can test it with the initial conditions.
Step17: And then run the simulation.
Step18: Here's how long it takes to drop 25 meters.
Step19: Here's the plot of position as a function of time.
Step20: We can use min to find the lowest point
Step21: Here's velocity as a function of time
Step22: Velocity when we reach the end of the cord.
Step23: Although we compute acceleration inside the slope function, we don't get acceleration as a result from run_ode_solver.
Step24: The maximum downward acceleration, as a factor of g
Step25: Using Equation (1) from Heck, Uylings, and Kędzierska, we can compute the peak acceleration due to interaction with the cord, neglecting drag.
Step26: If you set C_d=0, the simulated acceleration approaches the theoretical result, although you might have to reduce max_step to get a good numerical estimate.
Step27: Here's what it looks like. As expected, a heavier cord gets the jumper going faster.
Step29: Phase 2
Step30: The spring force is 0 until the cord is fully extended. When it is extended 1 m, the spring force is 40 N.
Step32: The slope function for Phase 2 includes the spring force, and drops the acceleration due to the cord.
Step33: I'll run Phase 1 again so we can get the final state.
Step34: Now I need the final time, position, and velocity from Phase 1.
Step35: And that gives me the starting conditions for Phase 2.
Step36: Here's how we run Phase 2, setting the direction of the event function so it doesn't stop the simulation immediately.
Step37: We can plot the results on the same axes.
Step38: And get the lowest position from Phase 2.
Step39: To see how big the effect of the cord is, I'll collect the previous code in a function.
Step40: Now we can run both phases and get the results in a single TimeFrame.
|
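The two-phase simulation above (free fall until the cord is fully paid out, then a spring-like stretch) can be sketched with scipy's event handling instead of modsim. The constants mirror the Params above, but drag and cord mass are deliberately dropped for brevity, so the numbers are illustrative only:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constants (SI units), mirroring the Params above;
# drag and cord mass are ignored in this sketch.
g, L, k, M = 9.8, 25.0, 40.0, 75.0

def phase1(t, y):
    # Free fall: state y = [position, velocity]
    return [y[1], -g]

def phase2(t, y):
    # Cord engaged: spring force only while stretched (y below -L)
    stretch = max(-y[0] - L, 0.0)
    return [y[1], -g + k * stretch / M]

def cord_taut(t, y):
    # Event fires when the jumper falls to y = -L
    return y[0] + L
cord_taut.terminal = True
cord_taut.direction = -1

r1 = solve_ivp(phase1, (0.0, 10.0), [0.0, 0.0], events=cord_taut, max_step=0.01)
t0 = r1.t[-1]                          # time the cord becomes taut (~2.26 s)
r2 = solve_ivp(phase2, (t0, t0 + 10.0), r1.y[:, -1], max_step=0.01)
lowest = r2.y[0].min()                 # lowest point during the stretch phase
```

With these illustrative constants the jumper bottoms out well below the natural cord length; adding drag back in would shift both numbers.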
10,666
|
<ASSISTANT_TASK:>
Python Code:
from myhdl import *
from myhdlpeek import Peeker
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sympy import *
init_printing()
import random
#https://github.com/jrjohansson/version_information
%load_ext version_information
%version_information myhdl, myhdlpeek, numpy, pandas, matplotlib, sympy, random
#helper functions to read in the .v and .vhd generated files into python
def VerilogTextReader(loc, printresult=True):
with open(f'{loc}.v', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***Verilog module from {loc}.v***\n\n', VerilogText)
return VerilogText
def VHDLTextReader(loc, printresult=True):
with open(f'{loc}.vhd', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***VHDL module from {loc}.vhd***\n\n', VerilogText)
return VerilogText
CountVal=17
BitSize=int(np.log2(CountVal))+1; BitSize
ModBV=modbv(0)[BitSize:]
IntBV=intbv(0)[BitSize:]
print(f"`ModBV` max is {ModBV.max}; min is {ModBV.min}")
print(f"`IntBV` max is {IntBV.max}; min is {IntBV.min}")
for _ in range(ModBV.max*2):
try:
ModBV+=1; IntBV+=1
print(f"`ModBV` value is {ModBV}; `IntBV` value is {IntBV}")
except ValueError:
ModBV+=1
print(f"`ModBV` value is {ModBV}; `IntBV` value is {IntBV} and INVALID")
ModBV=modbv(2**BitSize -1)[BitSize:]
IntBV=intbv(2**BitSize -1)[BitSize:]
print(f"`ModBV` max is {ModBV.max}; min is {ModBV.min}")
print(f"`IntBV` max is {IntBV.max}; min is {IntBV.min}")
for _ in range(ModBV.max*2):
try:
ModBV-=1; IntBV-=1
print(f"`ModBV` value is {ModBV}; `IntBV` value is {IntBV}")
except ValueError:
ModBV-=0
print(f"`ModBV` value is {ModBV}; `IntBV` value is {IntBV} and INVALID")
@block
def Up_Counter(count, Trig, clk, rst, CountVal, BitSize):
UpCounter
Input:
clk(bool): system clock feed
rst(bool): clock reset signal
Output:
count (bit vector): current count value; count
Trig(bool)
Parameter (Python Only):
CountVal(int): value to count to
BitSize (int): bit vector size, log_2(CountVal)+1
#internals
count_i=Signal(modbv(0)[BitSize:])
Trig_i=Signal(bool(0))
@always(clk.posedge, rst.negedge)
def logic():
if rst:
count_i.next=0
Trig_i.next=0
elif count_i%CountVal==0 and count_i!=0:
Trig_i.next=1
count_i.next=0
else:
count_i.next=count_i+1
@always_comb
def OuputBuffer():
count.next=count_i
Trig.next=Trig_i
return instances()
Peeker.clear()
clk=Signal(bool(0)); Peeker(clk, 'clk')
rst=Signal(bool(0)); Peeker(rst, 'rst')
Trig=Signal(bool(0)); Peeker(Trig, 'Trig')
count=Signal(modbv(0)[BitSize:]); Peeker(count, 'count')
DUT=Up_Counter(count, Trig, clk, rst, CountVal, BitSize)
def Up_CounterTB():
myHDL only Testbench for `Up_Counter` module
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
i=0
while True:
if i==int(CountVal*1.5):
rst.next=1
elif i==int(CountVal*1.5)+1:
rst.next=0
if i==int(CountVal*2.5):
raise StopSimulation()
i+=1
yield clk.posedge
return instances()
sim=Simulation(DUT, Up_CounterTB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
Up_CounterData=Peeker.to_dataframe()
Up_CounterData=Up_CounterData[Up_CounterData['clk']==1]
Up_CounterData.drop('clk', axis=1, inplace=True)
Up_CounterData.reset_index(drop=True, inplace=True)
Up_CounterData
DUT.convert()
VerilogTextReader('Up_Counter');
ResetAt=int(CountVal*1.5)+1
StopAt=int(CountVal*2.5)
@block
def Up_CounterTBV():
myHDL -> Verilog Testbench for `Up_Counter` module
clk=Signal(bool(0))
rst=Signal(bool(0))
Trig=Signal(bool(0))
count=Signal(modbv(0)[BitSize:])
@always_comb
def print_data():
print(clk, rst, Trig, count)
DUT=Up_Counter(count, Trig, clk, rst, CountVal, BitSize)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
i=0
while True:
if i==ResetAt:
rst.next=1
elif i==(ResetAt+1):
rst.next=0
else:
pass
if i==StopAt:
raise StopSimulation()
i+=1
yield clk.posedge
return instances()
TB=Up_CounterTBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('Up_CounterTBV');
@block
def Down_Counter(count, Trig, clk, rst, StartVal, BitSize):
DownCounter
Input:
clk(bool): system clock feed
rst(bool): clock reset signal
Output:
count (bit vector): current count value; count
Trig(bool)
Parameter (Python Only):
StartVal(int): value to count from
BitSize (int): bit vector size, log_2(CountVal)+1
#internal counter value
count_i=Signal(modbv(StartVal)[BitSize:])
@always(clk.posedge, rst.negedge)
def logic():
if rst:
count_i.next=StartVal
Trig.next=0
elif count_i==0:
Trig.next=1
count_i.next=StartVal
else:
count_i.next=count_i-1
@always_comb
def OuputBuffer():
count.next=count_i
return instances()
Peeker.clear()
clk=Signal(bool(0)); Peeker(clk, 'clk')
rst=Signal(bool(0)); Peeker(rst, 'rst')
Trig=Signal(bool(0)); Peeker(Trig, 'Trig')
count=Signal(modbv(0)[BitSize:]); Peeker(count, 'count')
DUT=Down_Counter(count, Trig, clk, rst, CountVal, BitSize)
def Down_CounterTB():
myHDL only Testbench for `Down_Counter` module
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
i=0
while True:
if i==int(CountVal*1.5):
rst.next=1
elif i==int(CountVal*1.5)+1:
rst.next=0
if i==int(CountVal*2.5):
raise StopSimulation()
i+=1
yield clk.posedge
return instances()
sim=Simulation(DUT, Down_CounterTB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
Down_CounterData=Peeker.to_dataframe()
Down_CounterData=Down_CounterData[Down_CounterData['clk']==1]
Down_CounterData.drop('clk', axis=1, inplace=True)
Down_CounterData.reset_index(drop=True, inplace=True)
Down_CounterData
DUT.convert()
VerilogTextReader('Down_Counter');
ResetAt=int(CountVal*1.5)
StopAt=int(CountVal*2.5)
@block
def Down_CounterTBV():
myHDL -> Verilog Testbench for `Down_Counter` module
clk=Signal(bool(0))
rst=Signal(bool(0))
Trig=Signal(bool(0))
count=Signal(modbv(0)[BitSize:])
@always_comb
def print_data():
print(clk, rst, Trig, count)
DUT=Down_Counter(count, Trig, clk, rst, CountVal, BitSize)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
i=0
while True:
if i==ResetAt:
rst.next=1
elif i==(ResetAt+1):
rst.next=0
else:
pass
if i==StopAt:
raise StopSimulation()
i+=1
yield clk.posedge
return instances()
TB=Down_CounterTBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('Down_CounterTBV');
#Create the Direction States for UpDown Counter
DirStates=enum('Up', 'Down')
print(f"`Up` state representation is {bin(DirStates.Up)}")
print(f"`Down` state representation is {bin(DirStates.Down)}")
@block
def UpDown_Counter(Dir, count, Trig, clk, rst,
CountVal, StartVal, BitSize):
UpDownCounter, hybrid of a simple Up Counter and
a simple Down Counter using `Dir` to control Up/Down
count Direction
Input:
Dir(state): count direction (DirStates.Up or DirStates.Down)
clk(bool): system clock feed
rst(bool): clock reset signal
Output:
count (bit vector): current count value; count
Trig(bool)
Parameter (Python Only):
CountVal(int): Highest Value for counter
StartVal(int): starting value for internal counter
BitSize (int): bit vector size, log_2(CountVal)+1
#internal counter value
count_i=Signal(modbv(StartVal)[BitSize:])
@always(clk.posedge, rst.negedge)
def logic():
if rst:
count_i.next=StartVal
Trig.next=0
#counter containment
elif count_i//CountVal==1 and rst==0:
count_i.next=StartVal
#up behavior
elif Dir==DirStates.Up:
count_i.next=count_i+1
#simple Trigger at ends
if count_i%CountVal==0:
Trig.next=1
#down behavior
elif Dir==DirStates.Down:
count_i.next=count_i-1
#simple Trigger at ends
if count_i%CountVal==0:
Trig.next=1
@always_comb
def OuputBuffer():
count.next=count_i
return instances()
Peeker.clear()
clk=Signal(bool(0)); Peeker(clk, 'clk')
rst=Signal(bool(0)); Peeker(rst, 'rst')
Trig=Signal(bool(0)); Peeker(Trig, 'Trig')
count=Signal(modbv(0)[BitSize:]); Peeker(count, 'count')
Dir=Signal(DirStates.Up); Peeker(Dir, 'Dir')
DUT=UpDown_Counter(Dir, count, Trig, clk, rst,
CountVal, StartVal=CountVal//2, BitSize=BitSize)
def UpDown_CounterTB():
myHDL only Testbench for `UpDown_Counter` module
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
i=0
while True:
if i==int(CountVal*1.5):
Dir.next=DirStates.Down
elif i==int(CountVal*2.5):
rst.next=1
elif i==int(CountVal*2.5)+1:
rst.next=0
if i==int(CountVal*3.5):
raise StopSimulation()
i+=1
yield clk.posedge
return instances()
sim=Simulation(DUT, UpDown_CounterTB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
UpDown_CounterData=Peeker.to_dataframe()
UpDown_CounterData=UpDown_CounterData[UpDown_CounterData['clk']==1]
UpDown_CounterData.drop('clk', axis=1, inplace=True)
UpDown_CounterData.reset_index(drop=True, inplace=True)
UpDown_CounterData
DUT.convert()
VerilogTextReader('UpDown_Counter');
StateChangeAt=int(CountVal*1.5)
ResetAt=int(CountVal*2.5)
StopAt=int(CountVal*3.5)
@block
def UpDown_CounterTBV():
myHDL -> Verilog Testbench for `Down_Counter` module
clk=Signal(bool(0))
rst=Signal(bool(0))
Trig=Signal(bool(0))
count=Signal(modbv(0)[BitSize:])
Dir=Signal(DirStates.Up)
DUT=UpDown_Counter(Dir, count, Trig, clk, rst,
CountVal, StartVal=CountVal//2, BitSize=BitSize)
@always_comb
def print_data():
print(clk, rst, Trig, count)
DUT=Down_Counter(count, Trig, clk, rst, CountVal, BitSize)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
i=0
while True:
if i==StateChangeAt:
Dir.next=DirStates.Down
elif i==ResetAt:
rst.next=1
elif i==ResetAt+1:
rst.next=0
else:
pass
if i==StopAt:
raise StopSimulation()
i+=1
yield clk.posedge
return instances()
TB=UpDown_CounterTBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('UpDown_CounterTBV');
@block
def ClockDivider(Divisor, clkOut, count, clk,rst):
Simple Clock Divider based on the Digilent Clock Divider
https://learn.digilentinc.com/Documents/262
Input:
Divisor(32 bit): the clock frequency divide-by value
clk(bool): The input clock
rst(bool): clockDivider Reset
Output:
clkOut(bool): the divided clock ouput
count(32bit): the value of the internal counter
count_i=Signal(modbv(0)[32:])
@always(clk.posedge, rst.posedge)
def counter():
if rst:
count_i.next=0
elif count_i==(Divisor-1):
count_i.next=0
else:
count_i.next=count_i+1
clkOut_i=Signal(bool(0))
@always(clk.posedge, rst.posedge)
def clockTick():
if rst:
clkOut_i.next=0
elif count_i==(Divisor-1):
clkOut_i.next=not clkOut_i
else:
clkOut_i.next=clkOut_i
@always_comb
def OuputBuffer():
count.next=count_i
clkOut.next=clkOut_i
return instances()
Peeker.clear()
clk=Signal(bool(0)); Peeker(clk, 'clk')
Divisor=Signal(intbv(0)[32:]); Peeker(Divisor, 'Divisor')
count=Signal(intbv(0)[32:]); Peeker(count, 'count')
clkOut=Signal(bool(0)); Peeker(clkOut, 'clkOut')
rst=Signal(bool(0)); Peeker(rst, 'rst')
DUT=ClockDivider(Divisor, clkOut, count, clk,rst)
def ClockDividerTB():
myHDL only Testbench for `ClockDivider` module
@always(delay(1))
def ClkGen():
clk.next=not clk
@instance
def stimules():
for i in range(2,6+1):
Divisor.next=i
rst.next=0
#run clock time
for _ in range(4*2**(i-1)):
yield clk.posedge
for j in range(1):
if j==0:
rst.next=1
yield clk.posedge
raise StopSimulation()
return instances()
sim=Simulation(DUT, ClockDividerTB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
ClockDividerData=Peeker.to_dataframe()
ClockDividerData
ClockDividerData_2=ClockDividerData[ClockDividerData['Divisor']==2]
ClockDividerData_2.reset_index(drop=True, inplace=True)
ClockDividerData_2.plot(y=['clk', 'clkOut']);
ClockDividerData_3=ClockDividerData[ClockDividerData['Divisor']==3]
ClockDividerData_3.reset_index(drop=True, inplace=True)
ClockDividerData_3.plot(y=['clk', 'clkOut']);
ClockDividerData_4=ClockDividerData[ClockDividerData['Divisor']==4]
ClockDividerData_4.reset_index(drop=True, inplace=True)
ClockDividerData_4.plot(y=['clk', 'clkOut']);
ClockDividerData_5=ClockDividerData[ClockDividerData['Divisor']==5]
ClockDividerData_5.reset_index(drop=True, inplace=True)
ClockDividerData_5.plot(y=['clk', 'clkOut']);
ClockDividerData_6=ClockDividerData[ClockDividerData['Divisor']==6]
ClockDividerData_6.reset_index(drop=True, inplace=True)
ClockDividerData_6.plot(y=['clk', 'clkOut']);
DUT.convert()
VerilogTextReader('ClockDivider');
@block
def ClockDividerTBV():
myHDL -> Verilog Testbench for `ClockDivider` module
clk=Signal(bool(0));
Divisor=Signal(intbv(0)[32:])
count=Signal(intbv(0)[32:])
clkOut=Signal(bool(0))
rst=Signal(bool(0))
@always_comb
def print_data():
print(clk, Divisor, count, clkOut, rst)
DUT=ClockDivider(Divisor, clkOut, count, clk,rst)
@instance
def clk_signal():
while True:
clk.next = not clk
yield delay(1)
@instance
def stimules():
for i in range(2,6+1):
Divisor.next=i
rst.next=0
#run clock time
for _ in range(4*2**(i-1)):
yield clk.posedge
for j in range(1):
if j==0:
rst.next=1
else:
pass
yield clk.posedge
raise StopSimulation()
return instances()
TB=ClockDividerTBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('ClockDividerTBV');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Counter Specs
Step2: myHDL modules bitvector type behavior
Step3: down counting behavior
Step5: Up-Counter
Step7: myHDL testing
Step8: Verilog Code
Step10: \begin{figure}
Step12: Down Counter
Step14: myHDL Testing
Step15: Verilog Code
Step17: \begin{figure}
Step19: Up/Down Counter
Step21: myHDL Testing
Step22: Verilog Code
Step24: \begin{figure}
Step26: Application
Step28: myHDL Testing
Step30: Verilog Code
|
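The modbv wraparound that the counters above depend on is just modular arithmetic. A minimal plain-Python sketch (not the real myHDL class) makes the semantics explicit:

```python
class ModBV:
    """Minimal sketch of myHDL's modbv wraparound semantics (not the real class)."""
    def __init__(self, value, nbits):
        self.nbits = nbits
        self.value = value % (1 << nbits)      # wrap into [0, 2**nbits)
    def __iadd__(self, other):
        self.value = (self.value + int(other)) % (1 << self.nbits)
        return self
    def __isub__(self, other):
        self.value = (self.value - int(other)) % (1 << self.nbits)
        return self

c = ModBV(0, 5)            # 5-bit counter, as used for CountVal = 17
for _ in range(35):
    c += 1                 # counts past 31 and wraps back to 0
wrapped = c.value          # 35 % 32 == 3
```

An intbv, by contrast, would raise ValueError at the first wrap — the behavior demonstrated in the try/except loops above.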
10,667
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
g = 9.81 # m/s^2
l = 0.5 # length of pendulum, in meters
tmax = 50. # seconds
t = np.linspace(0, tmax, int(100*tmax))
def derivs(y, t, a, b, omega0):
Compute the derivatives of the damped, driven pendulum.
Parameters
----------
y : ndarray
The solution vector at the current time t[i]: [theta[i],omega[i]].
t : float
The current time t[i].
a, b, omega0: float
The parameters in the differential equation.
Returns
-------
dy : ndarray
The vector of derviatives at t[i]: [dtheta[i],domega[i]].
dy0 = y[1]
dy1 = -g/l * np.sin(y[0]) - a*dy0 - b*np.sin(omega0*t)
return(dy0,dy1)
derivs(np.array([np.pi,1.0]), 0, 1.0, 1.0, 1.0)
assert np.allclose(derivs(np.array([np.pi,1.0]), 0, 1.0, 1.0, 1.0), [1.,-1.])
def energy(y):
Compute the energy for the state array y.
The state array y can have two forms:
1. It could be an ndim=1 array of np.array([theta,omega]) at a single time.
2. It could be an ndim=2 array where each row is the [theta,omega] at single
time.
Parameters
----------
y : ndarray, list, tuple
A solution vector
Returns
-------
E/m : float (ndim=1) or ndarray (ndim=2)
The energy per mass.
if np.ndim(y) == 1:
y = np.array([y])
Em = g * l * (1 - np.cos(y[:, 0])) + 0.5 * l**2 * y[:, 1]**2
return Em
assert np.allclose(energy(np.array([np.pi,0])),g)
assert np.allclose(energy(np.ones((10,2))), np.ones(10)*energy(np.array([1,1])))
a=0
b=0
omega0=0
ans = odeint(derivs, np.array([np.pi,0]), t, args=(a,b,omega0), atol=10**(-5), rtol=10**(-4))
plt.plot(t, energy(ans))
plt.title("Energy of Simple Pendulum at ( $\pi$, 0 )")
plt.xlabel("Time")
plt.ylabel("Energy")
ax = plt.gca()
ax.set_axis_bgcolor("#fcfcfc")
plt.plot(t, np.transpose(ans)[0], label="Omega")
plt.plot(t, np.transpose(ans)[1], label="Theta")
plt.title("Simple Pendulum")
plt.xlabel("Time")
ax = plt.gca()
ax.set_axis_bgcolor("#fcfcfc")
plt.legend(loc ='lower right')
assert True # leave this to grade the two plots and their tuning of atol, rtol.
def plot_pendulum(a=0.0, b=0.0, omega0=0.0):
Integrate the damped, driven pendulum and make a phase plot of the solution.
ans = np.transpose(odeint(derivs, np.array([-np.pi + 0.1,0]), t, args=(a,b,omega0)))
plt.plot(ans[0], ans[1])
plt.title("Damped Driven Pendulum")
plt.xlabel("Omega")
plt.ylim(-10,10)
plt.ylabel("Theta")
plt.grid(False)
ax = plt.gca()
ax.set_axis_bgcolor("white")
plt.xticks(np.linspace(-2*np.pi, 2*np.pi, 5), [r'$-2\pi$', r'$-\pi$', r'$0$', r'$\pi$', r'$2\pi$'])
plot_pendulum(0.5, 0.0, 0.0)
interact(plot_pendulum, a=(0.0,10.0,0.1), b=(0.0,10.0, 0.1), omega0=(0,10.0,0.1));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Damped, driven nonlinear pendulum
Step4: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven pendulum. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$.
Step5: Simple pendulum
Step7: Damped pendulum
Step8: Here is an example of the output of your plot_pendulum function that should show a decaying spiral.
Step9: Use interact to explore the plot_pendulum function with
|
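The point of tuning atol and rtol above is energy conservation. A small sketch (undamped, undriven case, illustrative initial angle) shows how to measure the relative energy drift of an odeint solution:

```python
import numpy as np
from scipy.integrate import odeint

g, l = 9.81, 0.5                       # same constants as above

def derivs(y, t):
    theta, omega = y                   # undamped, undriven pendulum
    return [omega, -g / l * np.sin(theta)]

def energy(theta, omega):
    # Energy per unit mass: potential + rotational kinetic
    return g * l * (1 - np.cos(theta)) + 0.5 * l**2 * omega**2

t = np.linspace(0, 50, 5000)
sol = odeint(derivs, [np.pi / 4, 0.0], t, atol=1e-10, rtol=1e-10)
E = energy(sol[:, 0], sol[:, 1])
drift = np.max(np.abs(E - E[0])) / E[0]   # relative drift; tiny for tight tolerances
```

Loosening the tolerances (say atol=rtol=1e-3) makes the drift visibly larger, which is exactly what the tolerance tuning in the exercise is probing.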
10,668
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
import random
values = [ [random.random() for i in range(0,20)] for _ in range(0,100000) ]
col = [ "col%d" % i for i in range(0,20) ]
import pandas
df = pandas.DataFrame( values, columns = col )
df.to_csv("df_text.txt", sep="\t")
df.to_pickle("df_text.bin")
%timeit pandas.read_csv("df_text.txt", sep="\t")
%timeit pandas.read_pickle("df_text.bin")
obj = dict(a=[50, "r"], gg=(5, 't'))
import jsonpickle
frozen = jsonpickle.encode(obj)
frozen
frozen = jsonpickle.encode(df)
len(frozen), type(frozen), frozen[:55]
def to_json(obj, filename):
frozen = jsonpickle.encode(obj)
with open(filename, "w", encoding="utf-8") as f:
f.write(frozen)
def read_json(filename):
with open(filename, "r", encoding="utf-8") as f:
enc = f.read()
return jsonpickle.decode(enc)
to_json(df, "df_text.json")
try:
df = read_json("df_text.json")
except Exception as e:
print(e)
import jsonpickle.ext.numpy as jsonpickle_numpy
jsonpickle_numpy.register_handlers()
from sklearn import datasets
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features.
y = iris.target
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
clf.fit(X,y)
clf.predict_proba([[0.1, 0.2]])
to_json(clf, "logreg.json")
try:
clf2 = read_json("logreg.json")
except AttributeError as e:
# For an unknown reason, probably a bug, this code does not work.
print(e)
class EncapsulateLogisticRegression:
def __init__(self, obj):
self.obj = obj
def __getstate__(self):
return {k: v for k, v in sorted(self.obj.__getstate__().items())}
def __setstate__(self, data):
self.obj = LogisticRegression()
self.obj.__setstate__(data)
enc = EncapsulateLogisticRegression(clf)
to_json(enc, "logreg.json")
enc2 = read_json("logreg.json")
clf2 = enc2.obj
clf2.predict_proba([[0.1, 0.2]])
with open("logreg.json", "r") as f:
content = f.read()
content
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 1
Step2: Step 2
Step3: Step 3
Step4: Exercise 2
Step5: This module is equivalent to the json module for the standard Python types (lists, dictionaries, numbers, ...). But the json module does not work on dataframes.
Step6: The to_json method would also give a satisfactory result, but it cannot be applied to a machine learning model produced by scikit-learn.
Step7: Clearly, this does not work on DataFrames. We will need to take inspiration from the numpyson module.
Step8: So we try another way. If the previous code does not work but the following code does, it is a bug in jsonpickle.
|
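The EncapsulateLogisticRegression trick above — exposing a sorted state dict through __getstate__/__setstate__ — also works with the standard json module whenever the state is JSON-serializable. A minimal sketch with a hypothetical Point class (names are illustrative, not from jsonpickle):

```python
import json

class Point:
    def __init__(self, x=0, y=0):
        self.x, self.y = x, y
    def __getstate__(self):
        return {"x": self.x, "y": self.y}
    def __setstate__(self, state):
        self.x, self.y = state["x"], state["y"]

def to_json_str(obj):
    # Sort keys for a deterministic encoding, as in the wrapper above
    return json.dumps(obj.__getstate__(), sort_keys=True)

def from_json_str(cls, text):
    obj = cls.__new__(cls)          # bypass __init__, mirroring the pickle protocol
    obj.__setstate__(json.loads(text))
    return obj

p = Point(1.5, -2.0)
q = from_json_str(Point, to_json_str(p))   # round-trip through plain JSON
```

This only works for state made of JSON-native types; anything else (numpy arrays, estimators) is exactly why jsonpickle and its numpy handlers exist.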
10,669
|
<ASSISTANT_TASK:>
Python Code:
import suspect
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
data = suspect.io.load_rda("/home/jovyan/suspect/tests/test_data/siemens/SVS_30.rda")
import scipy.signal
window = scipy.signal.tukey(data.np * 2)[data.np:]
data = window * data
plt.plot(data.spectrum().real)
components = suspect.processing.water_suppression.hsvd(data, 20)
print(components[0])
hsvd_fid = suspect.processing.water_suppression.construct_fid(components, data.time_axis())
hsvd_fid = data.inherit(hsvd_fid)
# plot two axes, one of the whole spectrum and one focussing on the metabolite region
f, (ax1, ax2) = plt.subplots(2)
ax2.set_xlim([550, 850])
ax2.set_ylim([0, 2e5])
for ax in (ax1, ax2):
ax.plot(data.spectrum().real)
ax.plot(hsvd_fid.spectrum().real)
# plot two axes, one of the whole dataset and one of the metabolite region
f, (ax1, ax2) = plt.subplots(2)
ax2.set_xlim([550, 850])
ax2.set_ylim([-1e5, 5e5])
for component in components:
component_fid = suspect.processing.water_suppression.construct_fid([component], data.time_axis())
component_fid = data.inherit(component_fid)
ax1.plot(component_fid.spectrum().real)
ax2.plot(component_fid.spectrum().real)
water_components = [component for component in components if component["frequency"] < 70 or component["fwhm"] > 100]
water_fid = suspect.processing.water_suppression.construct_fid(water_components, data.time_axis())
water_fid = data.inherit(water_fid)
dry_fid = data - water_fid
# plot two axes, one of the whole spectrum and one focussing on the metabolite region
f, (ax1, ax2) = plt.subplots(2)
ax2.set_xlim([550, 850])
ax2.set_ylim([-1e5, 2e5])
for ax in (ax1, ax2):
ax.plot(data.spectrum().real)
ax.plot(water_fid.spectrum().real)
ax.plot(dry_fid.spectrum().real)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For this tutorial, we will be using the SVS_30.rda data included in the Suspect test data collection, so that we don't have to worry about channel combination or frequency correction here. However, we will repeat the apodisation step described in Tutorial 1.
Step2: If we plot the raw spectrum we immediately see that the water peak completely dominates all the other peaks in the spectrum
Step3: HSVD works by approximating the FID with a set of exponentially decaying components
Step4: The second argument to the function is the number of components to generate. This will depend on both the number of peaks in the spectrum and how Lorentzian they are. Too few components will not be able to correctly describe the signal but too many can lead to over-fitting. Around 20 is typically a good number for most cases, but do experiment with your own data to understand better exactly what is going on.
Step5: This components list can be turned back into an FID using the construct_fid() function, which takes a list of components to be used and a reference time axis. In this example we also set the resulting FID to inherit() all the MRS properties from the original data object.
Step6: Overall we see that the hsvd_fid is a very good approximation to the original data signal, although some of the smaller peaks such as the Glx region are not fitted. To get a better idea of what is going on, we can reconstruct each component individually and plot the whole set together.
Step7: What we find is that the major metabolite peaks each have one component associated with them, while the water peak has several. This is because it is not a perfect Lorentzian - to adequately describe the peak shape requires a series of progressively smaller correction terms to modify the main peak. Typically only the water peak gets multiple components as the others are too small, and the total number of components is limited.
Step8: In this case we have selected all the components with frequencies below 80Hz. The best value for this cut-off frequency will depend strongly on your data, and of course on the field strength of the magnet, but 80Hz is a reasonable starting point for most people at 3T. For our data we don't have any peaks downfield of water so we don't need a negative frequency cut-off.
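The component model that HSVD fits can be sketched without suspect itself: each component is a decaying complex sinusoid, and water removal is just rebuilding the FID from the components you keep. Everything below (`build_fid`, the component values, the 70 Hz cut-off) is illustrative and is not the library's API:

```python
import numpy as np

def build_fid(components, t):
    # toy stand-in for construct_fid: sum of exponentially decaying sinusoids
    fid = np.zeros_like(t, dtype=complex)
    for c in components:
        decay = np.pi * c["fwhm"]  # Lorentzian FWHM (Hz) -> decay rate
        fid += (c["amplitude"]
                * np.exp(2j * np.pi * c["frequency"] * t + 1j * c["phase"])
                * np.exp(-decay * t))
    return fid

t = np.arange(0, 1.0, 1e-3)  # 1 s sampled at 1 kHz, made-up timing
components = [
    {"amplitude": 100.0, "frequency": 0.0, "fwhm": 8.0, "phase": 0.0},   # "water"
    {"amplitude": 1.0, "frequency": 120.0, "fwhm": 4.0, "phase": 0.0},   # "metabolite"
]
data = build_fid(components, t)

# keep only the low-frequency components -> reconstructed water signal
water = build_fid([c for c in components if abs(c["frequency"]) < 70], t)
dry = data - water  # water-suppressed FID
```

Selecting components by frequency (and optionally linewidth) is the same idea as the water_components filter in the code above.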
|
10,670
|
<ASSISTANT_TASK:>
Python Code:
from google.colab import auth
auth.authenticate_user()
!pip install -q amplpy ampltools gspread --upgrade
MODULES=['ampl', 'coin']
from ampltools import cloud_platform_name, ampl_notebook
from amplpy import AMPL, register_magics
if cloud_platform_name() is None:
ampl = AMPL() # Use local installation of AMPL
else:
ampl = ampl_notebook(modules=MODULES) # Install AMPL and use it
register_magics(ampl_object=ampl) # Evaluate %%ampl_eval cells with ampl.eval()
%%ampl_eval
option version;
%%ampl_eval
set NUTR;
set FOOD;
param cost {FOOD} > 0;
param f_min {FOOD} >= 0;
param f_max {j in FOOD} >= f_min[j];
param n_min {NUTR} >= 0;
param n_max {i in NUTR} >= n_min[i];
param amt {NUTR,FOOD} >= 0;
var Buy {j in FOOD} >= f_min[j], <= f_max[j];
minimize Total_Cost: sum {j in FOOD} cost[j] * Buy[j];
subject to Diet {i in NUTR}:
n_min[i] <= sum {j in FOOD} amt[i,j] * Buy[j] <= n_max[i];
import gspread
from google.auth import default
creds, _ = default()
gclient = gspread.authorize(creds)
def open_spreedsheet(name):
if name.startswith('https://'):
return gclient.open_by_url(name)
return gclient.open(name)
# spreedsheet = open_spreedsheet('DietModelSheet')
spreedsheet = open_spreedsheet('https://docs.google.com/spreadsheets/d/1sTyJdgnMCrmuZDtUjs-cOpRLoKgByM8U-lHieNBNaRY/edit?usp=sharing')
def get_worksheet_values(name):
return spreedsheet.worksheet(name).get_values(value_render_option='UNFORMATTED_VALUE')
import pandas as pd
def table_to_dataframe(rows):
return pd.DataFrame(rows[1:], columns=rows[0]).set_index(rows[0][0])
def matrix_to_dataframe(rows, tr=False):
col_labels = rows[0][1:]
row_labels = [row[0] for row in rows[1:]]
def label(pair):
return pair if not tr else (pair[1], pair[0])
data = {
label((rlabel, clabel)): rows[i+1][j+1]
for i, rlabel in enumerate(row_labels)
for j, clabel in enumerate(col_labels)}
df = pd.Series(data).reset_index()
df.columns = ['index1', 'index2', rows[0][0]]
return df.set_index(['index1', 'index2'])
rows = get_worksheet_values('FOOD')
df = table_to_dataframe(rows)
ampl.set_data(df, set_name='FOOD') # send the data to AMPL
df
rows = get_worksheet_values('NUTR')
df = table_to_dataframe(rows)
ampl.set_data(df, set_name='NUTR') # Send the data to AMPL
df
rows = get_worksheet_values('amt')
df = matrix_to_dataframe(rows, tr=True)
ampl.set_data(df) # Send the data to AMPL
df
%%ampl_eval
option solver cbc;
solve;
display Buy;
ampl.var['Buy'].get_values().to_pandas()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup
Step2: Google Colab & Kaggle integration
Step3: Use %%ampl_eval to evaluate AMPL commands
Step4: Define the model
Step5: Instantiate gspread client
Step6: Open spreadsheet using name or URL
Step7: Define auxiliary functions to convert data from worksheets into dataframes
Step8: Load data from the first worksheet
Step9: Load the data from the second worksheet
Step10: Load the data from the third worksheet
Step11: Use %%ampl_eval to solve the model with cbc
Step12: Retrieve the solution as a pandas dataframe
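The model's Diet constraint and Total_Cost objective can be sanity-checked in plain Python; the food names and numbers below are made up for illustration and are not the spreadsheet's data:

```python
# made-up diet data: two foods, two nutrients
nutr = ["A", "C"]
food = ["BEEF", "CHK"]
cost = {"BEEF": 3.19, "CHK": 2.59}
amt = {("A", "BEEF"): 60, ("A", "CHK"): 8,
       ("C", "BEEF"): 20, ("C", "CHK"): 0}
n_min = {"A": 700, "C": 0}
n_max = {"A": 20000, "C": 20000}

def feasible(buy):
    # the Diet constraint: n_min[i] <= sum_j amt[i,j] * Buy[j] <= n_max[i]
    return all(n_min[i] <= sum(amt[i, j] * buy[j] for j in food) <= n_max[i]
               for i in nutr)

def total_cost(buy):
    # the Total_Cost objective
    return sum(cost[j] * buy[j] for j in food)

buy = {"BEEF": 12, "CHK": 0}  # 12 * 60 = 720 units of nutrient A
```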
|
10,671
|
<ASSISTANT_TASK:>
Python Code:
# Import libraries.
import csv
import httplib2
from apiclient.discovery import build
import urllib
import json
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
# This API key is provided by google as described in the tutorial
API_KEY = 'AIzaSyCuKKwcT5mUYZP_Q-heqCPjvdWyacRuX00'
# This is the table id for the fusion table
TABLE_ID = '1ymz3EtGdi4qKGMl5AxEFXtTlgk3tKi8iCpjTzvM'
# open the data stored in a file called "data.json"
try:
fp = open("data.json")
response = json.load(fp)
# but if that file does not exist, download the data from fusiontables
except IOError:
service = build('fusiontables', 'v1', developerKey=API_KEY)
query = "SELECT * FROM " + TABLE_ID + " WHERE AnimalType = 'DOG'"
response = service.query().sql(sql=query).execute()
fp = open("data.json", "w")
json.dump(response, fp)
# Check how many rows we have.
print len(response['rows'])
data_df = pd.DataFrame(response[u'rows'], columns = response[u'columns'])
# Show a few rows.
data_df.head()
lat_lon_df = data_df[['Latitude', 'Longitude']]
lat_lon_df.head()
lat_lon_df = lat_lon_df.apply(lambda x: pd.to_numeric(x, errors='coerce')).dropna()  # 'coerce' turns bad values into NaN so dropna() can remove them
sns.pairplot(lat_lon_df)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Data from a Fusion Table
Step2: Load JSON Data into a Dataframe
Step3: Select and Clean Up Data
Step4: Now we can quickly convert both columns to numeric and remove values that could not be converted (NaN). It is all done on one line!
Step5: Visualize Data
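The conversion step depends on `errors='coerce'`: it maps unparseable strings to NaN so that `dropna()` can remove them, whereas `errors='ignore'` would leave the column as strings and drop nothing. A small sketch with made-up coordinates:

```python
import pandas as pd

raw = pd.DataFrame({
    "Latitude": ["40.7", "not available", "34.05"],
    "Longitude": ["-74.0", "-118.24", ""],
})

# 'coerce' turns bad values into NaN; dropna() then removes those rows
clean = raw.apply(lambda col: pd.to_numeric(col, errors="coerce")).dropna()
```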
|
10,672
|
<ASSISTANT_TASK:>
Python Code:
df = unpickle_object("FINAL_DATAFRAME_PROJ_5.pkl")
df.head()
def linear_extrapolation(df, window):
pred_lst = []
true_lst = []
cnt = 0
all_rows = df.shape[0]
while cnt < window:
start = df.iloc[cnt:all_rows-window+cnt, :].index[0].date()
end = df.iloc[cnt:all_rows-window+cnt, :].index[-1].date()
predicting = df.iloc[all_rows-window+cnt, :].name.date()
print("---- Running model from {} to {} and predicting on {} ----".format(start,end,predicting))
training_df = df.iloc[cnt:all_rows-window+cnt, :]
testing_df = df.iloc[all_rows-window+cnt, :]
true_val = testing_df[-1]
first_row_value = training_df.iloc[0, :]['mkt_price']
first_row_date = training_df.iloc[0, :].name
last_row_value = training_df.iloc[-1, :]['mkt_price']
last_row_date = training_df.iloc[-1, :].name
alpha = (last_row_value-first_row_value)/90
prediction = last_row_value + alpha
pred_lst.append(prediction)
true_lst.append(true_val)
cnt += 1
return pred_lst, true_lst
pred_lst, true_lst = linear_extrapolation(df, 30)
r2_score(true_lst, pred_lst)
df = unpickle_object("FINAL_DATAFRAME_PROJ_5.pkl")
df.head()
df.corr()
plot_corr_matrix(df)
beta_values, pred, true = master(df, 30)
r2_score(true, pred)#blows our Prophet TS only model away!
plt.plot(pred)
plt.plot(true)
plt.legend(["Prediction", 'Actual'], loc='upper left')
plt.xlabel("Prediction #")
plt.ylabel("Price")
plt.title("Nested TS - Price Prediction");
fig, ax = plt.subplots()
ax.scatter(true, pred, edgecolors=(0, 0, 0))
ax.plot([min(true), max(true)], [min(true), max(true)], 'k--', lw=3)
ax.set_xlabel('Actual')
ax.set_ylabel('Predicted')
plotting_dict_1 = {"eth_price": [], "pos_sent": [], "neg_sent": [], "unique_addr": [], "gold_price": [], "tot_num_trans": [], "mempool_trans":[], "hash_rate": [], "avg_trans_per_block":[]}
for index, sub_list in enumerate(beta_values):
for tup in sub_list:
plotting_dict_1[tup[0]].append(tup[1])
plot_key(plotting_dict_1, "pos_sent")# here we say the effect of positive sentiment through time!
plt.title("Positive Sentiment Effect on BTC Price")
plt.ylabel("Beta Value")
plt.xlabel("Model #")
plt.tight_layout()
plot_key(plotting_dict_1, "gold_price")
plt.title("Gold Price Effect on BTC Price")
plt.ylabel("Beta Value")
plt.xlabel("Model #")
plt.tight_layout()
plot_key(plotting_dict_1, "avg_trans_per_block")
plt.title("Avg. Trans per Block Effect on BTC Price")
plt.ylabel("Beta Value")
plt.xlabel("Model #")
plt.tight_layout()
df_pct = df.copy(deep=True)
df_pct = df_pct.pct_change()
df_pct.rename(columns={"mkt_price": "percent_change"}, inplace=True)
df_pct = df_pct.iloc[1:, :] #first row is all NaN's
df_pct.head()
beta_values_p, pred_p, true_p = master(df_pct, 30)
r2_score(true_p, pred_p) # this is expected due to the range of values on the y-axis!
#very good!
plt.plot(pred_p)
plt.plot(true_p)
plt.legend(["Prediction", 'Actual'], loc='upper left')
plt.xlabel("Prediction #")
plt.ylabel("Price")
plt.title("Nested TS - % Change Prediction");
fig, ax = plt.subplots()
ax.scatter(true_p, pred_p, edgecolors=(0, 0, 0))
ax.plot([min(true), max(true)], [min(true), max(true)], 'k--', lw=3)
ax.set_xlabel('Actual')
ax.set_ylabel('Predicted');
df.set_index('date', inplace=True)
prices_to_be_multiplied = df.loc[pd.date_range(start="2017-01-23", end="2017-02-21"), "mkt_price"]
forecast_price_lst = []
for index, price in enumerate(prices_to_be_multiplied):
predicted_percent_change = 1+float(pred_p[index])
forecasted_price = (predicted_percent_change)*price
forecast_price_lst.append(forecasted_price)
ground_truth_prices = df.loc[pd.date_range(start="2017-01-24", end="2017-02-22"), "mkt_price"]
ground_truth_prices = list(ground_truth_prices)
r2_score(ground_truth_prices, forecast_price_lst)
plt.plot(forecast_price_lst)
plt.plot(ground_truth_prices)
plt.legend(["Prediction", 'Actual'], loc='upper left')
plt.xlabel("Prediction #")
plt.ylabel("Price")
plt.title("Nested TS - % Change Prediction");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Naïve Model Caveats
Step2: Nested TS VS. FB Prophet TS
Step3: Percent change model!
Step4: From the above, it seems that our model is not tuned well enough to anticipate the large dip shown above. This is due to a lack of training data. However, while our model might not be the best at predicting percent change, how does it fare when we turn the percent changes back into prices?
Step5: We have an $R^2$ of 0.87!
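The reconstruction step can be isolated: each day's price forecast is the previous observed price compounded by the predicted percent change. A sketch with toy numbers:

```python
def forecast_prices(last_prices, pct_preds):
    # price forecast = yesterday's observed price * (1 + predicted change)
    return [p * (1.0 + r) for p, r in zip(last_prices, pct_preds)]

observed = [100.0, 200.0]   # observed prices at t-1 for each prediction
pct_preds = [0.05, -0.10]   # model's predicted day-over-day changes
forecasts = forecast_prices(observed, pct_preds)
```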
|
10,673
|
<ASSISTANT_TASK:>
Python Code:
import word2vec
word2vec.word2phrase('/Users/drodriguez/Downloads/text8', '/Users/drodriguez/Downloads/text8-phrases', verbose=True)
word2vec.word2vec('/Users/drodriguez/Downloads/text8-phrases', '/Users/drodriguez/Downloads/text8.bin', size=100, verbose=True)
word2vec.word2clusters('/Users/drodriguez/Downloads/text8', '/Users/drodriguez/Downloads/text8-clusters.txt', 100, verbose=True)
import word2vec
model = word2vec.load('/Users/drodriguez/Downloads/text8.bin')
model.vocab
model.vectors.shape
model.vectors
model['dog'].shape
model['dog'][:10]
indexes, metrics = model.cosine('socks')
indexes, metrics
model.vocab[indexes]
model.generate_response(indexes, metrics)
model.generate_response(indexes, metrics).tolist()
indexes, metrics = model.cosine('los_angeles')
model.generate_response(indexes, metrics).tolist()
indexes, metrics = model.analogy(pos=['king', 'woman'], neg=['man'], n=10)
indexes, metrics
model.generate_response(indexes, metrics).tolist()
clusters = word2vec.load_clusters('/Users/drodriguez/Downloads/text8-clusters.txt')
clusters['dog']
clusters.get_words_on_cluster(90).shape
clusters.get_words_on_cluster(90)[:10]
model.clusters = clusters
indexes, metrics = model.analogy(pos=['paris', 'germany'], neg=['france'], n=10)
model.generate_response(indexes, metrics).tolist()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run word2phrase to group up similar words "Los Angeles" to "Los_Angeles"
Step2: This will create a text8-phrases that we can use as a better input for word2vec.
Step3: That generated a text8.bin file containing the word vectors in a binary format.
Step4: That created a text8-clusters.txt with the cluster for every word in the vocabulary
Step5: Import the word2vec binary file created above
Step6: We can take a look at the vocabulaty as a numpy array
Step7: Or take a look at the whole matrix
Step8: We can retrieve the vector of individual words
Step9: We can do simple queries to retrieve words similar to "socks" based on cosine similarity
Step10: This returned a tuple with 2 items
Step11: There is a helper function to create a combined response
Step12: Is easy to make that numpy array a pure python response
Step13: Phrases
Step14: Analogies
Step15: Clusters
Step16: We can get the cluster number for individual words
Step17: We can get all the words grouped in a specific cluster
Step18: We can add the clusters to the word2vec model and generate a response that includes the clusters
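The similarity queries above rank words by cosine similarity between their vectors; a self-contained sketch with made-up 3-d embeddings (not the trained model's vectors):

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# toy "embeddings", chosen only to show the ranking mechanics
vectors = {
    "socks": np.array([1.0, 0.2, 0.0]),
    "shoes": np.array([0.9, 0.3, 0.1]),
    "quantum": np.array([0.0, 0.1, 1.0]),
}

def most_similar(word, n=2):
    query = vectors[word]
    scores = {w: cosine(query, v) for w, v in vectors.items() if w != word}
    return sorted(scores, key=scores.get, reverse=True)[:n]
```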
|
10,674
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
from __future__ import division
import warnings
warnings.filterwarnings('ignore')
warnings.simplefilter('ignore')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import os
import sys
# insert at position 0 so only the GitHub checkout is used, avoiding version mismatches with a pip-installed abupy
sys.path.insert(0, os.path.abspath('../'))
import abupy
# enable the sandbox data so the data environment matches the book
abupy.env.enable_example_env_ipython()
from abupy import AbuFactorBuyBreak, AbuFactorSellBreak, AbuPositionBase
from abupy import AbuFactorAtrNStop, AbuFactorPreAtrNStop, AbuFactorCloseAtrNStop
from abupy import ABuPickTimeExecute, AbuBenchmark, AbuCapital
# buy_factors: two factors, a 60-day upward breakout and a 42-day upward breakout
buy_factors = [{'xd': 60, 'class': AbuFactorBuyBreak},
{'xd': 42, 'class': AbuFactorBuyBreak}]
# four sell factors take effect in parallel at the same time
sell_factors = [
{
'xd': 120,
'class': AbuFactorSellBreak
},
{
'stop_loss_n': 0.5,
'stop_win_n': 3.0,
'class': AbuFactorAtrNStop
},
{
'class': AbuFactorPreAtrNStop,
'pre_atr_n': 1.0
},
{
'class': AbuFactorCloseAtrNStop,
'close_atr_n': 1.5
}]
benchmark = AbuBenchmark()
capital = AbuCapital(1000000, benchmark)
# assume choice_symbols is the output of our stock-selection module,
choice_symbols = ['usTSLA', 'usNOAH', 'usSFUN', 'usBIDU', 'usAAPL',
'usGOOG', 'usWUBA', 'usVIPS']
%%time
capital = AbuCapital(1000000, benchmark)
orders_pd, action_pd, all_fit_symbols_cnt = ABuPickTimeExecute.do_symbols_with_same_factors(choice_symbols,
benchmark,
buy_factors,
sell_factors,
capital,
show=False)
orders_pd[:10]
action_pd[:10]
from abupy import AbuMetricsBase
metrics = AbuMetricsBase(orders_pd, action_pd, capital, benchmark)
metrics.fit_metrics()
metrics.plot_returns_cmp(only_show_returns=True)
class AbuKellyPosition(AbuPositionBase):
    """Example Kelly position-sizing class."""

    def fit_position(self, factor_object):
        """Compute how many units (shares, lots, tons, contracts) to buy.

        The factor_object (an ABuFactorBuyBases subclass instance) is expected
        to provide the win rate, expected gain and expected loss measured from
        historical backtests; the position is then sized with the Kelly formula.

        :param factor_object: ABuFactorBuyBases subclass instance
        :return: number of units (shares, lots, tons, contracts) to buy
        """
        # loss rate
        loss_rate = 1 - self.win_rate
        # position fraction from the Kelly formula
        kelly_pos = self.win_rate - loss_rate / (self.gains_mean / self.losses_mean)
        # still subject to the upper-level maximum-position limit, e.g. a full-position
        # Kelly result is cut back to 75%; change the maximum if you need a larger cap
        kelly_pos = self.pos_max if kelly_pos > self.pos_max else kelly_pos
        # result is the number of units (shares, lots, tons, contracts) to buy
        return self.read_cash * kelly_pos / self.bp * self.deposit_rate

    def _init_self(self, **kwargs):
        """Initialise the Kelly position-sizing settings.

        With the defaults, Kelly gives 0.5 - 0.5 / (0.10 / 0.05) = 0.25, i.e. a 25% position.
        """
        # default win rate 0.50
        self.win_rate = kwargs.pop('win_rate', 0.50)
        # default expected mean gain 0.10
        self.gains_mean = kwargs.pop('gains_mean', 0.10)
        # default expected mean loss 0.05
        self.losses_mean = kwargs.pop('losses_mean', 0.05)
from abupy import AbuKellyPosition
# 42d uses the AbuKellyPosition just written; 60d still uses the default position class, abupy's built-in AbuAtrPosition
buy_factors2 = [{'xd': 60, 'class': AbuFactorBuyBreak},
{'xd': 42, 'position': {'class': AbuKellyPosition, 'win_rate': metrics.win_rate,
'gains_mean': metrics.gains_mean, 'losses_mean': -metrics.losses_mean},
'class': AbuFactorBuyBreak}]
capital = AbuCapital(1000000, benchmark)
orders_pd, action_pd, all_fit_symbols_cnt = ABuPickTimeExecute.do_symbols_with_same_factors(choice_symbols,
benchmark,
buy_factors2,
sell_factors,
capital,
show=False)
orders_pd[:10].filter(['symbol', 'buy_cnt', 'buy_factor', 'buy_pos'])
# pick noah and sfun
target_symbols = ['usSFUN', 'usNOAH']
# for sfun, use only the 42d upward breakout as the buy factor
buy_factors_sfun = [{'xd': 42, 'class': AbuFactorBuyBreak}]
# for sfun, use only the 60d downward breakout as the sell factor
sell_factors_sfun = [{'xd': 60, 'class': AbuFactorSellBreak}]
# for noah, use only the 21d upward breakout as the buy factor
buy_factors_noah = [{'xd': 21, 'class': AbuFactorBuyBreak}]
# for noah, use only the 42d downward breakout as the sell factor
sell_factors_noah = [{'xd': 42, 'class': AbuFactorSellBreak}]
factor_dict = dict()
# build SFUN's own buy_factors / sell_factors dict
factor_dict['usSFUN'] = {'buy_factors': buy_factors_sfun,
'sell_factors': sell_factors_sfun}
# build NOAH's own buy_factors / sell_factors dict
factor_dict['usNOAH'] = {'buy_factors': buy_factors_noah,
'sell_factors': sell_factors_noah}
# initialise capital
capital = AbuCapital(1000000, benchmark)
# execute with do_symbols_with_diff_factors
orders_pd, action_pd, all_fit_symbols = ABuPickTimeExecute.do_symbols_with_diff_factors(target_symbols,
benchmark,
factor_dict,
capital)
pd.crosstab(orders_pd.buy_factor, orders_pd.symbol)
%%time
from abupy import AbuPickTimeMaster
capital = AbuCapital(1000000, benchmark)
orders_pd, action_pd, _ = AbuPickTimeMaster.do_symbols_with_same_factors_process(
choice_symbols, benchmark, buy_factors, sell_factors, capital,
n_process_kl=4, n_process_pick_time=4)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The previous chapters, whether covering strategy optimisation or adding slippage and commission to a backtest, always timed trades on a single stock.
Step2: 1. Timing multiple stocks with the same factors
Step3: Use the ABuPickTimeExecute.do_symbols_with_same_factors() function to apply the same buy factors and sell factors to multiple stocks
Step4: When the run finished, the IPython magic %%time measured the code block's runtime at 19.2 s; at the end of this section the same backtest is run in multi-process mode and compared against this time.
Step5: The buy_cnt column shows that the quantity traded differs each time, because the built-in money-management module sizes positions with ATR by default
Step6: Note that the deal column indicates whether a trade was actually filled; because of the built-in money-management module, not every trade signal ends up being executed.
Step11: 2. Implementing a custom position-sizing strategy
Step12: The custom position-sizing code is the AbuKellyPosition class above:
Step13: In the generated orders_pd, the buy pos column shows that every 42d breakout used AbuKellyPosition while 60d still used AbuAtrPosition
Step14: 3. Timing multiple stocks with different factors
Step15: The code below analyses the output orders_pd with a pandas crosstab to demonstrate this
Step16: 4. Using parallelism to improve timing-run efficiency
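The Kelly sizing used by AbuKellyPosition reduces to a one-line formula; a standalone sketch (the 75% cap and the defaults mirror the class above, while the clamping to zero for negative-edge inputs is an added safety assumption):

```python
def kelly_fraction(win_rate, gains_mean, losses_mean, pos_max=0.75):
    # Kelly: f = p - q / (expected gain / expected loss), capped at pos_max
    loss_rate = 1.0 - win_rate
    f = win_rate - loss_rate / (gains_mean / losses_mean)
    return min(max(f, 0.0), pos_max)
```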
|
10,675
|
<ASSISTANT_TASK:>
Python Code:
# ### uncomment below if you want...
# ## ... copious amounts of logging info
# import logging
# logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
# rootLogger = logging.getLogger()
# rootLogger.setLevel(logging.INFO)
# ## ... or auto-reload of gensim during development
# %load_ext autoreload
# %autoreload 2
import re
contractions = re.compile(r"'|-|\"")
# all non alphanumeric
symbols = re.compile(r'(\W+)', re.U)
# single character removal
singles = re.compile(r'(\s\S\s)', re.I|re.U)
# separators (any whitespace)
seps = re.compile(r'\s+')
# cleaner (order matters)
def clean(text):
text = text.lower()
text = contractions.sub('', text)
text = symbols.sub(r' \1 ', text)
text = singles.sub(' ', text)
text = seps.sub(' ', text)
return text
# sentence splitter
alteos = re.compile(r'([!\?])')
def sentences(l):
l = alteos.sub(r' \1 .', l).rstrip("(\.)*\n")
return l.split(".")
from zipfile import ZipFile
import json
def YelpReviews(label):
with ZipFile("yelp_%s_set.zip"%label, 'r') as zf:
with zf.open("yelp_%s_set/yelp_%s_set_review.json"%(label,label)) as f:
for line in f:
rev = json.loads(line)
yield {'y':rev['stars'],\
'x':[clean(s).split() for s in sentences(rev['text'])]}
YelpReviews("test").next()
revtrain = list(YelpReviews("training"))
print len(revtrain), "training reviews"
## and shuffle just in case they are ordered
import numpy as np
np.random.shuffle(revtrain)
def StarSentences(reviews, stars=[1,2,3,4,5]):
for r in reviews:
if r['y'] in stars:
for s in r['x']:
yield s
from gensim.models import Word2Vec
import multiprocessing
## create a w2v learner
basemodel = Word2Vec(
workers=multiprocessing.cpu_count(), # use your cores
iter=3, # iter = sweeps of SGD through the data; more is better
hs=1, negative=0 # we only have scoring for the hierarchical softmax setup
)
print basemodel
basemodel.build_vocab(StarSentences(revtrain))
from copy import deepcopy
starmodels = [deepcopy(basemodel) for i in range(5)]
for i in range(5):
slist = list(StarSentences(revtrain, [i+1]))
print i+1, "stars (", len(slist), ")"
starmodels[i].train( slist, total_examples=len(slist) )
"""docprob takes two lists
* docs: a list of documents, each of which is a list of sentences
* models: the candidate word2vec models (each potential class)

it returns the array of class probabilities. Everything is done in-memory.
"""
import pandas as pd # for quick summing within doc
def docprob(docs, mods):
# score() takes a list [s] of sentences here; could also be a sentence generator
sentlist = [s for d in docs for s in d]
# the log likelihood of each sentence in this review under each w2v representation
llhd = np.array( [ m.score(sentlist, len(sentlist)) for m in mods ] )
# now exponentiate to get likelihoods,
    lhd = np.exp(llhd - llhd.max(axis=0)) # subtract row max to avoid numeric overflow
# normalize across models (stars) to get sentence-star probabilities
prob = pd.DataFrame( (lhd/lhd.sum(axis=0)).transpose() )
# and finally average the sentence probabilities to get the review probability
prob["doc"] = [i for i,d in enumerate(docs) for s in d]
prob = prob.groupby("doc").mean()
return prob
# read in the test set
revtest = list(YelpReviews("test"))
# get the probs (note we give docprob a list of lists of words, plus the models)
probs = docprob( [r['x'] for r in revtest], starmodels )
%matplotlib inline
probpos = pd.DataFrame({"out-of-sample prob positive":probs[[3,4]].sum(axis=1),
"true stars":[r['y'] for r in revtest]})
probpos.boxplot("out-of-sample prob positive",by="true stars", figsize=(12,5))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, we define a super simple parser
Step2: And put everything together in a review generator that provides tokenized sentences and the number of stars for every review.
Step3: For example
Step4: Now, since the files are small we'll just read everything into in-memory lists. It takes a minute ...
Step5: Finally, write a function to generate sentences -- ordered lists of words -- from reviews that have certain star ratings
Step6: Word2Vec modeling
Step7: Build vocab from all sentences (you could also pre-train the base model from a neutral or un-labeled vocabulary)
Step8: Now, we will deep copy each base model and do star-specific training. This is where the big computations happen...
Step10: Inversion of the distributed representations
Step11: Test set example
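The probability step inside docprob is a numerically stabilised softmax over the class (star) axis; it can be sketched on a toy 2-class, 2-sentence log-likelihood array:

```python
import numpy as np

def class_probs(llhd):
    # rows = classes, cols = sentences; subtract the per-sentence max over
    # classes before exponentiating to avoid overflow, then normalise
    lhd = np.exp(llhd - llhd.max(axis=0))
    return lhd / lhd.sum(axis=0)

llhd = np.array([[-10.0, -3.0],
                 [-11.0, -1.0]])  # made-up log likelihoods
p = class_probs(llhd)
```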
|
10,676
|
<ASSISTANT_TASK:>
Python Code:
live, firsts, others = first.MakeFrames()
formula = 'totalwgt_lb ~ agepreg'
model = smf.ols(formula, data=live)
results = model.fit()
regression.SummarizeResults(results)
##Results are also available as parameters:
inter = results.params['Intercept']
slope = results.params['agepreg']
slope_pvalue = results.pvalues['agepreg']
results.rsquared
##this gives the p-value associated with the model as a whole
results.f_pvalue
residuals = results.resid
##this returns a sequence of values corresponding
##to agepreg.
fitted_values = results.fittedvalues
# results.summary() provides a lot of info
#the following is easier:
regression.SummarizeResults(results)
diff_weight = firsts.totalwgt_lb.mean() - others.totalwgt_lb.mean()
diff_weight
diff_age = firsts.agepreg.mean() - others.agepreg.mean()
diff_age
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: spurious - result for which there is no obvious mechanism that would explain it. e.g. why would first babies be lighter than others? Perhaps because mothers of first babies are younger...
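The slope and intercept that smf.ols reports are ordinary least-squares estimates; the same numbers can be recovered with a plain degree-1 polynomial fit on made-up (age, weight) points:

```python
import numpy as np

# made-up (agepreg, totalwgt_lb) pairs, only to show the mechanics
x = np.array([20.0, 25.0, 30.0, 35.0])
y = np.array([7.0, 7.2, 7.5, 7.6])

# a degree-1 polyfit is exactly the OLS slope/intercept
slope, inter = np.polyfit(x, y, 1)
```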
|
10,677
|
<ASSISTANT_TASK:>
Python Code:
from Bio import SeqIO
help(SeqIO)
# we show the first 3 only
from Bio import SeqIO
for i, seq_record in enumerate(SeqIO.parse("data/ls_orchid.fasta", "fasta")):
print(seq_record.id)
print(repr(seq_record.seq))
print(len(seq_record))
if i == 2:
break
#we show the frist 3
from Bio import SeqIO
for i, seq_record in enumerate(SeqIO.parse("data/ls_orchid.gbk", "genbank")):
print(seq_record.id)
print(seq_record.seq)
print(len(seq_record))
if i == 2:
break
from Bio import SeqIO
identifiers=[seq_record.id for seq_record in SeqIO.parse("data/ls_orchid.gbk", "genbank")][:10] # ten only
identifiers
record_iterator = SeqIO.parse("data/ls_orchid.fasta", "fasta")
first_record = next(record_iterator)
print(first_record.id)
print(first_record.description)
second_record = next(record_iterator)
print(second_record.id)
print(second_record.description)
from Bio import SeqIO
next(SeqIO.parse("data/ls_orchid.gbk", "genbank"))
from Bio import SeqIO
records = list(SeqIO.parse("data/ls_orchid.gbk", "genbank"))
print("Found %i records" % len(records))
print("The last record")
last_record = records[-1] #using Python's list tricks
print(last_record.id)
print(repr(last_record.seq))
print(len(last_record))
print("The first record")
first_record = records[0] #remember, Python counts from zero
print(first_record.id)
print(repr(first_record.seq))
print(len(first_record))
from Bio import SeqIO
record_iterator = SeqIO.parse("data/ls_orchid.gbk", "genbank")
first_record = next(record_iterator)
print(first_record)
print(first_record.annotations["source"])
print(first_record.annotations["organism"])
from Bio import SeqIO
all_species = []
for seq_record in SeqIO.parse("data/ls_orchid.gbk", "genbank"):
all_species.append(seq_record.annotations["organism"])
print(all_species[:10]) # we print only 10
from Bio import SeqIO
all_species = [seq_record.annotations["organism"]
for seq_record in SeqIO.parse("data/ls_orchid.gbk", "genbank")
]
print(all_species[:10])
from Bio import SeqIO
all_species = []
for seq_record in SeqIO.parse("data/ls_orchid.fasta", "fasta"):
all_species.append(seq_record.description.split()[1])
print(all_species[:10])
from Bio import SeqIO
all_species = [
    seq_record.description.split()[1]
    for seq_record in SeqIO.parse("data/ls_orchid.fasta", "fasta")]
print(all_species[:10])
from Bio import SeqIO
record_iterator = SeqIO.parse("data/ls_orchid.fasta", "fasta")
first_record = next(record_iterator)
first_record.id
first_record.id = "new_id"
first_record.id
from Bio import SeqIO
record_iterator = SeqIO.parse("data/ls_orchid.fasta", "fasta")
first_record = next(record_iterator)
first_record.id = "new_id"
first_record.description = first_record.id + " " + "desired new description"
print(first_record.format("fasta")[:200])
from Bio import SeqIO
print(sum(len(r) for r in SeqIO.parse("data/ls_orchid.gbk", "gb")))
from Bio import SeqIO
with open("data/ls_orchid.gbk") as handle:
print(sum(len(r) for r in SeqIO.parse(handle, "gb")))
from Bio import SeqIO
handle = open("data/ls_orchid.gbk")
print(sum(len(r) for r in SeqIO.parse(handle, "gb")))
handle.close()
import gzip
from Bio import SeqIO
with gzip.open("data/ls_orchid.gbk.gz", "rt") as handle:
print(sum(len(r) for r in SeqIO.parse(handle, "gb")))
import bz2
from Bio import SeqIO
with bz2.open("data/ls_orchid.gbk.bz2", "rt") as handle:
print(sum(len(r) for r in SeqIO.parse(handle, "gb")))
from Bio import Entrez
from Bio import SeqIO
Entrez.email = "A.N.Other@example.com"
with Entrez.efetch(
db="nucleotide", rettype="fasta", retmode="text", id="6273291"
) as handle:
seq_record = SeqIO.read(handle, "fasta")
print("%s with %i features" % (seq_record.id, len(seq_record.features)))
from Bio import Entrez
from Bio import SeqIO
Entrez.email = "A.N.Other@example.com"
with Entrez.efetch(
db="nucleotide", rettype="gb", retmode="text", id="6273291"
) as handle:
seq_record = SeqIO.read(handle, "gb") # using "gb" as an alias for "genbank"
print("%s with %i features" % (seq_record.id, len(seq_record.features)))
from Bio import Entrez
from Bio import SeqIO
Entrez.email = "A.N.Other@example.com"
with Entrez.efetch(
db="nucleotide", rettype="gb", retmode="text", id="6273291,6273290,6273289"
) as handle:
for seq_record in SeqIO.parse(handle, "gb"):
print("%s %s..." % (seq_record.id, seq_record.description[:50]))
print(
"Sequence length %i, %i features, from: %s"
% (
len(seq_record),
len(seq_record.features),
seq_record.annotations["source"],
)
)
from Bio import ExPASy
from Bio import SeqIO
with ExPASy.get_sprot_raw("O23729") as handle:
seq_record = SeqIO.read(handle, "swiss")
print(seq_record.id)
print(seq_record.name)
print(seq_record.description)
print(repr(seq_record.seq))
print("Length %i" % len(seq_record))
print(seq_record.annotations["keywords"])
from Bio import SeqIO
orchid_dict = SeqIO.to_dict(SeqIO.parse("data/ls_orchid.gbk", "genbank"))
len(orchid_dict)
list(orchid_dict.keys())[:10] #ten only
list(orchid_dict.values())[:5] # Ok not all at once...
seq_record = orchid_dict["Z78475.1"]
print(seq_record.description)
seq_record.seq
from Bio import SeqIO
orchid_dict = SeqIO.to_dict(SeqIO.parse("data/ls_orchid.fasta", "fasta"))
print(list(orchid_dict.keys())[:10])
def get_accession(record):
    """Given a SeqRecord, return the accession number as a string.

    e.g. "gi|2765613|emb|Z78488.1|PTZ78488" -> "Z78488.1"
    """
parts = record.id.split("|")
assert len(parts) == 5 and parts[0] == "gi" and parts[2] == "emb"
return parts[3]
from Bio import SeqIO
orchid_dict = SeqIO.to_dict(SeqIO.parse("data/ls_orchid.fasta", "fasta"), key_function=get_accession)
print(orchid_dict.keys())
print(list(orchid_dict.keys())[:10])
from Bio import SeqIO
from Bio.SeqUtils.CheckSum import seguid
for i, record in enumerate(SeqIO.parse("data/ls_orchid.gbk", "genbank")):
print(record.id, seguid(record.seq))
if i == 4: # OK, 5 is enough!
break
from Bio import SeqIO
from Bio.SeqUtils.CheckSum import seguid
seguid_dict = SeqIO.to_dict(SeqIO.parse("data/ls_orchid.gbk", "genbank"),
lambda rec : seguid(rec.seq))
record = seguid_dict["MN/s0q9zDoCVEEc+k/IFwCNF2pY"]
print(record.id)
print(record.description)
from Bio import SeqIO
orchid_dict = SeqIO.index("data/ls_orchid.gbk", "genbank")
len(orchid_dict)
print(list(orchid_dict.keys()))
seq_record = orchid_dict["Z78475.1"]
print(seq_record.description)
seq_record.seq
orchid_dict.close()
from Bio import SeqIO
orchid_dict = SeqIO.index("data/ls_orchid.fasta", "fasta")
len(orchid_dict)
print(list(orchid_dict.keys())[:10])
def get_acc(identifier):
    """Given a SeqRecord identifier string, return the accession number as a string.

    e.g. "gi|2765613|emb|Z78488.1|PTZ78488" -> "Z78488.1"
    """
parts = identifier.split("|")
assert len(parts) == 5 and parts[0] == "gi" and parts[2] == "emb"
return parts[3]
from Bio import SeqIO
orchid_dict = SeqIO.index("data/ls_orchid.fasta", "fasta", key_function=get_acc)
print(list(orchid_dict.keys()))
#Use this to download the file
!wget -c ftp://ftp.uniprot.org/pub/databases/uniprot/current_release/knowledgebase/complete/uniprot_sprot.dat.gz -O data/uniprot_sprot.dat.gz
!gzip -d data/uniprot_sprot.dat.gz
from Bio import SeqIO
uniprot = SeqIO.index("data/uniprot_sprot.dat", "swiss")
with open("selected.dat", "wb") as out_handle:
for acc in ["P33487", "P19801", "P13689", "Q8JZQ5", "Q9TRC7"]:
out_handle.write(uniprot.get_raw(acc))
# For illustration only, see reduced example below
$ rsync -avP "ftp.ncbi.nih.gov::genbank/gbvrl*.seq.gz" .
$ gunzip gbvrl*.seq.gz
# Reduced example, download only the first four chunks
$ curl -O ftp://ftp.ncbi.nih.gov/genbank/gbvrl1.seq.gz
$ curl -O ftp://ftp.ncbi.nih.gov/genbank/gbvrl2.seq.gz
$ curl -O ftp://ftp.ncbi.nih.gov/genbank/gbvrl3.seq.gz
$ curl -O ftp://ftp.ncbi.nih.gov/genbank/gbvrl4.seq.gz
$ gunzip gbvrl*.seq.gz
#this will download the files - Currently there are more than 16, but we will do only 4
import os
for i in range(1, 5):
os.system('wget ftp://ftp.ncbi.nih.gov/genbank/gbvrl%i.seq.gz -O data/gbvrl%i.seq.gz' % (i, i))
os.system('gzip -d data/gbvrl%i.seq.gz' % i)
files = ["data/gbvrl%i.seq" % i for i in range(1, 5)]
gb_vrl = SeqIO.index_db("data/gbvrl.idx", files, "genbank")
print("%i sequences indexed" % len(gb_vrl))
print(gb_vrl["AB811634.1"].description)
print(gb_vrl.get_raw("AB811634.1"))
from Bio import SeqIO
orchid_dict = SeqIO.index("data/ls_orchid.gbk", "genbank")
len(orchid_dict)
orchid_dict.close()
from Bio import SeqIO
orchid_dict = SeqIO.index("data/ls_orchid.gbk.bgz", "genbank")
len(orchid_dict)
orchid_dict.close()
from Bio import SeqIO
orchid_dict = SeqIO.index_db("data/ls_orchid.gbk.bgz.idx", "data/ls_orchid.gbk.bgz", "genbank")
len(orchid_dict)
orchid_dict.close()
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
rec1 = SeqRecord(
Seq(
"MMYQQGCFAGGTVLRLAKDLAENNRGARVLVVCSEITAVTFRGPSETHLDSMVGQALFGD" \
+"GAGAVIVGSDPDLSVERPLYELVWTGATLLPDSEGAIDGHLREVGLTFHLLKDVPGLISK" \
+"NIEKSLKEAFTPLGISDWNSTFWIAHPGGPAILDQVEAKLGLKEEKMRATREVLSEYGNM" \
+"SSAC",
),
id="gi|14150838|gb|AAK54648.1|AF376133_1",
description="chalcone synthase [Cucumis sativus]")
rec2 = SeqRecord(
Seq(
"YPDYYFRITNREHKAELKEKFQRMCDKSMIKKRYMYLTEEILKENPSMCEYMAPSLDARQ" \
+"DMVVVEIPKLGKEAAVKAIKEWGQ",
),
id="gi|13919613|gb|AAK33142.1|",
description="chalcone synthase [Fragaria vesca subsp. bracteata]")
rec3 = SeqRecord(
Seq(
"MVTVEEFRRAQCAEGPATVMAIGTATPSNCVDQSTYPDYYFRITNSEHKVELKEKFKRMC" \
+"EKSMIKKRYMHLTEEILKENPNICAYMAPSLDARQDIVVVEVPKLGKEAAQKAIKEWGQP" \
+"KSKITHLVFCTTSGVDMPGCDYQLTKLLGLRPSVKRFMMYQQGCFAGGTVLRMAKDLAEN" \
+"NKGARVLVVCSEITAVTFRGPNDTHLDSLVGQALFGDGAAAVIIGSDPIPEVERPLFELV" \
+"SAAQTLLPDSEGAIDGHLREVGLTFHLLKDVPGLISKNIEKSLVEAFQPLGISDWNSLFW" \
+"IAHPGGPAILDQVELKLGLKQEKLKATRKVLSNYGNMSSACVLFILDEMRKASAKEGLGT" \
+"TGEGLEWGVLFGFGPGLTVETVVLHSVAT",
),
id="gi|13925890|gb|AAK49457.1|",
description="chalcone synthase [Nicotiana tabacum]")
my_records = [rec1, rec2, rec3]
SeqIO.write(my_records, "data/my_example.faa", "fasta")
from Bio import SeqIO
records = SeqIO.parse("data/ls_orchid.gbk", "genbank")
count = SeqIO.write(records, "data/my_example.fasta", "fasta")
print("Converted %i records" % count)
from Bio import SeqIO
count = SeqIO.convert("data/ls_orchid.gbk", "genbank", "data/my_example.fasta", "fasta")
print("Converted %i records" % count)
from Bio import SeqIO
help(SeqIO.convert)
from Bio import SeqIO
for i, record in enumerate(SeqIO.parse("data/ls_orchid.gbk", "genbank")):
print(record.id)
print(record.seq.reverse_complement())
if i == 2: # 3 is enough
break
from Bio import SeqIO
records = [rec.reverse_complement(id="rc_"+rec.id, description = "reverse complement") \
for rec in SeqIO.parse("data/ls_orchid.fasta", "fasta")]
len(records)
records = [rec.reverse_complement(id="rc_"+rec.id, description = "reverse complement") \
for rec in SeqIO.parse("data/ls_orchid.fasta", "fasta") if len(rec)<700]
len(records)
records = (rec.reverse_complement(id="rc_"+rec.id, description = "reverse complement") \
for rec in SeqIO.parse("data/ls_orchid.fasta", "fasta") if len(rec)<700)
from Bio import SeqIO
records = (rec.reverse_complement(id="rc_"+rec.id, description = "reverse complement") \
for rec in SeqIO.parse("data/ls_orchid.fasta", "fasta") if len(rec)<700)
SeqIO.write(records, "data/rev_comp.fasta", "fasta")
from Bio import SeqIO
from io import StringIO
records = SeqIO.parse("data/ls_orchid.gbk", "genbank")
out_handle = StringIO()
SeqIO.write(records, out_handle, "fasta")
fasta_data = out_handle.getvalue()
print(fasta_data[:500])
from Bio import SeqIO
with open("data/ls_orchid_long.tab", "w") as out_handle:
for record in SeqIO.parse("data/ls_orchid.gbk", "genbank"):
if len(record) > 100:
out_handle.write(record.format("tab"))
from Bio import SeqIO
records = (rec for rec in SeqIO.parse("data/ls_orchid.gbk", "genbank") if len(rec) > 100)
SeqIO.write(records, "data/ls_orchid.tab", "tab")
from Bio.SeqIO.FastaIO import SimpleFastaParser
count = 0
total_len = 0
with open("data/ls_orchid.fasta") as in_handle:
for title, seq in SimpleFastaParser(in_handle):
count += 1
total_len += len(seq)
print("%i records with total sequence length %i" % (count, total_len))
# To output FASTA from the same strings (assumes an already-open out_handle):
# out_handle.write(">%s\n%s\n" % (title, seq))
from Bio.SeqIO.QualityIO import FastqGeneralIterator
count = 0
total_len = 0
with open("data/example.fastq") as in_handle:
for title, seq, qual in FastqGeneralIterator(in_handle):
count += 1
total_len += len(seq)
print("%i records with total sequence length %i" % (count, total_len))
# To output FASTQ from the same strings (assumes an already-open out_handle):
# out_handle.write("@%s\n%s\n+\n%s\n" % (title, seq, qual))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The 'catch' is that you have to work with SeqRecord objects (see Chapter 4), which contain a Seq object (Chapter 3) plus annotation like an identifier and description. Note that when dealing with very large FASTA or FASTQ files, the overhead of working with all these objects can make scripts too slow. In this case consider the low-level SimpleFastaParser and FastqGeneralIterator parsers which return just a tuple of strings for each record (see Section 5.6)
Step2: The above example is repeated from the introduction in Section 2.4, and will load the orchid DNA sequences in the FASTA format file ls_orchid.fasta. If instead you wanted to load a GenBank format file like ls_orchid.gbk then all you need to do is change the filename and the format string
Step3: Similarly, if you wanted to read in a file in another file format, then assuming Bio.SeqIO.parse() supports it you would just need to change the format string as appropriate, for example 'swiss' for SwissProt files or 'embl' for EMBL text files. There is a full listing on the wiki page (http
Step4: There are more examples using SeqIO.parse() in a list comprehension like this in Section 20.2 (e.g. for plotting sequence lengths or GC%).
Step5: Note that if you try to use next() and there are no more results, you'll get the special StopIteration exception.
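The next()/StopIteration behaviour is the ordinary Python iterator protocol, so a toy generator (no Biopython required; the record names here are just illustrative) shows the same pattern:

```python
def records(titles):
    """Toy stand-in for SeqIO.parse(): yields one item per "record"."""
    for t in titles:
        yield t

record_iter = records(["Z78533.1", "Z78532.1"])
first = next(record_iter)
second = next(record_iter)
exhausted = False
try:
    next(record_iter)  # no third record left
except StopIteration:
    exhausted = True
print(first, second, exhausted)
```

Biopython's parser behaves the same way: once the handle is exhausted, a further next() raises StopIteration.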
Step6: A word of warning here -- using the next() function like this will silently ignore any additional records in the file.
Step7: You can of course still use a for loop with a list of SeqRecord objects. Using a list is much more flexible than an iterator (for example, you can determine the number of records from the length of the list), but does need more memory because it will hold all the records in memory at once.
Step8: This gives a human readable summary of most of the annotation data for the SeqRecord. For this example we're going to use the .annotations attribute which is just a Python dictionary. The contents of this annotations dictionary were shown when we printed the record above. You can also print them out directly
Step9: In general, 'organism' is used for the scientific name (in Latin, e.g. Arabidopsis thaliana), while 'source' will often be the common name (e.g. thale cress).
Step10: Another way of writing this code is to use a list comprehension
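The same loop-versus-comprehension point can be made with plain dictionaries standing in for rec.annotations (toy organism names, not read from the file):

```python
# Toy annotation dicts standing in for the .annotations of parsed records
annotations = [
    {"organism": "Cypripedium irapeanum"},
    {"organism": "Cypripedium californicum"},
    {"organism": "Paphiopedilum barbatum"},
]
# One-line equivalent of the explicit for-loop that appends to a list
all_species = [ann["organism"] for ann in annotations]
print(all_species)
```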
Step11: Great. That was pretty easy because GenBank files are annotated in a standardised way.
Step12: The concise alternative using list comprehensions would be
Step13: In general, extracting information from the FASTA description line is not very nice.
Step14: Note, if you want to change the way FASTA is output when written to a file (see Section 5.5), then you should modify both the id and description attributes. To ensure the correct behaviour, it is best to include the id plus a space at the start of the desired description
Step15: 5.2 Parsing sequences from compressed files
Step16: Here we use a file handle instead, using the \verb|with| statement
Step17: Or, the old fashioned way where you manually close the handle
Step18: Now, suppose we have a gzip compressed file instead? These are very
Step19: Similarly if we had a bzip2 compressed file
Step20: There is a gzip (GNU Zip) variant called BGZF (Blocked GNU Zip Format), which can be treated like an ordinary gzip file for reading, but has advantages for random access later which we'll talk about later in Section 5.4.4.
Step21: The NCBI will also let you ask for the file in other formats, in particular as
Step22: Notice this time we have three features.
Step23: See Chapter 9 for more about the Bio.Entrez module, and make sure to read about the NCBI guidelines for using Entrez (Section 9.1).
Step24: 5.4 Sequence files as dictionaries
Step25: There is just one required argument for Bio.SeqIO.to_dict(), a list or generator giving SeqRecord objects.
Step26: Under Python 3 the dictionary methods like ".keys()" and ".values()" are iterators rather than lists.
Step27: We can access a single SeqRecord object via the keys and manipulate the object as normal
Step28: So, it is very easy to create an in memory 'database' of our GenBank records. Next we'll try this for the FASTA file instead.
Step30: You should recognise these strings from when we parsed the FASTA file earlier in Section 2.4.1. Suppose you would rather have something else as the keys - like the accession numbers. This brings us nicely to SeqIO.to_dict()'s optional argument key_function, which lets you define what to use as the dictionary key for your records.
Step31: Then we can give this function to the SeqIO.to_dict() function to use in building the dictionary
Step32: Finally, as desired, the new dictionary keys
Step33: Not too complicated, I hope!
Step34: Now, recall the Bio.SeqIO.to_dict() function's key_function argument expects a function which turns a SeqRecord into a string. We can't use the seguid() function directly because it expects to be given a Seq object (or a string). However, we can use Python's lambda feature to create a 'one off' function to give to Bio.SeqIO.to_dict() instead
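The lambda trick is just "key the dictionary by a derived value"; a minimal to_dict stand-in with a toy checksum (the real code uses seguid(rec.seq)) makes the mechanics explicit:

```python
def to_dict(records, key_function):
    """Minimal stand-in for SeqIO.to_dict(): key each record by key_function."""
    return {key_function(rec): rec for rec in records}

# Toy (id, sequence) pairs and a toy checksum in place of seguid
records = [("Z78533.1", "ACGT"), ("Z78532.1", "GGCC")]
by_checksum = to_dict(records, lambda rec: "sum-%d" % sum(ord(c) for c in rec[1]))
print(sorted(by_checksum))
```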
Step35: That should have retrieved the record Z78532.1, the second entry in the file.
Step36: Note that Bio.SeqIO.index() won’t take a handle, but only a filename. There are good reasons for this, but it is a little technical. The second argument is the file format (a lower case string as used in the other Bio.SeqIO functions). You can use many other simple file formats, including FASTA and FASTQ files (see the example in Section 20.1.11). However, alignment formats like PHYLIP or Clustal are not supported. Finally as an optional argument you can supply a key function.
Step38: 5.4.2.1 Specifying the dictionary keys
Step39: Then we can give this function to the Bio.SeqIO.index()
Step40: Easy when you know how?
Step41: Note with Python 3 onwards, we have to open the file for writing in binary mode because the get_raw() method returns bytes strings.
Step42: Unless you care about viruses, that’s a lot of data to download just for this example - so let’s download just the first four chunks (about 25MB each compressed), and decompress them (taking in all about 1GB of space)
Step43: Now, in Python, index these GenBank files as follows
Step44: Indexing the full set of virus GenBank files took about ten minutes on my machine, just the first four files took about a minute or so.
Step45: 5.4.3.1 Getting the raw data for a record
Step46: 5.4.4 Indexing compressed files
Step47: You could compress this (while keeping the original file) at the command line as follows
Step48: or
Step49: The SeqIO indexing automatically detects the BGZF compression. Note
Step50: Now we have a list of SeqRecord objects, we'll write them to a FASTA format file
Step51: And if you open this file in your favourite text editor it should look like this
Step52: Still, that is a little bit complicated. So, because file conversion is such a common task, there is a helper function letting you replace that with just two lines of code
Step53: The Bio.SeqIO.convert() function will take handles or filenames.
Step54: In principle, just by changing the filenames and the format names, this code could be used to convert between any file formats available in Biopython. However, writing some formats requires information (e.g. quality scores) which other file formats don’t contain. For example, while you can turn a FASTQ file into a FASTA file, you can’t do the reverse. See also Sections 20.1.9 and 20.1.10 in the cookbook chapter which look at inter-converting between different FASTQ formats.
Step55: Now, if we want to save these reverse complements to a file, we’ll need to make SeqRecord objects. We can use the SeqRecord object’s built in .reverse_complement() method (see Section 4.9) but we must decide how to name our new records.
Step56: Now list comprehensions have a nice trick up their sleeves, you can add a conditional statement
Step57: That would create an in memory list of reverse complement records where the sequence length was under 700 base pairs. However, we can do exactly the same with a generator expression - but with the advantage that this does not create a list of all the records in memory at once
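The list-versus-generator distinction is plain Python; with dummy record lengths you can see that the generator is lazy while the comprehension is materialised up front:

```python
lengths = [592, 753, 740, 699, 680]
# List comprehension: all qualifying items built in memory at once
short_list = [n for n in lengths if n < 700]
# Generator expression: items produced one at a time, on demand
short_gen = (n for n in lengths if n < 700)

first_short = next(short_gen)   # only one item produced so far
rest = list(short_gen)          # the remainder, on request
print(short_list, first_short, rest)
```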
Step58: As a complete example
Step59: There is a related example in Section 20.1.3, translating each record in a FASTA file from nucleotides to amino acids.
Step60: This isn’t entirely straightforward the first time you see it! On the bright side, for the special case where you would like a string containing a single record in a particular file format, use the the SeqRecord class’ format() method (see Section 4.6).
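The StringIO pattern is standard-library io; a reduced sketch writing FASTA-style lines for toy records shows how the in-memory "file" accumulates:

```python
from io import StringIO

out_handle = StringIO()
for rec_id, seq in [("Z78533.1", "ACGT"), ("Z78532.1", "GGCC")]:
    out_handle.write(">%s\n%s\n" % (rec_id, seq))
fasta_data = out_handle.getvalue()  # the whole "file" as a single string
print(fasta_data)
```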
Step61: While this style of code will work for a simple sequential file format like FASTA or the simple tab separated format used here, it will not work for more complex or interlaced file formats. This is why we still recommend using Bio.SeqIO.write(), as in the following example
Step62: Making a single call to SeqIO.write(...) is also much quicker than making multiple calls to the SeqRecord.format(...) method.
Step63: As long as you don’t care about line wrapping (and you probably don’t for short read high-throughput data), then outputting FASTA format from these strings is also very fast
Step64: Likewise, when parsing FASTQ files, internally Bio.SeqIO.parse() calls the low-level FastqGeneralIterator with the file handle. If you don’t need the quality scores turned into integers, or can work with them as ASCII strings this is ideal
Step65: There are more examples of this in the Cookbook (Chapter 20), including how to output FASTQ efficiently from strings using this code snippet
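For completeness, here is that string-only FASTQ output idea as a runnable fragment (made-up read name and qualities):

```python
title = "read_1"
seq = "ACGTACGT"
qual = "IIIIHHHH"  # ASCII-encoded qualities, left as a plain string
# A four-line FASTQ record built directly from strings, no SeqRecord objects
fastq_record = "@%s\n%s\n+\n%s\n" % (title, seq, qual)
print(fastq_record, end="")
```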
|
10,678
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import time
import itertools
import h5py
import numpy as np
from scipy.stats import norm
from scipy.stats import expon
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import seaborn as sns
sns.set(style="ticks", color_codes=True, font_scale=1.5)
sns.set_style({"xtick.direction": "in", "ytick.direction": "in"})
h5file = "data/cossio_kl1.3_Dx1_Dq1.h5"
f = h5py.File(h5file, 'r')
data = np.array(f['data'])
f.close()
fig, ax = plt.subplots(2,1,figsize=(12,3), sharex=True,sharey=False)
ax[0].plot(data[:,0],data[:,1],'.', markersize=1)
ax[1].plot(data[:,0],data[:,2],'g.', markersize=1)
ax[0].set_ylim(-10,10)
ax[1].set_xlim(0,25000)
ax[0].set_ylabel('x')
ax[1].set_ylabel('q')
ax[1].set_xlabel('Time')
plt.tight_layout(h_pad=0)
fig, ax = plt.subplots(figsize=(6,4))
hist, bin_edges = np.histogram(data[:,1], bins=np.linspace(-7,7,20), \
density=True)
bin_centers = [0.5*(bin_edges[i]+bin_edges[i+1]) \
for i in range(len(bin_edges)-1)]
ax.plot(bin_centers, -np.log(hist),label="x")
hist, bin_edges = np.histogram(data[:,2], bins=np.linspace(-7,7,20), \
density=True)
bin_centers = [0.5*(bin_edges[i]+bin_edges[i+1]) \
for i in range(len(bin_edges)-1)]
ax.plot(bin_centers, -np.log(hist),label="q")
ax.set_xlim(-7,7)
ax.set_ylim(1,9)
#ax.set_xlabel('x')
ax.set_ylabel('PMF ($k_BT$)')
ax.legend()
H, x_edges, y_edges = np.histogram2d(data[:,1],data[:,2], \
bins=[np.linspace(-7,7,20), np.linspace(-7,7,20)])
fig, ax = plt.subplots(figsize=(6,5))
pmf = -np.log(H.transpose())
pmf -= np.min(pmf)
cs = ax.contourf(pmf, extent=[x_edges.min(), x_edges.max(), \
y_edges.min(), y_edges.max()], \
cmap=cm.rainbow, levels=np.arange(0, 6,0.5))
cbar = plt.colorbar(cs)
ax.set_xlim(-7,7)
ax.set_ylim(-7,7)
ax.set_yticks(range(-5,6,5))
ax.set_xlabel('$x$', fontsize=20)
ax.set_ylabel('$q$', fontsize=20)
plt.tight_layout()
from scipy.stats import binned_statistic_2d
statistic, x_edge, y_edge, binnumber = \
binned_statistic_2d(data[:,1],data[:,2],None,'count', \
bins=[np.linspace(-7,7,20), np.linspace(-7,7,20)])
fig, ax = plt.subplots(figsize=(6,5))
grid = ax.imshow(-np.log(statistic.transpose()),origin="lower",cmap=plt.cm.rainbow)
cbar = plt.colorbar(grid)
ax.set_yticks(range(0,20,5))
ax.set_xticks(range(0,20,5))
ax.set_xlabel('$x_{bin}$', fontsize=20)
ax.set_ylabel('$q_{bin}$', fontsize=20)
plt.tight_layout()
fig,ax=plt.subplots(3,1,figsize=(12,6),sharex=True)
plt.subplots_adjust(wspace=0, hspace=0)
ax[0].plot(range(0,len(data[:,1])),data[:,1])
ax[1].plot(range(0,len(data[:,2])),data[:,2],color="g")
ax[2].plot(binnumber)
ax[0].set_ylabel('x')
ax[1].set_ylabel('q')
ax[2].set_ylabel("s")
ax[2].set_xlabel("time (ps)")
ax[2].set_xlim(0,2000)
from mastermsm.trajectory import traj
from mastermsm.msm import msm
distraj = traj.TimeSeries(distraj=list(binnumber), dt=1)
distraj.find_keys()
distraj.keys.sort()
msm_2D = msm.SuperMSM([distraj])
for i in [1, 2, 5, 10, 20, 50, 100]:
msm_2D.do_msm(i)
msm_2D.msms[i].do_trans(evecs=True)
msm_2D.msms[i].boots()
fig, ax = plt.subplots()
for i in range(5):
tau_vs_lagt = np.array([[x,msm_2D.msms[x].tauT[i],msm_2D.msms[x].tau_std[i]] \
for x in sorted(msm_2D.msms.keys())])
ax.errorbar(tau_vs_lagt[:,0],tau_vs_lagt[:,1],fmt='o-', yerr=tau_vs_lagt[:,2], markersize=10)
#ax.fill_between(10**np.arange(-0.2,3,0.2), 1e-1, 10**np.arange(-0.2,3,0.2), facecolor='lightgray')
ax.set_xlabel(r'$\Delta$t [ps]', fontsize=16)
ax.set_ylabel(r'$\tau$ [ps]', fontsize=16)
ax.set_xlim(0.8,200)
ax.set_ylim(1,1000)
ax.set_yscale('log')
_ = ax.set_xscale('log')
lt=2
plt.figure()
plt.imshow(msm_2D.msms[lt].trans, interpolation='none', \
cmap='viridis_r',origin="lower")
plt.ylabel('$\it{i}$')
plt.xlabel('$\it{j}$')
plt.colorbar()
plt.figure()
plt.imshow(np.log(msm_2D.msms[lt].trans), interpolation='none', \
cmap='viridis_r',origin="lower")
plt.ylabel('$\it{i}$')
plt.xlabel('$\it{j}$')
plt.colorbar()
fig, ax = plt.subplots()
ax.errorbar(range(1,12),msm_2D.msms[lt].tauT[0:11], fmt='o-', \
yerr= msm_2D.msms[lt].tau_std[0:11], ms=10)
ax.set_xlabel('Eigenvalue')
ax.set_ylabel(r'$\tau_i$ [ns]')
fig, ax = plt.subplots(figsize=(10,4))
ax.plot(msm_2D.msms[2].rvecsT[:,1])
ax.fill_between(range(len(msm_2D.msms[lt].rvecsT[:,1])), 0, \
msm_2D.msms[lt].rvecsT[:,1], \
where=msm_2D.msms[lt].rvecsT[:,1]>0,\
facecolor='c', interpolate=True,alpha=.4)
ax.fill_between(range(len(msm_2D.msms[lt].rvecsT[:,1])), 0, \
msm_2D.msms[lt].rvecsT[:,1], \
where=msm_2D.msms[lt].rvecsT[:,1]<0,\
facecolor='g', interpolate=True,alpha=.4)
ax.set_ylabel("$\Psi^R_1$")
plt.show()
fig,ax = plt.subplots(1,2,figsize=(10,5),sharey=True,sharex=True)
rv_mat = np.zeros((20,20), float)
for i in [x for x in zip(msm_2D.msms[lt].keep_keys, \
msm_2D.msms[lt].rvecsT[:,1])]:
unr_ind=np.unravel_index(i[0],(21,21))
rv_mat[unr_ind[0]-1,unr_ind[1]-1] = -i[1]
ax[0].imshow(rv_mat.transpose(), interpolation="none", \
cmap='bwr',origin="lower")
ax[1].imshow(-np.log(statistic.transpose()), \
cmap=plt.cm.rainbow,origin="lower")
ax[1].set_yticks(range(0,20,5))
ax[1].set_xticks(range(0,20,5))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here we load the data obtained from Brownian Dynamics simulations of isotropic diffusion on a 2D potential.
Step2: Trajectory analysis
Step3: Representation of the bistable 2D free energy surface as a function of the measured q and molecular x extensions
Step4: Assignment
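The assignment step just maps each (x, q) pair to a 2-D bin label; a stdlib-only sketch of uniform binning (toy points, 19 bins as in np.linspace(-7, 7, 20), not scipy's binned_statistic_2d) illustrates it:

```python
def bin_index(value, lo, hi, nbins):
    """Uniform-bin index in [0, nbins - 1], clipping values at the edges."""
    i = int((value - lo) / (hi - lo) * nbins)
    return max(0, min(nbins - 1, i))

# Toy (x, q) trajectory points; the real data come from the simulation file
points = [(-6.5, -6.0), (0.1, 0.3), (6.8, 5.9)]
nbins = 19  # np.linspace(-7, 7, 20) defines 19 bins
labels = [(bin_index(x, -7, 7, nbins), bin_index(q, -7, 7, nbins))
          for x, q in points]
print(labels)
```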
Step5: Master Equation Model
Step6: Convergence Test
Step7: There is no dependency of the relaxation times $\tau$ on the lag time $\Delta t$.
Step8: The first mode, captured by $\lambda_1$, is significantly slower than the others. That mode, described by the right eigenvector $\psi^R_1$, corresponds to the transition of the protein between the folded and unfolded states.
Step9: The projection of $\psi^R_1$ on the 2D grid shows the transitions between the two conformational states (red and blue).
|
10,679
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
%matplotlib inline
def select_cols(df):
cols = ['cntry', 'tvtot', 'tvpol', 'rdtot', 'rdpol', 'nwsptot', 'nwsppol', 'netuse',
'rlgblg', 'rlgdgr', 'eduyrs', 'hinctnta', 'yrbrn', 'eisced', 'pspwght', 'pweight']
df = df[cols]
return df
df1 = pd.read_stata('ESS1e06_4.dta', convert_categoricals=False)
df1['hinctnta'] = df1.hinctnt
df1 = select_cols(df1)
df1.head()
df2 = pd.read_stata('ESS2e03_4.dta', convert_categoricals=False)
df2['hinctnta'] = df2.hinctnt
df2 = select_cols(df2)
df2.head()
df3 = pd.read_stata('ESS3e03_5.dta', convert_categoricals=False)
df3['hinctnta'] = df3.hinctnt
df3 = select_cols(df3)
df3.head()
df4 = pd.read_stata('ESS4e04_3.dta', convert_categoricals=False)
df4 = select_cols(df4)
df4.head()
df5 = pd.read_stata('ESS5e03_2.dta', convert_categoricals=False)
df5 = select_cols(df5)
df5.head()
df = pd.concat([df1, df2, df3, df4, df5], ignore_index=True)
df.head()
df.tvtot.replace([77, 88, 99], np.nan, inplace=True)
df.tvtot.value_counts().sort_index()
df.rdtot.replace([77, 88, 99], np.nan, inplace=True)
df.rdtot.value_counts().sort_index()
df.nwsptot.replace([77, 88, 99], np.nan, inplace=True)
df.nwsptot.value_counts().sort_index()
df.tvpol.replace([66, 77, 88, 99], np.nan, inplace=True)
df.tvpol.value_counts().sort_index()
df.rdpol.replace([66, 77, 88, 99], np.nan, inplace=True)
df.rdpol.value_counts().sort_index()
df.nwsppol.replace([66, 77, 88, 99], np.nan, inplace=True)
df.nwsppol.value_counts().sort_index()
df.netuse.replace([77, 88, 99], np.nan, inplace=True)
df.netuse.value_counts().sort_index()
df.rlgblg.replace([7, 8, 9], np.nan, inplace=True)
df.rlgblg.value_counts().sort_index()
df.rlgdgr.replace([77, 88, 99], np.nan, inplace=True)
df.rlgdgr.value_counts().sort_index()
df.hinctnta.replace([77, 88, 99], np.nan, inplace=True)
df.hinctnta.value_counts().sort_index()
df['hinctnta5'] = df.hinctnta - 5
df.hinctnta5.describe()
df.yrbrn.replace([7777, 8888, 9999], np.nan, inplace=True)
df.yrbrn.describe()
df['yrbrn60'] = df.yrbrn - 1960
df.yrbrn60.describe()
df.eduyrs.replace([77, 88, 99], np.nan, inplace=True)
df.loc[df.eduyrs > 25, 'eduyrs'] = 25
df.eduyrs.value_counts().sort_index()
df.eduyrs.describe()
df['eduyrs12'] = df.eduyrs - 12
df.eduyrs12.describe()
df.cntry.value_counts().sort_index()
df['hasrelig'] = (df.rlgblg==1).astype(int)
def run_model(df, formula):
model = smf.logit(formula, data=df)
results = model.fit(disp=False)
return results
formula = ('hasrelig ~ yrbrn60 + eduyrs12 + hinctnta5 +'
'tvtot + tvpol + rdtot + rdpol + nwsptot + nwsppol + netuse')
res = run_model(df, formula)
res.summary()
formula = ('hasrelig ~ yrbrn60 + eduyrs12 + hinctnta5 +'
'tvtot + rdtot + nwsptot + netuse')
res = run_model(df, formula)
res.summary()
def fill_var(df, var):
fill = df[var].dropna().sample(len(df), replace=True)
fill.index = df.index
df[var].fillna(fill, inplace=True)
fill_var(df, var='hinctnta5')
formula = ('hasrelig ~ yrbrn60 + eduyrs12 + hinctnta5 +'
'tvtot + rdtot + nwsptot + netuse')
res = run_model(df, formula)
res.summary()
def extract_res(res, var='netuse'):
param = res.params[var]
pvalue = res.pvalues[var]
stars = '**' if pvalue < 0.01 else '*' if pvalue < 0.05 else ''
return res.nobs, param, stars
extract_res(res)
formula = ('rlgdgr ~ yrbrn60 + eduyrs12 + hinctnta5 +'
'tvtot + rdtot + nwsptot + netuse')
model = smf.ols(formula, data=df)
res = model.fit(disp=False)
res.summary()
grouped = df.groupby('cntry')
for name, group in grouped:
print(name, len(group))
gb = grouped.get_group('DK')
run_model(gb, formula).summary()
for name, group in grouped:
try:
fill_var(group, var='hinctnta5')
res = run_model(group, formula)
nobs, param, stars = extract_res(res)
arrow = '<--' if stars and param > 0 else ''
print(name, len(group), nobs, '%0.3g'%param, stars, arrow, sep='\t')
except:
print(name, len(group), ' ', 'NA', sep='\t')
group = grouped.get_group('FR')
len(group)
for col in group.columns:
print(col, sum(group[col].isnull()))
fill_var(group, 'hinctnta5')
formula
res = run_model(group, formula)
res.summary()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The following function selects the columns I need.
Step2: Read data from Cycle 1.
Step3: Read data from Cycle 2.
Step4: Read data from Cycle 3.
Step5: Read data from Cycle 4.
Step6: Read data from Cycle 5.
Step7: Concatenate the cycles.
Step8: TV watching time on average weekday
Step9: Radio listening, total time on average weekday.
Step10: Newspaper reading, total time on average weekday.
Step11: TV watching
Step12: Radio listening
Step13: Newspaper reading
Step14: Personal use of Internet, email, www
Step15: Belong to a particular religion or denomination
Step16: How religious
Step17: Total household net income, all sources
Step18: Shift income to mean near 0.
Step19: Year born
Step20: Shifted to mean near 0
Step21: Number of years of education
Step22: There are a bunch of really high values for eduyrs, need to investigate.
Step23: Shift to mean near 0
Step24: Country codes
Step25: Make a binary dependent variable
Step26: Run the model
Step27: Here's the model with all control variables and all media variables
Step28: Most of the media variables are not statistically significant. If we drop the politial media variables, we get a cleaner model
Step29: And if we fill missing values for income, cleaner still.
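The fill_var idea — replace each missing entry with a random draw from the observed values — can be sketched without pandas (toy income values; the real code uses pandas .sample with replacement):

```python
import random

def fill_missing(values, seed=0):
    """Replace None entries with random draws from the observed values."""
    rng = random.Random(seed)
    observed = [v for v in values if v is not None]
    return [v if v is not None else rng.choice(observed) for v in values]

incomes = [3, None, 7, None, 5]
filled = fill_missing(incomes)
print(filled)
```

Because each fill is drawn from the empirical distribution, the marginal distribution of the filled column matches the observed one, at the cost of ignoring correlations with other variables.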
Step30: Now all variables have small p-values. All parameters have the expected signs
Step31: Running a similar model with degree of religiosity.
Step32: Group by country
Step33: Run a sample country
Step34: Run all countries
Step35: In more than half of the countries, the association between Internet use and religious affiliation is statistically significant. In all except two, the association is negative.
|
10,680
|
<ASSISTANT_TASK:>
Python Code:
import datetime
import matplotlib.pyplot as plt
import pandas as pd
import pinkfish as pf
import strategy
# Format price data.
pd.options.display.float_format = '{:0.2f}'.format
pd.set_option('display.max_rows', None)
%matplotlib inline
# Set size of inline plots
'''note: rcParams can't be in same cell as import matplotlib
or %matplotlib inline
%matplotlib notebook: will lead to interactive plots embedded within
the notebook, you can zoom and resize the figure
%matplotlib inline: only draw static images in the notebook
'''
plt.rcParams["figure.figsize"] = (10, 7)
# Symbol Lists
BitCoin = ['BTC-USD']
CryptoCurrencies_2016 = ['BTC-USD', 'ETH-USD', 'XRP-USD', 'LTC-USD',
'XEM-USD', 'DASH-USD', 'MAID-USD', 'LSK-USD', 'DOGE-USD']
# 'DAO-USD' is a dead coin, so missing from above
CryptoCurrencies_2017 = ['BTC-USD', 'ETH-USD', 'XRP-USD', 'LTC-USD', 'ETC-USD',
'XEM-USD', 'MIOTA-USD', 'DASH-USD', 'BTS-USD']
# 'STRAT-USD' last trade date is 2020-11-18, so removed
CryptoCurrencies_2018 = ['BTC-USD', 'ETH-USD', 'XRP-USD', 'BCH-USD', 'EOS-USD',
'LTC-USD', 'XLM-USD', 'ADA-USD', 'TRX-USD', 'MIOTA-USD']
CryptoCurrencies_2019 = ['BTC-USD', 'ETH-USD', 'XRP-USD', 'LTC-USD', 'BCH-USD',
'EOS-USD', 'BNB-USD', 'USDT-USD', 'BSV-USD', 'CRO-USD']
Stocks_Bonds_Gold_Crypto = ['SPY', 'QQQ', 'TLT', 'GLD', 'BTC-USD']
# Set 'continuous_timeseries' : False (for mixed asset classes)
start_1900 = datetime.datetime(1900, 1, 1)
start_2016 = datetime.datetime(2016, 6, 26)
start_2017 = datetime.datetime(2017, 6, 25)
start_2018 = datetime.datetime(2018, 6, 24)
start_2019 = datetime.datetime(2019, 6, 30)
# Pick one of the above symbols and start pairs
symbols = CryptoCurrencies_2016
start = start_2016
capital = 10000
end = datetime.datetime.now()
# NOTE: Cryptocurrencies have 7 days a week timeseries. You can test them with
# their entire timeseries by setting stock_market_calendar=False. Alternatively,
# to trade with stock market calendar by setting stock_market_calendar=True.
# For mixed asset classes that include stocks or ETFs, you must set
# stock_market_calendar=True.
options = {
'use_adj' : False,
'use_cache' : True,
'use_continuous_calendar' : False,
'force_stock_market_calendar' : True,
'stop_loss_pct' : 1.0,
'margin' : 1,
'lookback' : 1,
'sma_timeperiod': 20,
'sma_pct_band': 3,
'use_regime_filter' : False,
'use_vola_weight' : True
}
s = strategy.Strategy(symbols, capital, start, end, options=options)
s.run()
s.rlog.head()
s.tlog.head()
s.dbal.tail()
pf.print_full(s.stats)
weights = {symbol: 1 / len(symbols) for symbol in symbols}
totals = s.portfolio.performance_per_symbol(weights=weights)
totals
corr_df = s.portfolio.correlation_map(s.ts)
corr_df
benchmark = pf.Benchmark('BTC-USD', s.capital, s.start, s.end, use_adj=True)
benchmark.run()
pf.plot_equity_curve(s.dbal, benchmark=benchmark.dbal)
df = pf.plot_bar_graph(s.stats, benchmark.stats)
df
kelly = pf.kelly_criterion(s.stats, benchmark.stats)
kelly
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Some global data
Step2: Run Strategy
Step3: View log DataFrames
Step4: Generate strategy stats - display all available stats
Step5: View Performance by Symbol
Step6: Run Benchmark, Retrieve benchmark logs, and Generate benchmark stats
Step7: Plot Equity Curves
Step8: Bar Graph
Step9: Analysis
|
10,681
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet
from mne.stats import permutation_cluster_1samp_test
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
tmin, tmax, event_id = -0.3, 0.6, 1
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
include = []
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, include=include, exclude='bads')
# Load condition 1
event_id = 1
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, eog=150e-6))
# Take only one channel
ch_name = 'MEG 1332'
epochs.pick_channels([ch_name])
evoked = epochs.average()
# Factor to down-sample the temporal dimension of the TFR computed by
# tfr_morlet. Decimation occurs after frequency decomposition and can
# be used to reduce memory usage (and possibly computational time of downstream
# operations such as nonparametric statistics) if you don't need high
# spectrotemporal resolution.
decim = 5
frequencies = np.arange(8, 40, 2) # define frequencies of interest
sfreq = raw.info['sfreq'] # sampling in Hz
tfr_epochs = tfr_morlet(epochs, frequencies, n_cycles=4., decim=decim,
average=False, return_itc=False, n_jobs=1)
# Baseline power
tfr_epochs.apply_baseline(mode='logratio', baseline=(-.100, 0))
# Crop in time to keep only what is between 0 and 400 ms
evoked.crop(0., 0.4)
tfr_epochs.crop(0., 0.4)
epochs_power = tfr_epochs.data[:, 0, :, :] # take the 1 channel
threshold = 2.5
T_obs, clusters, cluster_p_values, H0 = \
permutation_cluster_1samp_test(epochs_power, n_permutations=100,
threshold=threshold, tail=0)
evoked_data = evoked.data
times = 1e3 * evoked.times
plt.figure()
plt.subplots_adjust(0.12, 0.08, 0.96, 0.94, 0.2, 0.43)
# Create new stats image with only significant clusters
T_obs_plot = np.nan * np.ones_like(T_obs)
for c, p_val in zip(clusters, cluster_p_values):
if p_val <= 0.05:
T_obs_plot[c] = T_obs[c]
vmax = np.max(np.abs(T_obs))
vmin = -vmax
plt.subplot(2, 1, 1)
plt.imshow(T_obs, cmap=plt.cm.gray,
extent=[times[0], times[-1], frequencies[0], frequencies[-1]],
aspect='auto', origin='lower', vmin=vmin, vmax=vmax)
plt.imshow(T_obs_plot, cmap=plt.cm.RdBu_r,
extent=[times[0], times[-1], frequencies[0], frequencies[-1]],
aspect='auto', origin='lower', vmin=vmin, vmax=vmax)
plt.colorbar()
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title('Induced power (%s)' % ch_name)
ax2 = plt.subplot(2, 1, 2)
evoked.plot(axes=[ax2])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Compute statistic
Step3: View time-frequency plots
|
10,682
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
white_house = pd.read_csv("../data/2015_white_house.csv")
print(white_house.shape)
print(white_house.iloc[-1])
white_house
%matplotlib notebook
import matplotlib.pyplot as plt
plt.hist(white_house["Salary"])
plt.show()
# Calculate the length of each employee's title and add to the DataFrame
white_house['LengthOfTitle'] = white_house['Position Title'].apply(len)
white_house.head()
# Plot the length of employee title versus salary to look for correlation
plt.plot(white_house['LengthOfTitle'], white_house['Salary'])
plt.title('How does length of employee titles correlate to salary?')
plt.xlabel('Length of Employee Title')
plt.ylabel('Salary ($)')
# Get the values in Pay Basis and figure out how many unique ones there are
types_of_pay_basis = set(white_house['Pay Basis'])
types_of_pay_basis
# Compute pairwise correlation of columns, excluding NA/null values
correlations = white_house.corr()
correlations
# Linear Regression using ordinary least squares
import statsmodels.api as sm
model = sm.OLS(white_house['Salary'], white_house['LengthOfTitle'])
residuals = model.fit()
print(residuals.summary())
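# --- Hedged addition (not in the original notebook): sm.OLS does not add an
# intercept automatically, so the regression above is forced through the
# origin. Prepending a column of ones (what sm.add_constant does) fits a
# proper intercept; sketched here with plain numpy least squares on toy
# data, so the *_demo names are illustrative only.
import numpy as np
x_demo = np.array([1.0, 2.0, 3.0, 4.0])
y_demo = 2.0 * x_demo + 5.0
X_demo = np.column_stack([np.ones_like(x_demo), x_demo])  # intercept column of ones
coef_demo, *_ = np.linalg.lstsq(X_demo, y_demo, rcond=None)
# coef_demo is approximately [5.0, 2.0], i.e. (intercept, slope)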
total_salary = sum(white_house['Salary'])
total_salary
highest_paid = white_house[white_house['Salary'] == max(white_house['Salary'])]
highest_paid
lowest_paid = white_house[white_house['Salary'] == min(white_house['Salary'])]
lowest_paid
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So far we have imported a dataset from a CSV file into a Pandas DataFrame using the read_csv() function. Then we displayed the data, first as a table, and second as a histogram.
Step2: Uh ok, maybe I was wrong about visualizing being great for detecting correlation ;-)
Step3: Ok, only one pay basis, annually. So that wasn't an issue.
Step4: So yea, there is a real positive correlation between length of employee title and salary!
Step5: The white house pays about $40 Million per year in total salary.
|
10,683
|
<ASSISTANT_TASK:>
Python Code:
!scala -version
from pyspark.sql import SparkSession
spark = SparkSession.builder \
.appName('1.3. BigQuery Storage & Spark MLlib - Python')\
.config('spark.jars', 'gs://spark-lib/bigquery/spark-bigquery-latest.jar') \
.getOrCreate()
spark.version
spark.conf.set("spark.sql.repl.eagerEval.enabled",True)
table = 'bigquery-public-data.samples.natality'
df_natality_table = spark.read \
.format("bigquery") \
.option("table", table) \
.load()
limit = 10000
df_natality_select = df_natality_table \
.select("weight_pounds", "mother_age", "father_age", "gestation_weeks", "weight_gain_pounds", "apgar_5min") \
.where(
"""
weight_pounds IS NOT NULL
AND mother_age IS NOT NULL
AND father_age IS NOT NULL
AND gestation_weeks IS NOT NULL
AND weight_gain_pounds IS NOT NULL
AND apgar_5min IS NOT NULL
"""
) \
.limit(limit) \
.cache()
df_natality_select.printSchema()
df_natality_select.count()
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler
assembler = VectorAssembler(
inputCols=["mother_age", "father_age", "gestation_weeks", "weight_gain_pounds", "apgar_5min"],
outputCol="features")
df_assembler_output = assembler.transform(df_natality_select)
df_assembler_output
df_training_data = df_assembler_output \
.select("features", "weight_pounds") \
.withColumnRenamed("weight_pounds","label")
df_training_data.cache()
df_training_data
(df_training, df_test) = df_training_data.randomSplit([0.7, 0.3])
from pyspark.ml.regression import LinearRegression
lr = LinearRegression(maxIter=5, regParam=0.2, solver="normal")
model = lr.fit(df_training)
print("Coefficients:" + str(model.coefficients))
print("Intercept:" + str(model.intercept))
trainingSummary = model.summary
print("numIterations: %d" % trainingSummary.totalIterations)
print("objectiveHistory: %s" % str(trainingSummary.objectiveHistory))
print("RMSE: %f" % trainingSummary.rootMeanSquaredError)
print("r2: %f" % trainingSummary.r2)
trainingSummary.residuals
predictions = model.transform(df_test)
predictions.select("prediction", "label", "features")
from pyspark.ml.evaluation import RegressionEvaluator
evaluator = RegressionEvaluator(
labelCol="label", predictionCol="prediction", metricName="rmse")
rmse = evaluator.evaluate(predictions)
print("Root Mean Squared Error (RMSE) on test data = %g" % rmse)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Spark Session
Step2: Enable repl.eagerEval
Step3: Read the data from BigQuery as a Spark Dataframe
Step5: Limit no of rows and cache data
Step6: Optional
Step7: Create an input DataFrame for Spark MLlib using VectorAssembler
Step8: Create a training data DataFrame
Step9: Split the data into training and test sets
Step10: Construct a new LinearRegression object and fit the training data
Step11: Print the coefficients and intercept for linear regression
Step12: Summarize the model over the training data and print metrics
Step13: Make predictions on test data
Step14: Select (prediction, true label) and compute test error
|
10,684
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
# fix random seed for reproducibility
np.random.seed(23)
# load data
def load_data(path='Class13_mnist.pkl.gz'):
import gzip
from six.moves import cPickle
import sys
#path = get_file(path, origin='https://s3.amazonaws.com/img-datasets/mnist.pkl.gz')
if path.endswith('.gz'):
f = gzip.open(path, 'rb')
else:
f = open(path, 'rb')
if sys.version_info < (3,):
data = cPickle.load(f)
else:
data = cPickle.load(f, encoding='bytes')
f.close()
return data # (X_train, y_train), (X_test, y_test)
(features_train, labels_train), (features_test, labels_test) = load_data('Class13_mnist.pkl.gz')
print('Training Features:')
print(features_train.shape)
print('Training Labels:')
print(labels_train.shape)
print('Testing Features:')
print(features_test.shape)
print('Testing Labels:')
print(labels_test.shape)
import matplotlib.pyplot as plt
testnum = 45076
testimage = features_train[testnum]
print('Expected Digit: {0:d}'.format(labels_train[testnum]))
print('Max value in image: {}'.format(testimage.max()))
plt.gray()
plt.matshow(testimage)
# normalize inputs from 0-255 to 0-1
features_train_set = features_train.astype('float32') / 255
features_test_set = features_test.astype('float32') / 255
pixdim1, pixdim2 = features_train_set.shape[1], features_train_set.shape[2]
num_pixels = pixdim1*pixdim2
features_train_set = features_train_set.reshape(features_train_set.shape[0], 1, pixdim1, pixdim2)
features_test_set = features_test_set.reshape(features_test_set.shape[0], 1, pixdim1, pixdim2)
features_train_set.shape
from keras import backend as K
K.set_image_dim_ordering('th')
from keras.utils import np_utils
# one hot encode outputs
labels_train_cat = np_utils.to_categorical(labels_train)
labels_test_cat = np_utils.to_categorical(labels_test)
num_classes = labels_test_cat.shape[1]
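# --- Hedged addition (not in the original notebook): to_categorical is
# plain one-hot encoding; the same transform in pure numpy as a quick
# sanity check (labels_demo is illustrative data only).
import numpy as np
labels_demo = np.array([0, 2, 1])
one_hot_demo = np.eye(3)[labels_demo]  # row i is the one-hot vector for labels_demo[i]
# one_hot_demo rows: [1,0,0], [0,0,1], [0,1,0]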
# One Dense Layer model
#-------------------------------------------------------------------------------
def simpleModel():
from keras.layers.core import Activation, Dense
from keras.models import Sequential
# create model and tell the tool that each layer we define will be added in sequential order
model = Sequential()
# our first layer is called a "Dense" layer - it is a fully connected set of nodes (every input connects to every node)
# We tell it what shape to expect the input: in this case it is the total number of input pixels.
# The output dimension is the number of nodes to create for this layer. We want one node for each pixel, so we use the number of pixels as our output dimension
model.add(Dense(input_dim=num_pixels, output_dim=num_pixels, init='normal'))
# The next thing we need to do is tell the network how to adjust the probabilities that the nodes compute. We'll use the hyperbolic tangent:
# It is like the logistic regression, but on the scale of -1 to 1 instead of 0 to 1.
model.add(Activation('tanh'))
# Finally we need to create out output layer. In this tool we need to specify an output dimension the same as the number of classes we are trying to predict
model.add(Dense(output_dim=num_classes, activation='softmax'))
# This time, the softmax activation is the same as the logistic regression, but generalized for more than 2 output choices (we have 10 classes here)
#model.add(Activation('softmax'))
# Compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
# flatten 28*28 images to a 784 vector for each image
X_train = features_train.reshape(features_train_set.shape[0], num_pixels)
X_test = features_test.reshape(features_test_set.shape[0], num_pixels)
# build the model
model = simpleModel()
# Fit the model
model.fit(X_train, labels_train_cat, validation_data=(X_test, labels_test_cat), nb_epoch=2, batch_size=600, verbose=1)
# Final evaluation of the model
scores = model.evaluate(X_test, labels_test_cat, verbose=0)
print("Baseline Accuracy {:.2f}%".format(scores[1]*100))
actuals = labels_test
predictions = model.predict_classes(X_test)
import sklearn.metrics as metrics
cnf_matrix = metrics.confusion_matrix(actuals, predictions)
def show_confusion_matrix(cnf_matrix, class_labels):
fig, ax = plt.subplots(figsize=(20, 10))
ax.matshow(cnf_matrix,cmap=plt.cm.YlGn,alpha=0.7)
ax.set_xlabel('Predicted Label', fontsize=16)
ax.set_xticks(range(0,len(class_labels)))
ax.set_xticklabels(class_labels)
ax.set_ylabel('Actual Label', fontsize=16, rotation=90)
ax.set_yticks(range(0,len(class_labels)))
ax.set_yticklabels(class_labels)
ax.xaxis.set_label_position('top')
ax.xaxis.tick_top()
for row in range(len(cnf_matrix)):
for col in range(len(cnf_matrix[row])):
ax.text(col, row, cnf_matrix[row][col], va='center', ha='center', fontsize=16)
show_confusion_matrix(cnf_matrix,class_labels=range(10))
score = metrics.accuracy_score(actuals, predictions)
print("Accuracy Score: {}".format(score))
# Simplified LeNet model
#-------------------------------------------------------------------------------
def leNet_model(pixdim1, pixdim2):
from keras.layers.core import Activation, Dense
from keras.models import Sequential
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.layers import Dropout, Flatten
# create model
model = Sequential()
model.add(Convolution2D(20, 5, 5, border_mode='valid', input_shape=(1, pixdim1, pixdim2)))
model.add(Activation('tanh'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(50, 3, 3))
model.add(Activation('tanh'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(500))
model.add(Activation('tanh'))
model.add(Dense(40))
model.add(Dense(num_classes, activation='softmax'))
# Compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
# build the model
model = leNet_model(pixdim1,pixdim2)
# Fit the model
model.fit(features_train_set, labels_train_cat, validation_data=(features_test_set, labels_test_cat), nb_epoch=2, batch_size=600, verbose=1)
# Final evaluation of the model
scores = model.evaluate(features_test_set, labels_test_cat, verbose=0)
print("Baseline Accuracy {:.2f}%".format(scores[1]*100))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So we now have 60,000 training images and 10,000 test images. Each image is $28 \times 28$ pixels and has an output label indicating what number is supposed to have been drawn.
Step2: The neural network will work much better if we normalize the inputs. It looks like the image values run from 0 to 255. Let's re-scale that to 0-1. We change the type to a float32 since the keras tool uses that internally.
Step3: Finally, we need to shift our output predictions to a dummy variable since we are looking for a categorical output. Whether this step is necessary is dependent on the specific neural network tool you are using
Step4: Simple One-Layer Network
Step5: The last thing we need to do is to flatten out our input arrays. This simple neural network works better if we are only working with a single dimension of data.
Step6: So we're above 90% accuracy after only 2 epochs. Try increasing the number of epochs to see if there is a point where this levels out.
Step7: Convolution Neural Networks
|
10,685
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import warnings
from scipy.optimize import minimize
import emcee
import corner
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format='retina'
N = 200
m = np.pi
b = np.e
x = np.random.uniform(0,20,N)
y_var = .5*x
epsilon = np.random.randn(N)*y_var
y = m*x + b + epsilon
plt.plot(x, y, 'bo');
plt.xlabel('x');
plt.ylabel('y');
def lnlike(theta, x, y):
m, b, y_base, y_hetero = theta
y_var = y_base + y_hetero*x
model = m*x+b
return -.5*np.sum(((model-y)**2/y_var**2) + np.log(2*np.pi*y_var**2))
def lnprior(theta):
m, b, y_base, y_hetero = theta
if m > 0 and 0 < y_base < 50 and 0 < y_hetero < 1:
return 0.0
else:
return -np.inf
def lnpost(theta, x, y):
lp = lnprior(theta)
if not np.isfinite(lp):
return -np.inf
return lp + lnlike(theta, x, y)
ndim, nwalkers = 4, 100
pos = [1 + 1e-4*np.random.randn(ndim) for i in range(nwalkers)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnpost, args=(x, y))
with warnings.catch_warnings():
warnings.simplefilter("ignore")
sampler.run_mcmc(pos, 15000);
samples = sampler.chain[:, 50:, :].reshape((-1, ndim))
# Burn-in
thetas = samples[-10000:]
thetas_mean = np.mean(samples[-10000:], axis=0)
emp_loss = lambda theta: np.sum((thetas - theta)**2)
res = minimize(emp_loss, np.ones(4), method='BFGS')
# Minimizer of squared loss
res.x
# Empirical mean
thetas_mean
fig = corner.corner(samples, labels=["$m$", "$b$", "$\sigma_y$", "$\epsilon_y$"],
truths=[m, b, 0, .5], plot_contours=False)
theta_hat = res.x
t = np.linspace(min(x), max(x), 1000)
f_approx = lambda t: theta_hat[0]*t + theta_hat[1]
f = lambda t: m*t + b
plt.figure(figsize=(10,6))
plt.plot(x, y, 'bo');
plt.plot(t, f_approx(t), 'red', label="$\hat{f}$")
plt.plot(t, f(t), 'purple', label = '$f$')
plt.xlabel('x');
plt.ylabel('y');
plt.legend();
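# --- Hedged addition (not in the original notebook): a quick numeric check
# of the claim that the mean minimizes the squared loss -- for any sample,
# sum((sample - theta)**2) is smallest at theta = sample.mean(). The *_demo
# names and seed are illustrative only.
import numpy as np
rng_demo = np.random.RandomState(0)
sample_demo = rng_demo.randn(1000)
grid_demo = np.linspace(-1, 1, 201)
losses_demo = np.array([np.sum((sample_demo - t) ** 2) for t in grid_demo])
best_demo = grid_demo[np.argmin(losses_demo)]
# best_demo lands on the grid point nearest sample_demo.mean()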
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let us examine a regression problem wherein we wish to model two continuous variables, one in terms of the other. Here the underlying relationship between $x$ and $y$ is linear
Step2: Without knowing the underlying distribution of the data, it is clear that $y$ is linear in $x$, and that {$x,y$} are heteroskedastic. It is safe to assume
Step3: Let's run our sampler!
Step4: Decision Theory
Step5: They agree! In fact the mean of the posterior always minimizes the squared loss function.
|
10,686
|
<ASSISTANT_TASK:>
Python Code:
from numpy import matrix
A = matrix([[72, 0, 0, 9, 0, 0],
[ 0, 2.88, 0, 0, 0, -4.5],
[ 0, 0, 18, 9, 0, 0],
[ 9, 0, 9, 12, 0, 0],
[ 0, 0, 0, 0, 33, 0],
[ 0, -4.5, 0, 0, 0, 33]])
b = matrix([[2],
[0.5],
[1],
[0],
[1.2],
[5]])
A.I*b
A.shape
A.shape[0]
from numpy import zeros, copy
x0 = matrix(zeros(A.shape[0])).T
x0
A.diagonal()
divisores = copy(A.diagonal().T)
divisores
from numpy import fill_diagonal
fill_diagonal(A, 0)
A
-A*x0
from numpy import divide
divide(-A*x0, divisores)
x1 = divide(b, divisores) - divide(A*x0, divisores)
x1
dif = x1 - x0
dif
from numpy import sqrt
e1 = sqrt((dif.T*dif)[0,0])
e1
x2 = divide(b, divisores) - divide(A*x1, divisores)
x2
dif = x2 - x1
e2 = sqrt((dif.T*dif)[0,0])
e2
x3 = divide(b, divisores) - divide(A*x2, divisores)
x3
dif = x3 - x2
e3 = sqrt((dif.T*dif)[0,0])
e3
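# --- Hedged addition (not in the original notebook): the three manual
# iterations above can be rolled into a generic Jacobi loop that stops once
# the step size drops below a tolerance. The function name, argument names,
# and defaults are illustrative only.
from numpy import matrix, zeros, copy, fill_diagonal, divide, sqrt
def jacobi(A_in, b_in, tol=1e-8, max_iter=100):
    A_it = matrix(A_in, dtype=float)
    b_it = matrix(b_in, dtype=float)
    divs = copy(A_it.diagonal().T)   # main-diagonal divisors
    R = A_it.copy()
    fill_diagonal(R, 0)              # off-diagonal part
    x = matrix(zeros(A_it.shape[0])).T
    for _ in range(max_iter):
        x_new = divide(b_it, divs) - divide(R * x, divs)
        step = x_new - x
        x = x_new
        if sqrt((step.T * step)[0, 0]) < tol:
            break
    return x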
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: With which, trivially, we can see its solution by means of the simplest implementation
Step2: However, we want to implement a different method, so we will start by obtaining information about the system; for example, if we want to know the dimension of matrix $A$, and consequently the dimension of the system, we have
Step3: We already know this matrix is square, so we can obtain the first element with just
Step4: Recalling the Jacobi method, we have to start with an approximation of $0$ in all variables, so we define an array of zeros with the zeros function and convert it into an object of type matrix
Step5: The next thing I will need is an array with all the elements of the main diagonal, to divide each of the approximation formulas by these elements
Step6: Once I obtain the diagonal, I will replace all of its elements with $0$, so that when I multiply each row of this matrix by the value of the approximation of $x$, $x_0$, I will obtain most of the elements of each of the formulas
Step7: Taking into account that our first approximation is all $0$, this vector is completely correct; now we have to divide each of these elements by the diagonal vector
Step8: Finally we have to subtract each of these elements from the division of vector $b$ by the diagonal, so that we have
Step9: which corresponds to our first approximation, which is why we have saved it in x1.
Step10: we can obtain the square of each of these differences with dif.T*dif, that is
Step11: so we obtain our initial error. Now we just have to keep iterating until we obtain a good approximation
|
10,687
|
<ASSISTANT_TASK:>
Python Code:
import math
q = 0.6
P_mars = 2.0 * 10 ** -6
A_left = 7.6 # cm^2
L_left = 131.2 # cm
A_right = 6.3 # cm^2
L_right = 126.1 # cm
def solar_torque(P, A, L, q):
"""
Calculates the solar torque (T) based on the Solar Pressure (P), spacecraft Area (A),
distance from centroid of surface A (L), and reflective factor (q)
This function uses the following formula:
T = P * A * L * (1 + q)
Parameters:
-----------
:param P: Solar Pressure of the orbiting planet (in W/m^2)
:param A: Area of the spacecraft side (in m^2)
:param L: Distance from the centroid of the surface A (in m)
:param q: Reflectance factor between 0 and 1
"""
if not 0 <= q <=1:
raise ValueError("q must be between 0 and 1")
return P * A * L * (1 + q)
T_right = solar_torque(P_mars, A_right / 100, L_right / 100, q)
T_left = solar_torque(P_mars, A_left / 100, L_left / 100, q)
print("Total Torque = {}".format(T_right + T_left))
def magnetic_torque(D, B=None, M=None, r=None):
"""
Calculates the magnetic torque on a spacecraft orbiting a planetary object based on the
residual dipole (D) of the spacecraft and the planetary object's magnetic field (B).
This function uses the following formula:
T = 10**-7 * D * B
Where:
B = 2 * M / r^3
If B isn't defined, it's assumed that M and r will be, otherwise a ValueError is raised.
If B is defined, the function uses that value, even when M and/or r is defined.
Parameters:
-----------
:param D: Residual dipole of the spacecraft (in pole-cm)
:param B: Planetary object's magnetic field (in gauss)
:param M: Magnetic moment of the planetary object (in emu)
:param r: Spacecraft orbital radius (in cm)
"""
if B is None and (M is None or r is None):
raise ValueError("B or M and r must be defined!")
if B is None:
B = 2 * M / r ** 3
return 10 ** -7 * D * B
mars_r = 3.397 # Mars radius, in units of 10^3 km (3397 km)
mars_orbit_dist = .400 # orbit altitude, in units of 10^3 km (400 km)
mars_B_o = 5 * 10**-8
mars_r_o = mars_r * 10 ** 8
r = (mars_r + mars_orbit_dist) * 10 ** 8
B = (mars_B_o * mars_r_o ** 3) / (r ** 3) * math.sqrt((3 * math.sin(0)**2 + 1))
B
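# --- Hedged addition (not in the original notebook): the inline dipole-field
# computation above, wrapped as a reusable function for clarity. The name
# dipole_field and the parameter lam (magnetic latitude in radians) are
# illustrative only.
import math
def dipole_field(B_0, r_0, r_orbit, lam=0.0):
    # field magnitude of a magnetic dipole at radius r_orbit and latitude lam
    return (B_0 * r_0 ** 3) / (r_orbit ** 3) * math.sqrt(3 * math.sin(lam) ** 2 + 1)
# e.g. the field falls off with the cube of distance: dipole_field(1, 1, 2) == 0.125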
T_m_left = magnetic_torque(A_left, B)
T_m_right = magnetic_torque(A_right, B)
print(T_m_left, T_m_right)
T_m_right < T_right and T_m_left < T_left
def gravity_gradient_torque(u, r, I_z, I_y, theta):
return 3 * u / r ** 3 * abs(I_z - I_y) * theta
mars_u = 324858.8
T_g = gravity_gradient_torque(mars_u, r, L_left / 100, L_right / 100, math.pi / 4)
T_g
T_g < T_left and T_g < T_right
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Solar Torques
Step4: Magnetic Torques
Step5: Since both the magnetic torques are less than the solar torques, their sum is also less.
|
10,688
|
<ASSISTANT_TASK:>
Python Code:
data = pd.read_csv('lab1v1.csv')
P, D, S = data['Price'].values, data['Demand'].values, data['Supply'].values
data
def plot(*args, x='Quantity', y='Price', **kw):
plt.figure(figsize=(15, 10))
plt.plot(*args)
plt.xlabel(x)
plt.ylabel(y)
plt.legend(kw['legend'])
plt.title(kw['title'])
plt.show()
plot(D, P, 'ro', S, P, 'bo', legend=['Demand', 'Supply'], title='Demand vs Supply')
def compare(x, y, fs, label, title):
plt.figure(figsize=(15, 10))
scores, cs = [], []
for i, f in enumerate(fs):
c, _ = curve_fit(f, x, y)
fy = f(x, *c)
plt.plot(fy, x, '-', label=f.__name__)
scores.append(np.sum(y - fy)**2)
cs.append(c)
plt.plot(y, x, 'o', label=label)
plt.xlabel('Quantity'); plt.ylabel('Price')
plt.legend(); plt.title(title); plt.show()
return fs[np.argmin(scores)], cs[np.argmin(scores)]
def inverted_linear(x, a, b, c): return a / (b*x + c)
def inverted_exponent(x, a, b, c): return a / (np.exp(b*x) + c)
d_f, d_c = compare(P, D, [inverted_linear, inverted_exponent], label='Demand', title='Demand approximation')
'Demand is better to approximate with {} function'.format(d_f.__name__)
def exponent(x, a, b): return np.exp(a*x) + b
def logarithmic(x, a, b): return a*np.log(x) + b
s_f, s_c = compare(P, S, [exponent, logarithmic], label='Supply', title='Supply approximation')
'Supply is better to approximate with {} function'.format(s_f.__name__)
def diff(x): return abs(d_f(x, *d_c) - s_f(x, *s_c))
bounds = (P.min(), P.max())
e_p = minimize(diff, 1, bounds=[bounds]).x[0]
e_q = d_f(e_p, *d_c)
plot(d_f(P, *d_c), P, 'r-', s_f(P, *s_c), P, 'b-', D, P, 'ro', S, P, 'bo', e_q, e_p, 'go',
legend=['Demand(approx)', 'Supply(approx)', 'Demand', 'Supply', 'Equilibrium'], title='Equilibrium')
'Equilibrium is ({0:.2f}, {1:.2f})'.format(e_q, e_p)
d_c, s_c
def elasticity_d(p): return -d_c[1]*p*np.exp(d_c[1]*p) / (np.exp(d_c[1]*p) - d_c[2])
def elasticity_s(p): return s_c[0] / (s_c[0]*np.log(p) + s_c[1])
print('Demand elasticity: {0:.2f}'.format(elasticity_d(e_p)))
print('Supply elasticity: {0:.2f}'.format(elasticity_s(e_p)))
def arc_elasticity(P, Q): return (Q[-1] - Q[0]) / (P[-1] - P[0]) * np.sum(P) / np.sum(Q)
print('Demand arc elasticity: {0:.2f}'.format(arc_elasticity(D, P)))
print('Supply arc elasticity: {0:.2f}'.format(arc_elasticity(S, P)))
def taxed_d_f(x): return d_f(x + 0.5, *d_c)
def diff(x): return abs(taxed_d_f(x) - s_f(x, *s_c))
e_p = minimize(diff, 1, bounds=[bounds]).x[0]
e_q = s_f(e_p, *s_c)
plot(d_f(P, *d_c), P, 'r-', taxed_d_f(P), P, 'r--', s_f(P, *s_c), P, 'b-', e_q, e_p, 'go',
legend=['Demand', 'Demand(taxed)', 'Supply', 'Equilibrium'], title='Tax in demand')
'Equilibrium is ({0:.2f}, {1:.2f})'.format(e_q, e_p)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Approximation
Step2: We'll try to approximate our demand function with
Step3: Now let's approximate our supply function with
Step4: Elasticity
Step5: Provided that $a = 55.4, b = 0.97, c = -0.55, l = 39.02, m = 81.92$ we have
Step6: Because $|E_d(e_p)| < |E_s(e_p)|$ we can say that equilibrium is not stable.
Step7: Tax in demand
|
10,689
|
<ASSISTANT_TASK:>
Python Code:
# Ensure python 3 compatibility
from __future__ import division, print_function, absolute_import, unicode_literals
# Import necessary libraries:
# General utilities:
from os import path
# Computation:
import numpy as np
import h5py
# Visualization:
import matplotlib.pyplot as plt
# Finally, pycroscopy itself
import pycroscopy as px
# set up notebook to show plots within the notebook
% matplotlib inline
ui_file_window = False
try:
from PyQt5 import QtWidgets
def uiGetFile(filter='H5 file (*.h5)', caption='Select File'):
"""
Presents a File dialog used for selecting the .mat file
and returns the absolute filepath of the selected file\n
Parameters
----------
extension : String or list of strings
file extensions to look for
caption : (Optional) String
Title for the file browser window
Returns
-------
file_path : String
Absolute path of the chosen file
"""
app = QtWidgets.QApplication([])
path = QtWidgets.QFileDialog.getOpenFileName(caption=caption, filter=filter)[0]
app.exit()
del app
return str(path)
ui_file_window = True
except ImportError:
print('***********************************************************')
print('* *')
print('* You will need to specify the file path manually below *')
print('* *')
print('***********************************************************')
if ui_file_window:
input_file_path = uiGetFile(caption='Select translated .h5 file or raw experiment data',
filter='Parameters for raw G-Line data (*.txt);; \
Translated file (*.h5)')
else:
input_file_path = '/Volumes/IFgroup/SPM software development/Raw_Data/G_mode/GVS/2015_04_08_PZT_AuCu_nanocaps/GLine_8V_10kHz_256x256_0001/GLine_8V_10kHz_256x256.h5'
folder_path, _ = path.split(input_file_path)
if input_file_path.endswith('.txt'):
print('Translating raw data to h5. Please wait')
tran = px.GLineTranslator()
h5_path = tran.translate(input_file_path)
else:
h5_path = input_file_path
print('Working on:\n' + h5_path)
hdf = px.ioHDF5(h5_path)
h5_main = px.hdf_utils.getDataSet(hdf.file, 'Raw_Data')[-1]
parms_dict = h5_main.parent.parent.attrs
samp_rate = parms_dict['IO_rate_[Hz]']
ex_freq = parms_dict['BE_center_frequency_[Hz]']
h5_spec_vals = px.hdf_utils.getAuxData(h5_main, auxDataName='Spectroscopic_Values')[0]
pixel_ex_wfm = h5_spec_vals[0, :int(h5_spec_vals.shape[1]/parms_dict['grid_num_cols'])]
print('Datasets and datagroups within the file:\n------------------------------------')
px.io.hdf_utils.print_tree(hdf.file)
print('\nThe main dataset:\n------------------------------------')
print(h5_main)
print('\nThe ancillary datasets:\n------------------------------------')
print(hdf.file['/Measurement_000/Channel_000/Position_Indices'])
print(hdf.file['/Measurement_000/Channel_000/Position_Values'])
print(hdf.file['/Measurement_000/Channel_000/Spectroscopic_Indices'])
print(hdf.file['/Measurement_000/Channel_000/Spectroscopic_Values'])
print('\nMetadata or attributes in a datagroup\n------------------------------------')
for key in hdf.file['/Measurement_000'].attrs:
print('{} : {}'.format(key, hdf.file['/Measurement_000'].attrs[key]))
row_ind = 40
raw_row = h5_main[row_ind].reshape(-1, pixel_ex_wfm.size)
fig, axes = px.plot_utils.plot_loops(pixel_ex_wfm, raw_row, x_label='Bias (V)', title='Raw Measurement',
plots_on_side=4, y_label='Deflection (a.u.)',
subtitles='Row: ' + str(row_ind) + ' Col:')
filter_parms = dict()
filter_parms['noise_threshold'] = 1E-4
filter_parms['comb_[Hz]'] = [ex_freq, 1E+3, 10]
# filter_parms['LPF_cutOff_[Hz]'] = -1
# Noise frequencies - 15.6 kHz ~ 14-17.5, 7.8-8.8, 45-49.9 ~ 48.9414 kHz
# filter_parms['band_filt_[Hz]'] = None # [[8.3E+3, 15.6E+3, 48.9414E+3], [1E+3, 0.5E+3, 0.1E+3]]
# filter_parms['phase_[rad]'] = 0
filter_parms['samp_rate_[Hz]'] = samp_rate
filter_parms['num_pix'] = 1
# Test filter on a single line:
row_ind = 40
filt_line, fig_filt, axes_filt = px.processing.gmode_utils.test_filter(h5_main[row_ind], filter_parms, samp_rate,
show_plots=True, use_rainbow_plots=False)
fig_filt.savefig(path.join(folder_path, 'FFT_filter_on_line_{}.png'.format(row_ind)), format='png', dpi=300)
filt_row = filt_line.reshape(-1, pixel_ex_wfm.size)
fig, axes = px.plot_utils.plot_loops(pixel_ex_wfm, filt_row, x_label='Bias (V)', title='FFT Filtering',
plots_on_side=4, y_label='Deflection (a.u.)',
subtitles='Row: ' + str(row_ind) + ' Col:')
# fig.savefig(path.join(folder_path, 'FFT_filtered_loops_on_line_{}.png'.format(row_ind)), format='png', dpi=300)
# h5_filt_grp = px.hdf_utils.findH5group(h5_main, 'FFT_Filtering')[-1]
h5_filt_grp = px.processing.gmode_utils.fft_filter_dataset(h5_main, filter_parms, write_filtered=True)
h5_filt = h5_filt_grp['Filtered_Data']
# Test to make sure the filter gave the same results
filt_row = h5_filt[row_ind].reshape(-1, pixel_ex_wfm.size)
fig, axes = px.plot_utils.plot_loops(pixel_ex_wfm, filt_row, x_label='Bias (V)', title='FFT Filtering',
plots_on_side=4, y_label='Deflection (a.u.)',
subtitles='Row: ' + str(row_ind) + ' Col:')
# h5_resh = h5_filt_grp['Filtered_Data-Reshape_000/Reshaped_Data']
h5_resh = px.processing.gmode_utils.reshape_from_lines_to_pixels(h5_filt, pixel_ex_wfm.size, 1)
fig, axes = px.plot_utils.plot_loops(pixel_ex_wfm, h5_resh, x_label='Bias (V)', title='FFT Filtering',
plots_on_side=5, y_label='Deflection (a.u.)')
# fig.savefig(path.join(folder_path, 'FFT_filtered_loops_on_line_{}.png'.format(row_ind)), format='png', dpi=300)
hdf.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: G-Mode filtering and inspection using pycroscopy
Step2: Make the data pycroscopy compatible
Step3: Open the .h5 file and extract some basic parameters
Step4: Inspect the contents of this h5 data file
Step5: Inspect the raw data
Step6: Try different FFT filters on the data
Step7: Apply selected filter to entire dataset
Step8: Now break up the filtered lines into "pixels"
|
10,690
|
<ASSISTANT_TASK:>
Python Code:
# Load the training set and labels
import pandas as pd
raw_train_dataframe = pd.read_table('train.csv', sep=',')
train_dataframe = raw_train_dataframe.drop('id', axis=1)
import numpy as np
train_sample = train_dataframe.drop('label', axis=1).values
train_sample_answers = train_dataframe['label'].values
# Train the classifier
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state=42)
classifier.fit(train_sample, train_sample_answers)
# Load the test set
raw_test_dataframe = pd.read_table('test.csv', sep=',')
test_dataframe = raw_test_dataframe.drop('id', axis=1)
test_sample = test_dataframe.values
# Predict on the test set
test_sample_answers = classifier.predict(test_sample).reshape(-1, 1)
# Write the output file
submission = np.hstack([np.arange(1, len(test_sample_answers) + 1).reshape(-1, 1), test_sample_answers])
import csv
with open('submission.csv', 'w', newline='') as csvfile:
writer = csv.writer(csvfile, delimiter=',')
writer.writerow(['id', 'label'])
writer.writerows(submission)
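An equivalent, more compact way to write the submission file is a single pandas call; this is a sketch on stand-in predictions (`test_sample_answers` here is a hypothetical array standing in for `classifier.predict(test_sample)` above):

```python
import numpy as np
import pandas as pd

# Hypothetical predictions standing in for classifier.predict(test_sample)
test_sample_answers = np.array([0, 1, 1, 0])

# Build the id/label table and write it without the index column
submission_df = pd.DataFrame({
    'id': np.arange(1, len(test_sample_answers) + 1),
    'label': test_sample_answers,
})
submission_df.to_csv('submission.csv', index=False)
```

`to_csv(index=False)` produces the same `id,label` header and rows as the manual `csv.writer` loop.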
# your code here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The result came out as follows
|
10,691
|
<ASSISTANT_TASK:>
Python Code:
test_stationary(usa_log)
from statsmodels.tsa.seasonal import seasonal_decompose
decomposition = seasonal_decompose(usa)
trend = decomposition.trend
seasonal = decomposition.seasonal
residual = decomposition.resid
plt.subplot(411)
plt.plot(usa, label='Original')
plt.legend(loc='best')
plt.subplot(412)
plt.plot(trend, label='Trend')
plt.legend(loc='best')
plt.subplot(413)
plt.plot(seasonal, label='Seasonality')
plt.legend(loc='best')
plt.subplot(414)
plt.plot(residual, label='Residuals')
plt.legend(loc='best')
plt.tight_layout()
usa_diff = usa - usa.shift(periods=1)
usa_diff.dropna(inplace=True)
plt.plot(usa_diff)
test_stationary(usa_diff)
#ACF and PACF plots
plt.figure(figsize=(15, 6))
def plot_acf_pacf(ts):
lag_acf = acf(ts, nlags=10)
lag_pacf = pacf(ts, nlags=10, method='ols')
#Plot ACF:
plt.subplot(121)
plt.plot(lag_acf)
plt.axhline(y=0,linestyle='--',color='gray')
    plt.axhline(y=-1.96/np.sqrt(len(ts)),linestyle='--',color='gray')  # 95% bounds: ±1.96/sqrt(N)
    plt.axhline(y=1.96/np.sqrt(len(ts)),linestyle='--',color='gray')
plt.title('Autocorrelation Function')
#Plot PACF:
plt.subplot(122)
plt.plot(lag_pacf)
plt.axhline(y=0,linestyle='--',color='gray')
    plt.axhline(y=-1.96/np.sqrt(len(ts)),linestyle='--',color='gray')  # 95% bounds: ±1.96/sqrt(N)
    plt.axhline(y=1.96/np.sqrt(len(ts)),linestyle='--',color='gray')
plt.title('Partial Autocorrelation Function')
plt.tight_layout()
usa_diff = usa - usa.shift(periods=1)
usa_diff.dropna(inplace=True)
plot_acf_pacf(usa_diff)
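Beyond eyeballing the ACF/PACF cutoffs, candidate (p, q) orders are often compared with an information criterion. For a model with k estimated parameters fit to n points with residual sum of squares RSS, a common AIC approximation is AIC = n·ln(RSS/n) + 2k. A sketch in plain Python (the numbers below are illustrative, not taken from this dataset):

```python
import math

def aic(n, rss, k):
    # n * ln(RSS/n), penalized by 2 per estimated parameter;
    # the model with the lower AIC is preferred
    return n * math.log(rss / n) + 2 * k

# Illustrative comparison of two candidate orders
n = 50
simple = aic(n, rss=4.0, k=2)   # e.g. an ARIMA(1,1,0)-like model: fewer parameters
complex_ = aic(n, rss=3.9, k=4)  # e.g. ARIMA(2,1,1)-like: slightly better fit, more parameters
print(simple, complex_)
```

Here the small improvement in fit does not pay for the two extra parameters, so the simpler order wins.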
from statsmodels.tsa.arima_model import ARIMA
model = ARIMA(usa, order=(2, 1, 0))
results_ARIMA = model.fit(disp=-1)
plt.plot(usa_diff, label = "Original")
plt.plot(results_ARIMA.fittedvalues, color='red', label = "fitted")
plt.title('RSS: %.4f'% sum((results_ARIMA.fittedvalues - usa_diff)**2))
plt.legend()
print(results_ARIMA.summary())
residuals = pd.DataFrame(results_ARIMA.resid)
residuals.plot(kind='kde')
print(residuals.describe())
predictions_ARIMA_diff = pd.Series(results_ARIMA.fittedvalues, copy=True)
predictions_ARIMA_diff.head()
predictions_ARIMA_diff_cumsum = predictions_ARIMA_diff.cumsum()
predictions_ARIMA = pd.Series(usa.ix[0], index=usa.index)
predictions_ARIMA = predictions_ARIMA.add(predictions_ARIMA_diff_cumsum, fill_value=0)
plt.plot(usa, label = "Actual")
plt.plot(predictions_ARIMA, label = "ARIMA fitted")
plt.legend()
from sklearn.metrics import mean_squared_error
size = int(len(usa) - 5)
train, test = usa[0:size], usa[size:len(usa)]
history = [x for x in train]
predictions = list()
print('Printing Predicted vs Expected Values...')
print('\n')
for t in range(len(test)):
model = ARIMA(history, order=(2,1,0))
model_fit = model.fit(disp=0)
output = model_fit.forecast()
yhat = output[0]
predictions.append(float(yhat))
obs = test[t]
history.append(obs)
print('predicted=%f, expected=%f' % (yhat, obs))
error = mean_squared_error(test, predictions)
print('\n')
print('Printing Mean Squared Error of Predictions...')
print('Test MSE: %.6f' % error)
predictions_series = pd.Series(predictions, index = test.index)
#rolling one-step out-of-sample
fig, ax = plt.subplots()
ax.set(title='GDP Per Capita forecasting', xlabel='Date', ylabel='Per Capita GDP')
ax.plot(usa, marker = '.', color = "red", label='observed')
ax.plot(predictions_series, color = "green", label='forecast')
legend = ax.legend(loc='upper left')
legend.get_frame().set_facecolor('w')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Because the p-value is now smaller than 0.05, we have the necessary evidence to reject the null hypothesis of the Dickey-Fuller test. This shows that the time series is stationary at log scale, so we can apply ARIMA model forecasting to the log-scaled data.
Step2: Now that you have stationarized your time series, you could go on and model residuals (fit lines between values in the plot). However, as the patterns for the trend and seasonality information extracted from the series that are plotted after decomposition are still not consistent and cannot be scaled back to the original values, you cannot use this approach to create reliable forecasts.
Step3: Find optimal parameters and build an ARIMA (auto regressive integrated moving average) model
Step4: In this plot, the 'p' and 'q' values can be determined as follows
Step5: Measure the variance between the data and the values predicted by the model
Step6: Scale predictions
Step7: Making Predictions
|
10,692
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import TextVectorization
# Example training data, of dtype `string`.
training_data = np.array([["This is the 1st sample."], ["And here's the 2nd sample."]])
# Create a TextVectorization layer instance. It can be configured to either
# return integer token indices, or a dense token representation (e.g. multi-hot
# or TF-IDF). The text standardization and text splitting algorithms are fully
# configurable.
vectorizer = TextVectorization(output_mode="int")
# Calling `adapt` on an array or dataset makes the layer generate a vocabulary
# index for the data, which can then be reused when seeing new data.
vectorizer.adapt(training_data)
# After calling adapt, the layer is able to encode any n-gram it has seen before
# in the `adapt()` data. Unknown n-grams are encoded via an "out-of-vocabulary"
# token.
integer_data = vectorizer(training_data)
print(integer_data)
from tensorflow.keras.layers import TextVectorization
# Example training data, of dtype `string`.
training_data = np.array([["This is the 1st sample."], ["And here's the 2nd sample."]])
# Create a TextVectorization layer instance. It can be configured to either
# return integer token indices, or a dense token representation (e.g. multi-hot
# or TF-IDF). The text standardization and text splitting algorithms are fully
# configurable.
vectorizer = TextVectorization(output_mode="binary", ngrams=2)
# Calling `adapt` on an array or dataset makes the layer generate a vocabulary
# index for the data, which can then be reused when seeing new data.
vectorizer.adapt(training_data)
# After calling adapt, the layer is able to encode any n-gram it has seen before
# in the `adapt()` data. Unknown n-grams are encoded via an "out-of-vocabulary"
# token.
integer_data = vectorizer(training_data)
print(integer_data)
from tensorflow.keras.layers import Normalization
# Example image data, with values in the [0, 255] range
training_data = np.random.randint(0, 256, size=(64, 200, 200, 3)).astype("float32")
normalizer = Normalization(axis=-1)
normalizer.adapt(training_data)
normalized_data = normalizer(training_data)
print("var: %.4f" % np.var(normalized_data))
print("mean: %.4f" % np.mean(normalized_data))
from tensorflow.keras.layers import CenterCrop
from tensorflow.keras.layers import Rescaling
# Example image data, with values in the [0, 255] range
training_data = np.random.randint(0, 256, size=(64, 200, 200, 3)).astype("float32")
cropper = CenterCrop(height=150, width=150)
scaler = Rescaling(scale=1.0 / 255)
output_data = scaler(cropper(training_data))
print("shape:", output_data.shape)
print("min:", np.min(output_data))
print("max:", np.max(output_data))
# Let's say we expect our inputs to be RGB images of arbitrary size
inputs = keras.Input(shape=(None, None, 3))
from tensorflow.keras import layers
# Center-crop images to 150x150
x = CenterCrop(height=150, width=150)(inputs)
# Rescale images to [0, 1]
x = Rescaling(scale=1.0 / 255)(x)
# Apply some convolution and pooling layers
x = layers.Conv2D(filters=32, kernel_size=(3, 3), activation="relu")(x)
x = layers.MaxPooling2D(pool_size=(3, 3))(x)
x = layers.Conv2D(filters=32, kernel_size=(3, 3), activation="relu")(x)
x = layers.MaxPooling2D(pool_size=(3, 3))(x)
x = layers.Conv2D(filters=32, kernel_size=(3, 3), activation="relu")(x)
# Apply global average pooling to get flat feature vectors
x = layers.GlobalAveragePooling2D()(x)
# Add a dense classifier on top
num_classes = 10
outputs = layers.Dense(num_classes, activation="softmax")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
data = np.random.randint(0, 256, size=(64, 200, 200, 3)).astype("float32")
processed_data = model(data)
print(processed_data.shape)
model.summary()
# Get the data as Numpy arrays
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Build a simple model
inputs = keras.Input(shape=(28, 28))
x = layers.Rescaling(1.0 / 255)(inputs)
x = layers.Flatten()(x)
x = layers.Dense(128, activation="relu")(x)
x = layers.Dense(128, activation="relu")(x)
outputs = layers.Dense(10, activation="softmax")(x)
model = keras.Model(inputs, outputs)
model.summary()
# Compile the model
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# Train the model for 1 epoch from Numpy data
batch_size = 64
print("Fit on NumPy data")
history = model.fit(x_train, y_train, batch_size=batch_size, epochs=1)
# Train the model for 1 epoch using a dataset
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(batch_size)
print("Fit on Dataset")
history = model.fit(dataset, epochs=1)
print(history.history)
model.compile(
optimizer="adam",
loss="sparse_categorical_crossentropy",
metrics=[keras.metrics.SparseCategoricalAccuracy(name="acc")],
)
history = model.fit(dataset, epochs=1)
val_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(batch_size)
history = model.fit(dataset, epochs=1, validation_data=val_dataset)
loss, acc = model.evaluate(val_dataset) # returns loss and metrics
print("loss: %.2f" % loss)
print("acc: %.2f" % acc)
predictions = model.predict(val_dataset)
print(predictions.shape)
# Example training data, of dtype `string`.
samples = np.array([["This is the 1st sample."], ["And here's the 2nd sample."]])
labels = [[0], [1]]
# Prepare a TextVectorization layer.
vectorizer = TextVectorization(output_mode="int")
vectorizer.adapt(samples)
# Asynchronous preprocessing: the text vectorization is part of the tf.data pipeline.
# First, create a dataset
dataset = tf.data.Dataset.from_tensor_slices((samples, labels)).batch(2)
# Apply text vectorization to the samples
dataset = dataset.map(lambda x, y: (vectorizer(x), y))
# Prefetch with a buffer size of 2 batches
dataset = dataset.prefetch(2)
# Our model should expect sequences of integers as inputs
inputs = keras.Input(shape=(None,), dtype="int64")
x = layers.Embedding(input_dim=10, output_dim=32)(inputs)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse", run_eagerly=True)
model.fit(dataset)
# Our dataset will yield samples that are strings
dataset = tf.data.Dataset.from_tensor_slices((samples, labels)).batch(2)
# Our model should expect strings as inputs
inputs = keras.Input(shape=(1,), dtype="string")
x = vectorizer(inputs)
x = layers.Embedding(input_dim=10, output_dim=32)(x)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse", run_eagerly=True)
model.fit(dataset)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction
Step2: Example
Step3: Example
Step4: Example
Step5: Building models with the Keras Functional API
Step6: After defining your input(s), you can chain layer transformations on top of your inputs,
Step7: Once you have defined the directed acyclic graph of layers that turns your input(s) into
Step8: This model behaves basically like a bigger layer. You can call it on batches of data, like
Step9: You can print a summary of how your data gets transformed at each stage of the model.
Step10: The Functional API also makes it easy to build models that have multiple inputs (for
Step11: The fit() call returns a "history" object which records what happened over the course
Step12: For a detailed overview of how to use fit(), see the
Step13: Passing validation data to fit()
Step14: Using callbacks for checkpointing (and more)
Step15: You can also generate NumPy arrays of predictions (the activations of the output
Step16: Using fit() with a custom training step
Step17: Compare this to doing text vectorization as part of the model
|
10,693
|
<ASSISTANT_TASK:>
Python Code:
from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.filters import Q1500US
universe = Q1500US()
from quantopian.pipeline.data import morningstar
sector = morningstar.asset_classification.morningstar_sector_code.latest
#from quantopian.pipeline.classifiers.morningstar import Sector
#morningstar_sector = Sector()
energy_sector = sector.eq(309)
from quantopian.pipeline.factors import SimpleMovingAverage, AverageDollarVolume
# Dollar volume factor
dollar_volume = AverageDollarVolume(window_length = 30)
# High dollar volume filter
high_dollar_volume = dollar_volume.percentile_between(90, 100)
# Top open price filter (high dollar volume securities)
top_open_price = USEquityPricing.open.latest.top(50,
mask = high_dollar_volume)
# Top percentile close price filter (high dollar volume, top 50 open price)
high_close_price = USEquityPricing.close.latest.percentile_between(90, 100,
mask = top_open_price)
def make_pipeline():
# Base universe filter.
base_universe = Q1500US()
# Sector Classifier as Filter
energy_sector = sector.eq(309)
# Masking Base Energy Stocks
base_energy = base_universe & energy_sector
# Dollar volume factor
dollar_volume = AverageDollarVolume(window_length = 30)
# Top half of dollar volume filter
high_dollar_volume = dollar_volume.percentile_between(95, 100)
# Final Filter Mask
top_half_base_energy = base_energy & high_dollar_volume
# 10-day close price average.
mean_10 = SimpleMovingAverage(inputs=[USEquityPricing.close],
window_length = 10,
mask = top_half_base_energy)
# 30-day close price average.
mean_30 = SimpleMovingAverage(inputs=[USEquityPricing.close],
window_length = 30,
mask = top_half_base_energy)
# Percent difference factor.
percent_difference = (mean_10 - mean_30) / mean_30
# Create a filter to select securities to short.
shorts = percent_difference < 0
# Create a filter to select securities to long.
longs = percent_difference > 0
# Filter for the securities that we want to trade.
securities_to_trade = (shorts | longs)
return Pipeline(
columns = {
'longs': longs,
'shorts': shorts,
'percent_diff':percent_difference
},
screen=securities_to_trade
)
result = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-05')
result
result.info()
from quantopian.algorithm import attach_pipeline,pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.factors import AverageDollarVolume,SimpleMovingAverage
from quantopian.pipeline.filters.morningstar import Q1500US
from quantopian.pipeline.data import morningstar
def initialize(context):
schedule_function(my_rebalance,date_rules.week_start(),time_rules.market_open(hours = 1))
my_pipe = make_pipeline()
attach_pipeline(my_pipe, 'my_pipeline')
def my_rebalance(context,data):
for security in context.portfolio.positions:
if security not in context.longs and security not in context.shorts and data.can_trade(security):
order_target_percent(security,0)
for security in context.longs:
if data.can_trade(security):
order_target_percent(security,context.long_weight)
for security in context.shorts:
if data.can_trade(security):
order_target_percent(security,context.short_weight)
def my_compute_weights(context):
if len(context.longs) == 0:
long_weight = 0
else:
long_weight = 0.5 / len(context.longs)
if len(context.shorts) == 0:
short_weight = 0
else:
short_weight = 0.5 / len(context.shorts)
return (long_weight,short_weight)
def before_trading_start(context,data):
context.output = pipeline_output('my_pipeline')
# LONG
context.longs = context.output[context.output['longs']].index.tolist()
# SHORT
context.shorts = context.output[context.output['shorts']].index.tolist()
context.long_weight,context.short_weight = my_compute_weights(context)
def make_pipeline():
# Universe Q1500US
base_universe = Q1500US()
# Energy Sector
sector = morningstar.asset_classification.morningstar_sector_code.latest
energy_sector = sector.eq(309)
# Make Mask of 1500US and Energy
base_energy = base_universe & energy_sector
# Dollar Volume (30 Days) Grab the Info
dollar_volume = AverageDollarVolume(window_length = 30)
# Grab the top 5% in avg dollar volume
high_dollar_volume = dollar_volume.percentile_between(95, 100)
# Combine the filters
top_five_base_energy = base_energy & high_dollar_volume
# 10 day mean close
mean_10 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length = 10, mask = top_five_base_energy)
# 30 day mean close
mean_30 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length = 30, mask = top_five_base_energy)
# Percent Difference
percent_difference = (mean_10-mean_30)/mean_30
# List of Shorts
shorts = percent_difference < 0
# List of Longs
longs = percent_difference > 0
# Final Mask/Filter for anything in shorts or longs
securities_to_trade = (shorts | longs)
# Return Pipeline
return Pipeline(columns={
'longs':longs,
'shorts':shorts,
'perc_diff':percent_difference
},screen=securities_to_trade)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the Securities we want.
Step2: There are two main benefits of the Q500US and Q1500US. Firstly, they greatly reduce the risk of an order not being filled. Secondly, they allow for more meaningful comparisons between strategies as now they will be used as the standard universes for algorithms.
Step3: Filtering the universe further with Classifiers
Step4: Alternative
Step5: Masking Filters
Step6: Applying Filters and Factors
Step7: Executing this Strategy in the IDE
|
10,694
|
<ASSISTANT_TASK:>
Python Code:
x, fs = librosa.load('simple_loop.wav')
IPython.display.Audio(x, rate=fs)
spectral_centroids = librosa.feature.spectral_centroid(x, sr=fs)
plt.plot(spectral_centroids[0])
spectral_bandwidth = librosa.feature.spectral_bandwidth(x, sr=fs)
plt.plot(spectral_bandwidth[0])
spectral_contrast = librosa.feature.spectral_contrast(x, sr=fs)
# For visualization, scale each feature dimension to have zero mean and unit variance
spectral_contrast = sklearn.preprocessing.scale(spectral_contrast, axis=1)
librosa.display.specshow(spectral_contrast, x_axis='time')
spectral_rolloff = librosa.feature.spectral_rolloff(x, sr=fs)
plt.plot(spectral_rolloff[0])
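For intuition, the spectral centroid of a single magnitude spectrum is just the magnitude-weighted mean frequency. A minimal NumPy version (a sketch of the idea, not librosa's exact implementation):

```python
import numpy as np

def spectral_centroid(magnitudes, freqs):
    # Magnitude-weighted average of the frequency bins
    return np.sum(freqs * magnitudes) / np.sum(magnitudes)

# Toy spectrum: energy split evenly between the 100 Hz and 200 Hz bins
freqs = np.array([0.0, 100.0, 200.0, 300.0])
mags = np.array([0.0, 1.0, 1.0, 0.0])
print(spectral_centroid(mags, freqs))  # 150.0
```

librosa computes the same weighted mean per STFT frame, which is why `spectral_centroid` above returns one value per frame.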
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: librosa.feature.spectral_bandwidth
Step2: librosa.feature.spectral_contrast
Step3: librosa.feature.spectral_rolloff
|
10,695
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Test Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
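A quick sanity check of the `scale` helper (restated here so the snippet is self-contained): pixel values in [0, 255] should map onto the generator's tanh output range [-1, 1].

```python
import numpy as np

def scale(x, feature_range=(-1, 1)):
    # scale to (0, 1), then to feature_range
    x = (x - x.min()) / (255 - x.min())
    lo, hi = feature_range
    return x * (hi - lo) + lo

img = np.array([0, 127, 255], dtype=np.float32)
scaled = scale(img)
print(scaled.min(), scaled.max())  # -1.0 1.0
```

Matching the data range to the generator's tanh output is what lets the discriminator see real and fake images on the same scale.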
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.shuffle = shuffle
def batches(self, batch_size):
if self.shuffle:
idx = np.arange(len(dataset.train_x))
np.random.shuffle(idx)
self.train_x = self.train_x[idx]
self.train_y = self.train_y[idx]
n_batches = len(self.train_y)//batch_size
for ii in range(0, len(self.train_y), batch_size):
x = self.train_x[ii:ii+batch_size]
y = self.train_y[ii:ii+batch_size]
            yield self.scaler(x), y  # labels are class ids; only the images are scaled
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
    with tf.variable_scope('generator', reuse=reuse):
        # First fully connected layer with leaky ReLU
        # (a minimal dense sketch; the exercise leaves the architecture open)
        x = tf.layers.dense(z, 128)
        x = tf.maximum(alpha * x, x)
        # Output layer reshaped to a 32x32x(output_dim) image
        logits = tf.layers.dense(x, 32 * 32 * output_dim)
        logits = tf.reshape(logits, (-1, 32, 32, output_dim))
        out = tf.tanh(logits)
        return out
def discriminator(x, reuse=False, alpha=0.2):
    with tf.variable_scope('discriminator', reuse=reuse):
        # Input layer is 32x32x3; a minimal fully connected sketch
        # (the exercise leaves the architecture open)
        x = tf.layers.flatten(x)
        x = tf.layers.dense(x, 128)
        x = tf.maximum(alpha * x, x)  # leaky ReLU
        logits = tf.layers.dense(x, 1)
        out = tf.sigmoid(logits)
        return out, logits
def model_loss(input_real, input_z, output_dim, alpha=0.2):
    """
    Get the loss for the discriminator and generator
    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param output_dim: The number of channels in the output image
    :return: A tuple of (discriminator loss, generator loss)
    """
g_model = generator(input_z, output_dim, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """
    Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :return: A tuple of (discriminator training operation, generator training operation)
    """
# Get weights and bias to update
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
class GAN:
def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.input_real, self.input_z = model_inputs(real_size, z_size)
        self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,
                                              real_size[2], alpha=alpha)
        self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box-forced')
im = ax.imshow(img)
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.uniform(-1, 1, size=(50, z_size))
samples, losses = [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in dataset.batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})
_ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z})
if steps % print_every == 0:
# At the end of each epoch, get the losses and print them out
train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})
train_loss_g = net.g_loss.eval({net.input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % show_every == 0:
gen_samples = sess.run(
generator(net.input_z, 3, reuse=True),
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 5, 10, figsize=figsize)
plt.show()
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return losses, samples
real_size = (32,32,3)
z_size = 100
learning_rate = 0.001
batch_size = 64
epochs = 1
alpha = 0.01
beta1 = 0.9
# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)
# Load the data and train the network here
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
_ = view_samples(-1, samples, 5, 10, figsize=(10,5))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the data
Step2: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
Step3: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
Step4: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
Step5: Network Inputs
Step6: Generator
Step7: Discriminator
Step9: Model Loss
Step11: Optimizers
Step12: Building the model
Step13: Here is a function for displaying generated images.
Step14: And another function we can use to train our network.
Step15: Hyperparameters
|
10,696
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import json
#fields for csv
site_fields = ['id', 'name', 'geometry','accuracy']
observation_fields = ['entered', 'values','observer_id', 'site_id', 'type_id', 'parentobs_id']
df = pd.read_csv('C:/Users/Peter/Documents/scc/challenge/obs_types/analysis.csv')
#add foreign key fields
mystartID = 1000 #primary key to start numbering new data from
df['observer_id'] = 1 # this is my observer_id
df['site_id'] = np.nan
df['type_id'] = np.nan
df['parentobs_id'] = np.nan
df['id']=df.index+mystartID
df.columns
#get soil samples fields
soil_samples_renaming = {"value1": "top_cm", "value2": "bottom_cm","date": "oldDate", "id": "sampleID", "type": "description"}
df.rename(columns=soil_samples_renaming, inplace=True)
df['date'] = pd.to_datetime(df['oldDate'],infer_datetime_format=True)
df.columns
#add a few needed fields
df['entered'] = "2017-06-01 00:00:00.000" #arbitrary for loading data
df['observer_id'] = 1 #given that all these observations are mine
df['site_id'] = 0
df['type_id'] = 31 # for soil samples
df['parentobs_id'] = 0
df['samplers'] = ''
#use regex to replace substrings with numbers for num_composited field
replacements = {
r'8': 8,
r'3': 3,
r'4': 4,
r'pit':4,
r'single': 1,
r'density': 1
}
df['num_composited'] = df.description.replace(replacements, regex=True)
#df.loc[df.text.str.contains('\.'), 'text'] = 'other'
df.num_composited.value_counts() #gives occurrences of each unique value
#here we filter for the soil samples only, not the analyses or calculated stats
searchfor = ['single','density','composite sample','8','4','3']
#y = df[df.description.str.contains('|'.join(searchfor))] #df w rows that contain terms
#x = df[~df.description.str.contains('|'.join(searchfor))] #df without rows that contain terms
df = df[df.description.str.contains('|'.join(searchfor))] #df w rows that contain terms
df['description'] = df['description'] + ". " + df['note']
#in order to make a few text changes, e.g. describe samples a bit more
#df.to_csv('C:/Users/Peter/Documents/scc/challenge/obs_types/soil_samples.csv', index=False)
df = pd.read_csv('C:/Users/Peter/Documents/scc/challenge/obs_types/soil_samples.csv')
df=pd.read_csv('C:/Users/Peter/Documents/scc/challenge/obs_types/soil_samples.csv')
JSONfield = ['top_cm', 'bottom_cm', 'description','num_composited','sampleID','date','samplers']
jsonvalues= df[JSONfield]
jsonvalues.columns
#create dataframe with same length to hold JSON field
json = pd.DataFrame(index = df.index, columns = ['values'])
for i, row in jsonvalues.iterrows():
    json.loc[i, 'values'] = jsonvalues.loc[i].to_json()
    #print(json.loc[i, 'values'])
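The row-by-row loop above can also be replaced with a single vectorized call: `DataFrame.apply` with `axis=1` serializes each row to JSON in one pass. A sketch on stand-in data (the columns here are a small subset of the real `jsonvalues` frame):

```python
import pandas as pd

# Stand-in for the jsonvalues frame built above
jsonvalues = pd.DataFrame({'top_cm': [0, 10], 'bottom_cm': [10, 20]})

# One JSON string per row, as a Series aligned to the original index
values_col = jsonvalues.apply(lambda row: row.to_json(), axis=1)
print(values_col.iloc[0])
```

The resulting Series can be assigned directly as the `values` column instead of merging a separate one-column frame.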
#now we create a df with all fields, including the JSON values field
merged = df.merge(json, left_index=True, right_index=True)
merged.to_csv('C:/Users/Peter/Documents/scc/challenge/obs_types/soil_samples.csv', index=False)
mystart = 1000 #primary key to start with
merged['id'] = merged.index + mystart
observation_fields
observation_fields.append('group')  # include 'group' so the filter on final['group'] below works
final = merged[observation_fields]
final
final.to_csv('C:/Users/Peter/Documents/scc/challenge/obs_types/soil_samples_readyFK.csv', index=False)
final = pd.read_csv('C:/Users/Peter/Documents/scc/challenge/obs_types/soil_samples_readyFK.csv')
final
final[final['group']=='BCLA1']
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: soil samples from analysis.csv
Step2: For soil samples, type=31 and values fields are as follows
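The row-to-JSON step used to build the `values` field can be illustrated with a small self-contained sketch (the column names below are stand-ins for the real analysis.csv fields, not the actual schema): each row of the selected columns is serialized into one JSON object string with `Series.to_json`.

```python
# Minimal sketch of building a per-row JSON "values" field.
# The columns here are made-up stand-ins for the real soil-sample fields.
import pandas as pd

df = pd.DataFrame({
    "top_cm": [0, 10],
    "bottom_cm": [10, 20],
    "description": ["single sample", "composite sample"],
})

# Each row becomes one JSON object string, e.g. {"top_cm":0,...}
values = df.apply(lambda row: row.to_json(), axis=1)
print(values[0])
```

The same idea is what the loop over `jsonvalues.iterrows()` does in the code above, one row at a time.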
|
10,697
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Jona Sassenhagen <jona.sassenhagen@gmail.com>
# Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne.io import Raw
from mne.preprocessing import ICA
from mne.preprocessing.ica import corrmap
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = Raw(raw_fname, preload=True)
raw.filter(1, 30, method='iir')
picks = mne.pick_types(raw.info, meg=False, eeg=True, eog=True, ecg=False,
stim=False, exclude='bads')
events = mne.find_events(raw, stim_channel='STI 014')
event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)
reject = dict(eog=250e-6)
tmin, tmax = -0.5, 0.75
all_epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
proj=False, picks=picks, baseline=(None, 0),
preload=True, reject=None, verbose=False)
all_epochs = [all_epochs[start:stop] for start, stop in
[(0, 100), (101, 200), (201, 300)]]
icas = [ICA(n_components=20, random_state=1).fit(epochs)
for epochs in all_epochs]
# 2) Use corrmap to identify the maps best corresponding
# to a pre-specified template across all subsets
# (or, in the real world, multiple participant data sets)
template = (0, 0)
fig_template, fig_detected = corrmap(icas, template=template, label="blinks",
show=True, threshold=.8)
# 3) Zeroing the identified blink components for all data sets
# results in individually cleaned data sets. Specific components
# can be accessed using the label_ attribute.
for ica in icas:
print(ica.labels_)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup paths and prepare epochs data
Step2: 1) Fit ICA to all "subjects".
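The core of what `corrmap` does in this example — matching component maps against a template by spatial correlation and keeping those above a threshold — can be sketched in plain NumPy. The vectors below are synthetic stand-ins, not real ICA spatial maps:

```python
# Illustrative sketch of the idea behind corrmap: keep every component
# whose absolute Pearson correlation with the template map exceeds a
# threshold. Synthetic vectors stand in for real ICA spatial maps.
import numpy as np

rng = np.random.default_rng(1)
template = rng.normal(size=20)                  # stand-in template map
maps = [rng.normal(size=20) for _ in range(5)]  # unrelated maps
maps.append(template * 2.0)                     # a scaled copy -> |corr| = 1

threshold = 0.8
selected = [i for i, m in enumerate(maps)
            if abs(np.corrcoef(template, m)[0, 1]) > threshold]
print(selected)
```

In the real API, the matching components found this way are stored per decomposition under `ica.labels_` (here under the key `"blinks"`), which is what the final loop prints.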
|
10,698
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'fio-ronm', 'sandbox-3', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
|
10,699
|
<ASSISTANT_TASK:>
Python Code:
input_file_name = './movies.csv'
n_most_popular = 15 # Parametro N
import pandas as pd
import ast
import numpy as np
def get_items(arg_string):
return [d['name'] for d in ast.literal_eval(arg_string)]
#get_items("[{'iso_3166_1': 'DE', 'name': 'Germany'}, {'iso_3166_1': 'US', 'name': 'United States of America'}]")
df = pd.read_csv('movies.csv')
#df
info_dict = {}
for (index, record) in df.iterrows():
info_dict[index] = (record['original_title'], record['tagline'], record['original_language'])
info_dict
country_list = []
for (index, record) in df.iterrows():
country_list.extend(get_items(record['production_countries']))
country_list
pop_dict = {}
for (index, record) in df.iterrows():
if not np.isnan(record['popularity']):
for gen in get_items(record['genres']):
value = pop_dict.get(gen, [])
value.append([record['popularity'], index])
pop_dict[gen] = value
pop_dict
from collections import Counter
country_rank_list = Counter(country_list).most_common()[:10]
country_rank_list
tuple_list = [(genere, sorted(pop_dict[genere])[::-1][:n_most_popular]) for genere in pop_dict]
pop_rank_dict = dict(tuple_list)
pop_rank_dict
tuple_list = []
for genere in pop_rank_dict:
new_list = []
for film in pop_rank_dict[genere]:
film_id = film[1]
original_title = info_dict[film_id][0]
tagline = info_dict[film_id][1]
new_film = [original_title, tagline]
new_list.append(new_film)
tuple_list.append((genere, new_list))
pop_rank_dict_out = dict(tuple_list)
pop_rank_dict_out
tuple_list = []
for genere in pop_rank_dict:
language_set = set()
for film in pop_rank_dict[genere]:
language_set.add(info_dict[film[1]][2])
tuple_list.append((genere, language_set))
language_set_dict = dict(tuple_list)
language_set_dict
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import of the pandas, ast and numpy modules.
Step2: 1) Definition of the get_items() function
Step3: 2) Reading the csv file with Pandas
Step4: 3) Construction of the three basic data structures
Step5: b) List of the countries that produced at least one film (each country must appear in the list exactly as many times as the number of films it produced).
Step6: c) Dictionary of popularities
Step7: 4) Extraction of the 10 countries that produced the most films
Step8: 5) Extraction, for each genre, of the n_most_popular most popular films sorted by decreasing popularity, and extraction of the languages involved for each of the genres
Step9: b) Derive from the previous dictionary the dictionary with the same structure in which the [popularity, id] lists are replaced by the [original title, tagline] lists (second output).
Step10: c) Extract from the dictionary of point 5a the dictionary of the sets of original languages involved
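The parsing helper from Step2 can be checked in isolation: `production_countries` is stored as the string representation of a list of dicts, so `ast.literal_eval` recovers the Python objects and a comprehension keeps only the `'name'` values.

```python
# Self-contained check of the get_items() idea from the exercise.
import ast

raw = ("[{'iso_3166_1': 'DE', 'name': 'Germany'}, "
       "{'iso_3166_1': 'US', 'name': 'United States of America'}]")
names = [d['name'] for d in ast.literal_eval(raw)]
print(names)  # ['Germany', 'United States of America']
```

`ast.literal_eval` is preferred over `eval` here because it only accepts Python literals, so a malformed or malicious field cannot execute arbitrary code.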
|