| Unnamed: 0 | cleaned_code | cleaned_prompt |
|---|---|---|
7,000
|
<ASSISTANT_TASK:>
Python Code:
import six
import json
import numpy as np
import pandas as pd
import tensorflow as tf
import annoy
from verta import Client
from verta.utils import ModelAPI, TFSavedModel
REPRESENTATION_LENGTH = 25
MAX_INPUT_LENGTH = 50
HOST = 'app.verta.ai'
DATA_DIR = ''
DATA_FILE = DATA_DIR + 'imdb_master.csv'
EMBEDDING_FILE = DATA_DIR + 'glove.twitter.27B/glove.twitter.27B.25d.txt'
SAVED_MODEL_DIR = 'saved-model'
# Load the glove embeddings
embeddings_index = dict()
with open(EMBEDDING_FILE) as f:
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
embeddings_index['UNK'] = [0.0] * REPRESENTATION_LENGTH
print('Loaded %s word vectors.' % len(embeddings_index))
embedding_matrix = np.zeros((len(embeddings_index), REPRESENTATION_LENGTH))
word_to_index = {}
ctr = 0
UNK_INDEX = -1
for word, embedding in embeddings_index.items():
if word == 'UNK':
UNK_INDEX = ctr
word_to_index[word] = ctr
embedding_matrix[ctr, :] = embedding
ctr += 1
VOCABULARY_SIZE = embedding_matrix.shape[0]
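The lookup-and-pad pattern built here reappears twice further down; a minimal self-contained sketch with a toy vocabulary (the words, `MAX_LEN`, and indices below are invented for illustration, not taken from the dataset):

```python
# Toy version of the word -> index lookup and UNK padding used in this
# notebook: unknown words map to the UNK index, and short inputs are
# padded with UNK up to a fixed length.
MAX_LEN = 5
word_to_idx = {'UNK': 0, 'love': 1, 'film': 2}

words = "I love this film".split()[:MAX_LEN]
idxs = [word_to_idx.get(w.lower(), word_to_idx['UNK']) for w in words]
idxs += [word_to_idx['UNK']] * (MAX_LEN - len(idxs))  # pad to fixed length
print(idxs)
```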
reviews = pd.read_csv(DATA_FILE)['review'].values.tolist()
input_data = reviews[:1000]
input_data[0]
import shutil; shutil.rmtree(SAVED_MODEL_DIR, ignore_errors=True)
batch_indexes = tf.placeholder(tf.int32, shape=[MAX_INPUT_LENGTH], name="input")
tf_embedding = tf.Variable(
tf.constant(0.0, shape=[VOCABULARY_SIZE, REPRESENTATION_LENGTH]),
trainable=False,
name="Embedding",
)
tf_embedding_placeholder = tf.placeholder(
tf.float32,
[VOCABULARY_SIZE, REPRESENTATION_LENGTH]
)
tf_embedding_init = tf_embedding.assign(tf_embedding_placeholder)
embedding_list = tf.nn.embedding_lookup(
params=tf_embedding, ids=batch_indexes)
concatenated_embedding = tf.concat(embedding_list, -1)
embedding = tf.reduce_mean(concatenated_embedding, axis=0, name="output")
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
_ = sess.run(
tf_embedding_init,
feed_dict={
tf_embedding_placeholder: embedding_matrix
}
)
# save the model for use later
tf.saved_model.simple_save(
sess,
SAVED_MODEL_DIR,
{'batch_indexes': batch_indexes},
{'embedding': embedding}
)
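The TF graph above just averages embedding rows; a pure-NumPy sketch of the same pooling on a made-up 3-word vocabulary with 2-d embeddings (toy numbers, not the GloVe vectors):

```python
import numpy as np

# NumPy equivalent of the TF graph: gather rows of the embedding matrix
# by index, then average them into one fixed-length vector.
emb = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
idxs = [0, 2, 2]
pooled = emb[idxs].mean(axis=0)
print(np.round(pooled, 3).tolist())
```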
from annoy import AnnoyIndex
model = TFSavedModel(SAVED_MODEL_DIR)
t = AnnoyIndex(REPRESENTATION_LENGTH, 'angular') # Length of item vector that will be indexed
for i in range(len(reviews)):
review = reviews[i]
words = review.split()[:MAX_INPUT_LENGTH]
batch_indexes = [word_to_index.get(w.lower(), word_to_index['UNK']) for w in words]
batch_indexes += [UNK_INDEX] * (MAX_INPUT_LENGTH - len(batch_indexes))
# calculate embedding with TF
embedding = model.predict(batch_indexes=batch_indexes)['embedding']
t.add_item(i, embedding)
t.build(10) # 10 trees
t.save('reviews.ann')
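For intuition about what the Annoy index approximates, here is the exact brute-force version of angular (cosine) nearest-neighbour search in NumPy, on toy 2-d vectors:

```python
import numpy as np

# Brute-force angular nearest neighbours: normalize to unit vectors,
# take dot products with the query, sort by similarity (highest first).
vecs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
query = np.array([1.0, 0.05])

unit = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
q = query / np.linalg.norm(query)
order = np.argsort(-unit.dot(q))  # indices of most similar vectors first
print(order.tolist())
```

Annoy trades this exact O(n) scan for an approximate tree-based lookup, which is what makes querying 15k+ reviews cheap.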
client = Client(HOST)
client.set_project("TF")
client.set_experiment("SavedModel")
run = client.set_experiment_run()
run.log_artifact("saved_model", SAVED_MODEL_DIR)
with open("word_to_index.json", 'w') as f:
json.dump(word_to_index, f)
run.log_artifact("word_to_index", "word_to_index.json")
run.log_artifact("reviews_index", "reviews.ann")
class TextNNSearch(object):
def __init__(self, artifacts):
self.session = tf.Session()
tf.compat.v1.saved_model.load(self.session, ['serve'], artifacts['saved_model'])
with open(artifacts['word_to_index'], 'r') as f:
self.word_to_index = json.load(f)
self.index = annoy.AnnoyIndex(REPRESENTATION_LENGTH, "angular")
self.index.load(artifacts['reviews_index'])
def predict(self, input_strs):
predictions = []
for input_str in input_strs:
words = input_str.split()[:MAX_INPUT_LENGTH]
batch_indexes = [self.word_to_index.get(w.lower(), self.word_to_index['UNK']) for w in words]
batch_indexes += [UNK_INDEX] * (MAX_INPUT_LENGTH - len(batch_indexes))
# calculate embedding with TF
embedding = self.session.run("output:0", {"input:0": batch_indexes})
# find embedding vectors of ten nearest neighbors
predictions.append({
input_str: self.index.get_nns_by_vector(embedding, 10)
})
return predictions
artifacts = run.fetch_artifacts(["saved_model", "word_to_index", "reviews_index"])
model = TextNNSearch(artifacts)
prediction = model.predict(["omg I love this film"])
similar_reviews = [reviews[i] for i in list(prediction[0].values())[0]]
print(similar_reviews[0])
run.log_model(
TextNNSearch,
custom_modules=[],
model_api=ModelAPI(input_data, model.predict(input_data)),
artifacts=["saved_model", "word_to_index", "reviews_index"],
)
run.log_requirements(["tensorflow", "annoy==1.15.2"])
run
# remove reviews whose bytes are not valid UTF-8 (Python 3: only bytes need decoding; str is already unicode)
bad_reviews = []
for i, review in enumerate(input_data):
try:
review.decode('utf-8') if isinstance(review, bytes) else review
except UnicodeDecodeError:
bad_reviews.append(i)
for i in sorted(bad_reviews, reverse=True):
del input_data[i]
from verta.deployment import DeployedModel
embeddings = DeployedModel(HOST, run.id).predict(input_data[:1000], compress=True)
embeddings[0]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the Glove embeddings
Step2: Build Simple TF model
Step3: Build Index
Step4: Instantiate Client
Step5: Log Artifacts
Step6: Build a NearestNeighbor Search class
Step7: Run some simple tests
Step8: Deploy Model
Step9: Deploy the model and make predictions
|
7,001
|
<ASSISTANT_TASK:>
Python Code:
name = "YOUR NAME HERE"
print("Hello {0}!".format(name))
%matplotlib inline
from matplotlib import rcParams
rcParams["savefig.dpi"] = 100 # This makes all the plots a little bigger.
import numpy as np
import matplotlib.pyplot as plt
# Load the data from the CSV file.
x, y, yerr = np.loadtxt("linear.csv", delimiter=",", unpack=True)
# Plot the data with error bars.
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
plt.xlim(0, 5);
A = np.vander(x, 2) # Take a look at the documentation to see what this function does!
ATA = np.dot(A.T, A / yerr[:, None]**2)
w = np.linalg.solve(ATA, np.dot(A.T, y / yerr**2))
V = np.linalg.inv(ATA)
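A quick self-contained check of this weighted least-squares formula on a noise-free toy line (the data here is invented): the recovered slope and intercept should be exact.

```python
import numpy as np

# Fit y = 2x + 1 with unit uncertainties; the solver should recover [2, 1].
x_t = np.array([0.0, 1.0, 2.0, 3.0])
y_t = 2.0 * x_t + 1.0
yerr_t = np.ones_like(x_t)

A_t = np.vander(x_t, 2)                        # columns: [x, 1]
ATA_t = np.dot(A_t.T, A_t / yerr_t[:, None]**2)
w_t = np.linalg.solve(ATA_t, np.dot(A_t.T, y_t / yerr_t**2))
print(np.round(w_t, 6).tolist())
```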
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
for m, b in np.random.multivariate_normal(w, V, size=50):
plt.plot(x, m*x + b, "g", alpha=0.1)
plt.xlim(0, 5);
def lnlike_linear(theta):
m, b = theta # tuple parameters were removed in Python 3 (PEP 3113)
# CODE HERE
f = m*x + b
return -0.5 * np.sum(((y-f) / yerr) ** 2)
p_1, p_2 = (0.0, 0.0), (0.01, 0.01)
ll_1, ll_2 = lnlike_linear(p_1), lnlike_linear(p_2)
if not np.allclose(ll_2 - ll_1, 535.8707738280209):
raise ValueError("It looks like your implementation is wrong!")
print("☺︎")
def lnprior_linear(theta):
m, b = theta
if not (-10 < m < 10):
return -np.inf
if not (-10 < b < 10):
return -np.inf
return 0.0
def lnpost_linear(theta):
return lnprior_linear(theta) + lnlike_linear(theta)
def metropolis_step(lnpost_function, theta_t, lnpost_t, step_cov):
# CODE HERE
q = np.random.multivariate_normal(theta_t, step_cov)
lp1 = lnpost_function(q)
if lp1 - lnpost_t > np.log(np.random.rand()):
return q, lp1
return theta_t, lnpost_t
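One way to convince yourself the acceptance rule is right is a tiny two-state target: with a symmetric proposal, the chain's long-run visit frequencies must match the target probabilities. This is a standalone sketch with toy probabilities (0.25 and 0.75), independent of the function above.

```python
import numpy as np

# Metropolis on a two-state distribution with a symmetric "flip" proposal.
# Accept if log p(proposed) - log p(current) > log(uniform(0,1)),
# i.e. with probability min(1, p(proposed)/p(current)).
rng = np.random.default_rng(0)
logp = np.log(np.array([0.25, 0.75]))  # target distribution
state, counts = 0, np.zeros(2)
for _ in range(20000):
    prop = 1 - state                   # symmetric proposal: flip the state
    if logp[prop] - logp[state] > np.log(rng.random()):
        state = prop
    counts[state] += 1
print(np.round(counts / counts.sum(), 2).tolist())
```

The printed frequencies should be close to `[0.25, 0.75]`, which follows from detailed balance: 0.25 × 1 = 0.75 × (1/3).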
lptest = lambda x: -0.5 * np.sum(x**2)
th = np.array([0.0])
lp = 0.0
chain = np.empty((10000, 1))
for i in range(len(chain)):
th, lp = metropolis_step(lptest, th, lp, [[0.3]])
chain[i] = th
if np.abs(np.mean(chain)) > 0.1 or np.abs(np.std(chain) - 1.0) > 0.1:
raise ValueError("It looks like your implementation is wrong!")
print("☺︎")
# Edit these guesses.
m_initial = 0.5
b_initial = 3
# You shouldn't need to change this plotting code.
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
plt.plot(x, m_initial*x + b_initial, "g", lw=2)
plt.xlim(0, 5);
# Edit this line to specify the proposal covariance:
step = np.diag([1e-4, 1e-4])
# Edit this line to choose the number of steps you want to take:
nstep = 50000
# Edit this line to set the number steps to discard as burn-in.
nburn = 1000
# You shouldn't need to change any of the lines below here.
p0 = np.array([m_initial, b_initial])
lp0 = lnpost_linear(p0)
chain = np.empty((nstep, len(p0)))
for i in range(len(chain)):
p0, lp0 = metropolis_step(lnpost_linear, p0, lp0, step)
chain[i] = p0
# Compute the acceptance fraction.
acc = float(np.any(np.diff(chain, axis=0), axis=1).sum()) / (len(chain)-1)
print("The acceptance fraction was: {0:.3f}".format(acc))
# Plot the traces.
fig, axes = plt.subplots(2, 1, figsize=(8, 5), sharex=True)
axes[0].plot(chain[:, 0], "k")
axes[0].axhline(w[0], color="g", lw=1.5)
axes[0].set_ylabel("m")
axes[0].axvline(nburn, color="g", alpha=0.5, lw=2)
axes[1].plot(chain[:, 1], "k")
axes[1].axhline(w[1], color="g", lw=1.5)
axes[1].set_ylabel("b")
axes[1].axvline(nburn, color="g", alpha=0.5, lw=2)
axes[1].set_xlabel("step number")
axes[0].set_title("acceptance: {0:.3f}".format(acc));
if np.any(np.abs(np.mean(chain, axis=0)-w)>0.01) or np.any(np.abs(np.cov(chain, rowvar=0)-V)>1e-4):
raise ValueError("It looks like your implementation is wrong!")
print("☺︎")
import triangle
triangle.corner(chain[nburn:, :], labels=["m", "b"], truths=w);
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
for m, b in chain[nburn+np.random.randint(len(chain)-nburn, size=50)]:
plt.plot(x, m*x + b, "g", alpha=0.1)
plt.xlim(0, 5);
# Edit these guesses.
alpha_initial = 100
beta_initial = -1
# These are the edges of the distribution (don't change this).
a, b = 1.0, 5.0
# Load the data.
events = np.loadtxt("poisson.csv")
# Make a correctly normalized histogram of the samples.
bins = np.linspace(a, b, 12)
weights = 1.0 / (bins[1] - bins[0]) + np.zeros(len(events))
plt.hist(events, bins, range=(a, b), histtype="step", color="k", lw=2, weights=weights)
# Plot the guess at the rate.
xx = np.linspace(a, b, 500)
plt.plot(xx, alpha_initial * xx ** beta_initial, "g", lw=2)
# Format the figure.
plt.ylabel("number")
plt.xlabel("x");
sme = np.sum(np.log(events))
K = len(events)
def lnlike_poisson(theta):
alpha, beta = theta
if beta == -1.0:
norm = alpha * (np.log(a) - np.log(b)) + K * np.log(alpha)
else:
bp1 = beta+1
norm = alpha / bp1 * (a**bp1-b**bp1) + K * np.log(alpha)
return norm + beta * sme
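The normalization term in this likelihood is the (negative) integral of the rate function; a self-contained check of the closed form against numerical quadrature, with arbitrary test values:

```python
import numpy as np

# Verify \int_a^b alpha * x**beta dx against a trapezoid-rule estimate
# (parameter values here are arbitrary test inputs).
a_t, b_t, alpha_t, beta_t = 1.0, 5.0, 500.0, -2.0
bp1 = beta_t + 1
closed = alpha_t / bp1 * (b_t**bp1 - a_t**bp1)

xx = np.linspace(a_t, b_t, 100001)
f = alpha_t * xx**beta_t
numeric = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(xx))  # trapezoid rule
print(abs(closed - numeric) < 1e-3)
```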
p_1, p_2 = (1000.0, -1.), (1500., -2.)
ll_1, ll_2 = lnlike_poisson(p_1), lnlike_poisson(p_2)
if not np.allclose(ll_2 - ll_1, 337.039175916):
raise ValueError("It looks like your implementation is wrong!")
print("☺︎")
def lnprior_poisson(theta):
alpha, beta = theta
if not (0 < alpha < 1000):
return -np.inf
if not (-10 < beta < 10):
return -np.inf
return 0.0
def lnpost_poisson(theta):
return lnprior_poisson(theta) + lnlike_poisson(theta)
# Edit this line to specify the proposal covariance:
step = np.diag([100., 1e-1])
# Edit this line to choose the number of steps you want to take:
nstep = 50000
# Edit this line to set the number steps to discard as burn-in.
nburn = 1000
# You shouldn't need to change any of the lines below here.
p0 = np.array([alpha_initial, beta_initial])
lp0 = lnpost_poisson(p0)
chain = np.empty((nstep, len(p0)))
for i in range(len(chain)):
p0, lp0 = metropolis_step(lnpost_poisson, p0, lp0, step)
chain[i] = p0
# Compute the acceptance fraction.
acc = float(np.any(np.diff(chain, axis=0), axis=1).sum()) / (len(chain)-1)
print("The acceptance fraction was: {0:.3f}".format(acc))
# Plot the traces.
fig, axes = plt.subplots(2, 1, figsize=(8, 5), sharex=True)
axes[0].plot(chain[:, 0], "k")
axes[0].set_ylabel("alpha")
axes[0].axvline(nburn, color="g", alpha=0.5, lw=2)
axes[1].plot(chain[:, 1], "k")
axes[1].set_ylabel("beta")
axes[1].axvline(nburn, color="g", alpha=0.5, lw=2)
axes[1].set_xlabel("step number")
axes[0].set_title("acceptance: {0:.3f}".format(acc));
triangle.corner(chain[nburn:], labels=["alpha", "beta"], truths=[500, -2]);
plt.hist(events, bins, range=(a, b), histtype="step", color="k", lw=2, weights=weights)
# Plot the guess at the rate.
xx = np.linspace(a, b, 500)
for alpha, beta in chain[nburn+np.random.randint(len(chain)-nburn, size=50)]:
plt.plot(xx, alpha * xx ** beta, "g", alpha=0.1)
# Format the figure.
plt.ylabel("number")
plt.xlabel("x");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If this works, the output should greet you without throwing any errors. If so, that's pretty much all we need so let's get started with some MCMC!
Step2: Now we'll load the datapoints and plot them. When you execute the following cell, you should see a plot of the data. If not, make sure that you run the import cell from above first.
Step3: As I mentioned previously, it is pretty silly to use MCMC to solve this problem because the maximum likelihood and full posterior probability distribution (under infinitely broad priors) for the slope and intercept of the line are known analytically. Therefore, let's compute what the right answer should be before we even start. The analytic result for the posterior probability distribution is a 2-d Gaussian with mean
Step4: We'll save these results for later to compare them to the result computed using MCMC but for now, it's nice to take a look and see what this prediction looks like. To do this, we'll sample 24 slopes and intercepts from this 2d Gaussian and overplot them on the data.
Step5: This plot is a visualization of our posterior expectations for the true underlying line that generated these data. We'll reuse this plot a few times later to test the results of our code.
Step6: After you're satisfied with your implementation, run the following cell. In this cell, we're checking to see if your code is right. If it is, you'll see a smiling face (☺︎) but if not, you'll get an error message.
Step7: If you don't get the ☺︎, go back and try to debug your model. Iterate until your result is correct.
Step8: Metropolis(–Hastings) MCMC
Step9: As before, here's a simple test for this function. When you run the following cell it will either print a smile or throw an exception. Since the algorithm is random, it might occasionally fail this test so if it fails once, try running it again. If it fails a second time, edit your implementation until the test consistently passes.
Step10: Running the Markov Chain
Step11: In the next cell, we'll start from this initial guess for the slope and intercept and walk through parameter space (using the transition probability from above) to generate a Markov Chain of samples from the posterior probability.
Step12: The results of the MCMC run are stored in the array called chain with dimensions (nstep, 2). These are samples from the posterior probability density for the parameters. We know from above that this should be a Gaussian with mean $\mathbf{w}$ and covariance $\mathbf{V}$ so let's compare the sample mean and covariance to the analytic result that we computed above
Step13: If you don't get a smile here, that could mean a few things
Step14: This plot is a representation of our constraints on the posterior probability for the slope and intercept conditioned on the data. The 2-D plot shows the full posterior and the two 1-D plots show the constraints for each parameter marginalized over the other.
Step15: It is always useful to make a plot like this because it lets you see if your model is capable of describing your data or if there is anything catastrophically wrong.
Step16: In the following cell, you need to implement the log-likelihood function for the problem (same as above)
Step17: As before, edit your implementation until the following test passes.
Step18: Once you're happy with this implementation, we'll define the full probabilistic model including a prior. As before, I've chosen a broad flat prior on alpha and beta but you should feel free to change this.
Step19: Now let's run the MCMC for this model. As before, you should tune the parameters of the algorithm until you get a reasonable acceptance fraction ($\sim 25- 40\%$) and the chains seem converged.
Step20: Once you're happy with to convergence of your chain, plot the results as a corner plot (compared to the values that I used to generate the dataset; $\alpha = 500$ and $\beta = -2$) and plot the posterior predictive distribution.
|
7,002
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import load_iris
iris = load_iris()
print(type(iris.data))
print(type(iris.target))
import pandas as pd
import numpy as np
%matplotlib inline
import pandas as pd
from sklearn import datasets
iris = datasets.load_iris()
pd.DataFrame({'feature name': iris.feature_names})
import pandas as pd
from sklearn import datasets
iris = datasets.load_iris()
pd.DataFrame({'target name': iris.target_names})
import pandas as pd
from sklearn import datasets
iris = datasets.load_iris()
# How many data points (rows) x how many features (columns)
print(iris.data.shape)
print(iris.target.shape)
# convert to pandas df (adding real column names)
iris.df = pd.DataFrame(iris.data,
columns = iris.feature_names)
# first few rows
iris.df.head()
# summary stats
iris.df.describe()
# Standardization aka scaling
from sklearn import preprocessing, datasets
# make sure we have iris loaded
iris = datasets.load_iris()
X, y = iris.data, iris.target
# standardize each feature to zero mean and unit variance
X_scaled = preprocessing.scale(X)
# how does it look now
pd.DataFrame(X_scaled).head()
# let's just confirm our standardization worked (mean is 0 w/ unit variance)
pd.DataFrame(X_scaled).describe()
# also could:
#print(X_scaled.mean(axis = 0))
#print(X_scaled.std(axis = 0))
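Under the hood, `preprocessing.scale` just subtracts each column's mean and divides by its standard deviation; a pure-NumPy sketch on made-up data:

```python
import numpy as np

# Standardization by hand: zero mean and unit variance per column.
X_toy = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
X_std = (X_toy - X_toy.mean(axis=0)) / X_toy.std(axis=0)
print(np.allclose(X_std.mean(axis=0), 0), np.allclose(X_std.std(axis=0), 1))
```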
# Standardization aka scaling
from sklearn import preprocessing, datasets
# make sure we have iris loaded
iris = datasets.load_iris()
X, y = iris.data, iris.target
# normalize each sample (row) to unit L1 norm
X_norm = preprocessing.normalize(X, norm='l1')
# how does it look now
pd.DataFrame(X_norm).tail()
# summary stats for the normalized data
pd.DataFrame(X_norm).describe()
# cumulative sum of normalized and original data:
#print(pd.DataFrame(X_norm.cumsum().reshape(X.shape)).tail())
#print(pd.DataFrame(X).cumsum().tail())
# unit norm (convert to unit vectors) - all row sums should be 1 now
X_norm.sum(axis = 1)
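Likewise, `preprocessing.normalize(X, norm='l1')` divides each row by the sum of the absolute values in that row, so every row sums to 1; a pure-NumPy sketch on toy data:

```python
import numpy as np

# L1 normalization by hand: each row divided by its absolute row sum.
X_toy = np.array([[1.0, 3.0], [2.0, 2.0]])
X_l1 = X_toy / np.abs(X_toy).sum(axis=1, keepdims=True)
print(X_l1.tolist())
```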
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's Dive In!
Step2: Features (aka columns in data)
Step3: Targets (aka labels)
Step4: sklearn TIP
Step5: <b>Sneak a peek at data (a reminder of your pandas dataframe methods)</b>
Step6: <b>Describe the dataset with some summary statistics</b>
Step7: We don't have to do much with the iris dataset. It has no missing values, it's already in numpy arrays, and it has the correct shape for sklearn. However, we could try <b>standardization</b> and/or <b>normalization</b>. (Later, in the transforms section, we will show one-hot encoding, a preprocessing step.)
Step8: PRO TIP
|
7,003
|
<ASSISTANT_TASK:>
Python Code:
import sys
from docplex.cp.model import *
mdl0 = CpoModel()
masonry = mdl0.interval_var(size=35)
carpentry = mdl0.interval_var(size=15)
plumbing = mdl0.interval_var(size=40)
ceiling = mdl0.interval_var(size=15)
roofing = mdl0.interval_var(size=5)
painting = mdl0.interval_var(size=10)
windows = mdl0.interval_var(size=5)
facade = mdl0.interval_var(size=10)
garden = mdl0.interval_var(size=5)
moving = mdl0.interval_var(size=5)
mdl0.add( mdl0.end_before_start(masonry, carpentry) )
mdl0.add( mdl0.end_before_start(masonry, plumbing) )
mdl0.add( mdl0.end_before_start(masonry, ceiling) )
mdl0.add( mdl0.end_before_start(carpentry, roofing) )
mdl0.add( mdl0.end_before_start(ceiling, painting) )
mdl0.add( mdl0.end_before_start(roofing, windows) )
mdl0.add( mdl0.end_before_start(roofing, facade) )
mdl0.add( mdl0.end_before_start(plumbing, facade) )
mdl0.add( mdl0.end_before_start(roofing, garden) )
mdl0.add( mdl0.end_before_start(plumbing, garden) )
mdl0.add( mdl0.end_before_start(windows, moving) )
mdl0.add( mdl0.end_before_start(facade, moving) )
mdl0.add( mdl0.end_before_start(garden, moving) )
mdl0.add( mdl0.end_before_start(painting, moving) )
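With no other constraints, these precedences alone determine the earliest feasible schedule. A pure-Python longest-path pass (no solver needed) reproduces the makespan the CP model should find; the task data below is copied from the cells above.

```python
# Earliest-start schedule implied by the end_before_start constraints:
# relax start times along the precedence edges until a fixed point.
durations = {"masonry": 35, "carpentry": 15, "plumbing": 40, "ceiling": 15,
             "roofing": 5, "painting": 10, "windows": 5, "facade": 10,
             "garden": 5, "moving": 5}
precedences = [("masonry", "carpentry"), ("masonry", "plumbing"),
               ("masonry", "ceiling"), ("carpentry", "roofing"),
               ("ceiling", "painting"), ("roofing", "windows"),
               ("roofing", "facade"), ("plumbing", "facade"),
               ("roofing", "garden"), ("plumbing", "garden"),
               ("windows", "moving"), ("facade", "moving"),
               ("garden", "moving"), ("painting", "moving")]
start = {t: 0 for t in durations}
changed = True
while changed:  # the precedence graph is a DAG, so this terminates
    changed = False
    for a, b in precedences:
        if start[a] + durations[a] > start[b]:
            start[b] = start[a] + durations[a]
            changed = True
print(start["moving"] + durations["moving"])  # earliest possible end: 90
```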
# Solve the model
print("\nSolving model....")
msol0 = mdl0.solve(TimeLimit=10)
print("done")
if msol0:
var_sol = msol0.get_var_solution(masonry)
print("Masonry : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol0.get_var_solution(carpentry)
print("Carpentry : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol0.get_var_solution(plumbing)
print("Plumbing : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol0.get_var_solution(ceiling)
print("Ceiling : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol0.get_var_solution(roofing)
print("Roofing : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol0.get_var_solution(painting)
print("Painting : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol0.get_var_solution(windows)
print("Windows : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol0.get_var_solution(facade)
print("Facade : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol0.get_var_solution(moving)
print("Moving : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
else:
print("No solution found")
import docplex.cp.utils_visu as visu
import matplotlib.pyplot as plt
%matplotlib inline
#Change the plot size
from pylab import rcParams
rcParams['figure.figsize'] = 15, 3
if msol0:
wt = msol0.get_var_solution(masonry)
visu.interval(wt, 'lightblue', 'masonry')
wt = msol0.get_var_solution(carpentry)
visu.interval(wt, 'lightblue', 'carpentry')
wt = msol0.get_var_solution(plumbing)
visu.interval(wt, 'lightblue', 'plumbing')
wt = msol0.get_var_solution(ceiling)
visu.interval(wt, 'lightblue', 'ceiling')
wt = msol0.get_var_solution(roofing)
visu.interval(wt, 'lightblue', 'roofing')
wt = msol0.get_var_solution(painting)
visu.interval(wt, 'lightblue', 'painting')
wt = msol0.get_var_solution(windows)
visu.interval(wt, 'lightblue', 'windows')
wt = msol0.get_var_solution(facade)
visu.interval(wt, 'lightblue', 'facade')
wt = msol0.get_var_solution(moving)
visu.interval(wt, 'lightblue', 'moving')
visu.show()
import sys
from docplex.cp.model import *
mdl1 = CpoModel()
masonry = mdl1.interval_var(size=35)
carpentry = mdl1.interval_var(size=15)
plumbing = mdl1.interval_var(size=40)
ceiling = mdl1.interval_var(size=15)
roofing = mdl1.interval_var(size=5)
painting = mdl1.interval_var(size=10)
windows = mdl1.interval_var(size=5)
facade = mdl1.interval_var(size=10)
garden = mdl1.interval_var(size=5)
moving = mdl1.interval_var(size=5)
mdl1.add( mdl1.end_before_start(masonry, carpentry) )
mdl1.add( mdl1.end_before_start(masonry, plumbing) )
mdl1.add( mdl1.end_before_start(masonry, ceiling) )
mdl1.add( mdl1.end_before_start(carpentry, roofing) )
mdl1.add( mdl1.end_before_start(ceiling, painting) )
mdl1.add( mdl1.end_before_start(roofing, windows) )
mdl1.add( mdl1.end_before_start(roofing, facade) )
mdl1.add( mdl1.end_before_start(plumbing, facade) )
mdl1.add( mdl1.end_before_start(roofing, garden) )
mdl1.add( mdl1.end_before_start(plumbing, garden) )
mdl1.add( mdl1.end_before_start(windows, moving) )
mdl1.add( mdl1.end_before_start(facade, moving) )
mdl1.add( mdl1.end_before_start(garden, moving) )
mdl1.add( mdl1.end_before_start(painting, moving) )
obj = mdl1.minimize( 400 * mdl1.max([mdl1.end_of(moving) - 100, 0])
+ 200 * mdl1.max([25 - mdl1.start_of(masonry), 0])
+ 300 * mdl1.max([75 - mdl1.start_of(carpentry), 0])
+ 100 * mdl1.max([75 - mdl1.start_of(ceiling), 0]) )
mdl1.add(obj)
# Solve the model
print("\nSolving model....")
msol1 = mdl1.solve(TimeLimit=20)
print("done")
if msol1:
print("Cost will be " + str(msol1.get_objective_values()[0]))
var_sol = msol1.get_var_solution(masonry)
print("Masonry : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol1.get_var_solution(carpentry)
print("Carpentry : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol1.get_var_solution(plumbing)
print("Plumbing : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol1.get_var_solution(ceiling)
print("Ceiling : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol1.get_var_solution(roofing)
print("Roofing : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol1.get_var_solution(painting)
print("Painting : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol1.get_var_solution(windows)
print("Windows : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol1.get_var_solution(facade)
print("Facade : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
var_sol = msol1.get_var_solution(moving)
print("Moving : {}..{}".format(var_sol.get_start(), var_sol.get_end()))
else:
print("No solution found")
import docplex.cp.utils_visu as visu
import matplotlib.pyplot as plt
%matplotlib inline
#Change the plot size
from pylab import rcParams
rcParams['figure.figsize'] = 15, 3
if msol1:
wt = msol1.get_var_solution(masonry)
visu.interval(wt, 'lightblue', 'masonry')
wt = msol1.get_var_solution(carpentry)
visu.interval(wt, 'lightblue', 'carpentry')
wt = msol1.get_var_solution(plumbing)
visu.interval(wt, 'lightblue', 'plumbing')
wt = msol1.get_var_solution(ceiling)
visu.interval(wt, 'lightblue', 'ceiling')
wt = msol1.get_var_solution(roofing)
visu.interval(wt, 'lightblue', 'roofing')
wt = msol1.get_var_solution(painting)
visu.interval(wt, 'lightblue', 'painting')
wt = msol1.get_var_solution(windows)
visu.interval(wt, 'lightblue', 'windows')
wt = msol1.get_var_solution(facade)
visu.interval(wt, 'lightblue', 'facade')
wt = msol1.get_var_solution(moving)
visu.interval(wt, 'lightblue', 'moving')
visu.show()
NbHouses = 5
WorkerNames = ["Joe", "Jim"]
TaskNames = ["masonry", "carpentry", "plumbing",
"ceiling", "roofing", "painting",
"windows", "facade", "garden", "moving"]
Duration = [35, 15, 40, 15, 5, 10, 5, 10, 5, 5]
Worker = {"masonry" : "Joe" ,
"carpentry": "Joe" ,
"plumbing" : "Jim" ,
"ceiling" : "Jim" ,
"roofing" : "Joe" ,
"painting" : "Jim" ,
"windows" : "Jim" ,
"facade" : "Joe" ,
"garden" : "Joe" ,
"moving" : "Jim"}
ReleaseDate = [ 0, 0, 151, 59, 243]
DueDate = [120, 212, 304, 181, 425]
Weight = [100.0, 100.0, 100.0, 200.0, 100.0]
Precedences = [("masonry", "carpentry"),("masonry", "plumbing"),
("masonry", "ceiling"), ("carpentry", "roofing"),
("ceiling", "painting"), ("roofing", "windows"),
("roofing", "facade"), ("plumbing", "facade"),
("roofing", "garden"), ("plumbing", "garden"),
("windows", "moving"), ("facade", "moving"),
("garden", "moving"), ("painting", "moving")]
Houses = range(NbHouses)
import sys
from docplex.cp.model import *
mdl2 = CpoModel()
houses = [mdl2.interval_var(start=(ReleaseDate[i], INTERVAL_MAX), name="house"+str(i)) for i in Houses]
TaskNames_ids = {}
itvs = {}
for h in Houses:
for i,t in enumerate(TaskNames):
_name = str(h)+"_"+str(t)
itvs[(h,t)] = mdl2.interval_var(size=Duration[i], name=_name)
TaskNames_ids[_name] = i
for h in Houses:
for p in Precedences:
mdl2.add(mdl2.end_before_start(itvs[(h,p[0])], itvs[(h,p[1])]) )
for h in Houses:
mdl2.add( mdl2.span(houses[h], [itvs[(h,t)] for t in TaskNames] ) )
transitionTimes = transition_matrix([[int(abs(i - j)) for j in Houses] for i in Houses])
workers = {w : mdl2.sequence_var([ itvs[(h,t)] for h in Houses for t in TaskNames if Worker[t]==w ],
types=[h for h in Houses for t in TaskNames if Worker[t]==w ], name="workers_"+w)
for w in WorkerNames}
for w in WorkerNames:
mdl2.add( mdl2.no_overlap(workers[w], transitionTimes) )
# create the obj and add it.
mdl2.add(
mdl2.minimize(
mdl2.sum(Weight[h] * mdl2.max([0, mdl2.end_of(houses[h])-DueDate[h]]) + mdl2.length_of(houses[h]) for h in Houses)
)
)
# Solve the model
print("\nSolving model....")
msol2 = mdl2.solve(FailLimit=30000)
print("done")
if msol2:
print("Cost will be " + str(msol2.get_objective_values()[0]))
else:
print("No solution found")
# Viewing the results of sequencing problems in a Gantt chart
# (double click on the gantt to see details)
import docplex.cp.utils_visu as visu
import matplotlib.pyplot as plt
%matplotlib inline
#Change the plot size
from pylab import rcParams
rcParams['figure.figsize'] = 15, 3
def showsequence(msol, s, setup, tp):
seq = msol.get_var_solution(s)
visu.sequence(name=s.get_name())
vs = seq.get_value()
for v in vs:
nm = v.get_name()
visu.interval(v, tp[TaskNames_ids[nm]], nm)
for i in range(len(vs) - 1):
end = vs[i].get_end()
tp1 = tp[TaskNames_ids[vs[i].get_name()]]
tp2 = tp[TaskNames_ids[vs[i + 1].get_name()]]
visu.transition(end, end + setup.get_value(tp1, tp2))
if msol2:
visu.timeline("Solution for SchedSetup")
for w in WorkerNames:
types=[h for h in Houses for t in TaskNames if Worker[t]==w]
showsequence(msol2, workers[w], transitionTimes, types)
visu.show()
import sys
from docplex.cp.model import *
mdl3 = CpoModel()
NbHouses = 5
WorkerNames = ["Joe", "Jim" ]
TaskNames = ["masonry","carpentry","plumbing","ceiling","roofing","painting","windows","facade","garden","moving"]
Duration = [35,15,40,15,5,10,5,10,5,5]
Worker = {"masonry":"Joe","carpentry":"Joe","plumbing":"Jim","ceiling":"Jim",
"roofing":"Joe","painting":"Jim","windows":"Jim","facade":"Joe",
"garden":"Joe","moving":"Jim"}
Precedences = { ("masonry","carpentry"),("masonry","plumbing"),
("masonry","ceiling"),("carpentry","roofing"),
("ceiling","painting"),("roofing","windows"),
("roofing","facade"),("plumbing","facade"),
("roofing","garden"),("plumbing","garden"),
("windows","moving"),("facade","moving"),
("garden","moving"),("painting","moving") }
Houses = range(NbHouses)
Breaks ={
"Joe" : [
(5,14),(19,21),(26,28),(33,35),(40,42),(47,49),(54,56),(61,63),
(68,70),(75,77),(82,84),(89,91),(96,98),(103,105),(110,112),(117,119),
(124,133),(138,140),(145,147),(152,154),(159,161),(166,168),(173,175),
(180,182),(187,189),(194,196),(201,203),(208,210),(215,238),(243,245),(250,252),
(257,259),(264,266),(271,273),(278,280),(285,287),(292,294),(299,301),
(306,308),(313,315),(320,322),(327,329),(334,336),(341,343),(348,350),
(355,357),(362,364),(369,378),(383,385),(390,392),(397,399),(404,406),(411,413),
(418,420),(425,427),(432,434),(439,441),(446,448),(453,455),(460,462),(467,469),
(474,476),(481,483),(488,490),(495,504),(509,511),(516,518),(523,525),(530,532),
(537,539),(544,546),(551,553),(558,560),(565,567),(572,574),(579,602),(607,609),
(614,616),(621,623),(628,630),(635,637),(642,644),(649,651),(656,658),(663,665),
(670,672),(677,679),(684,686),(691,693),(698,700),(705,707),(712,714),
(719,721),(726,728)
],
"Jim" : [
(5,7),(12,14),(19,21),(26,42),(47,49),(54,56),(61,63),(68,70),(75,77),
(82,84),(89,91),(96,98),(103,105),(110,112),(117,119),(124,126),(131,133),
(138,140),(145,147),(152,154),(159,161),(166,168),(173,175),(180,182),(187,189),
(194,196),(201,225),(229,231),(236,238),(243,245),(250,252),(257,259),
(264,266),(271,273),(278,280),(285,287),(292,294),(299,301),(306,315),
(320,322),(327,329),(334,336),(341,343),(348,350),(355,357),(362,364),(369,371),
(376,378),(383,385),(390,392),(397,413),(418,420),(425,427),(432,434),(439,441),
(446,448),(453,455),(460,462),(467,469),(474,476),(481,483),(488,490),(495,497),
(502,504),(509,511),(516,518),(523,525),(530,532),(537,539),(544,546),
(551,553),(558,560),(565,581),(586,588),(593,595),(600,602),(607,609),
(614,616),(621,623),(628,630),(635,637),(642,644),(649,651),(656,658),
(663,665),(670,672),(677,679),(684,686),(691,693),(698,700),(705,707),
(712,714),(719,721),(726,728)]
}
from collections import namedtuple
Break = namedtuple('Break', ['start', 'end'])
Calendar = {}
mymax = max(max(v for k,v in Breaks[w]) for w in WorkerNames)
for w in WorkerNames:
step = CpoStepFunction()
step.set_value(0, mymax, 100)
for b in Breaks[w]:
t = Break(*b)
step.set_value(t.start, t.end, 0)
Calendar[w] = step
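Conceptually, each `CpoStepFunction` built here is just a piecewise-constant availability function: 100% intensity everywhere, dropping to 0 inside a worker's break intervals. A pure-Python sketch with a toy break list (not one of the real calendars above):

```python
# Availability as a step function: 0 inside a break, 100 otherwise.
breaks = [(5, 14), (19, 21)]  # toy break intervals [start, end)

def intensity(t, brks):
    return 0 if any(s <= t < e for s, e in brks) else 100

print([intensity(t, breaks) for t in (0, 5, 13, 14, 20)])
```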
#TaskNames_ids = {}
itvs = {}
for h in Houses:
for i,t in enumerate(TaskNames):
_name = str(h) + "_" + str(t)
itvs[(h,t)] = mdl3.interval_var(size=Duration[i], intensity=Calendar[Worker[t]], name=_name)
for h in Houses:
for p in Precedences:
mdl3.add( mdl3.end_before_start(itvs[h,p[0]], itvs[h,p[1]]) )
for w in WorkerNames:
mdl3.add( mdl3.no_overlap( [itvs[h,t] for h in Houses for t in TaskNames if Worker[t]==w] ) )
for h in Houses:
for t in TaskNames:
mdl3.add(mdl3.forbid_start(itvs[h,t], Calendar[Worker[t]]))
mdl3.add(mdl3.forbid_end (itvs[h,t], Calendar[Worker[t]]))
mdl3.add( mdl3.minimize(mdl3.max(mdl3.end_of(itvs[h,"moving"]) for h in Houses)))
# Solve the model
print("\nSolving model....")
msol3 = mdl3.solve(FailLimit=30000)
print("done")
if msol3:
    print("Cost will be " + str( msol3.get_objective_values()[0] ))
    # Allocate tasks to workers
    tasks = {w : [] for w in WorkerNames}

for k,v in Worker.items():
tasks[v].append(k)
types = {t : i for i,t in enumerate(TaskNames)}
import docplex.cp.utils_visu as visu
import matplotlib.pyplot as plt
%matplotlib inline
#Change the plot size
from pylab import rcParams
rcParams['figure.figsize'] = 15, 3
visu.timeline('Solution SchedCalendar')
for w in WorkerNames:
visu.panel()
visu.pause(Calendar[w])
visu.sequence(name=w,
intervals=[(msol3.get_var_solution(itvs[h,t]), types[t], t) for t in tasks[w] for h in Houses])
visu.show()
else:
print("No solution found")
NbWorkers = 3
NbHouses = 5
TaskNames = {"masonry","carpentry","plumbing",
"ceiling","roofing","painting",
"windows","facade","garden","moving"}
Duration = [35, 15, 40, 15, 5, 10, 5, 10, 5, 5]
ReleaseDate = [31, 0, 90, 120, 90]
Precedences = [("masonry", "carpentry"), ("masonry", "plumbing"), ("masonry", "ceiling"),
("carpentry", "roofing"), ("ceiling", "painting"), ("roofing", "windows"),
("roofing", "facade"), ("plumbing", "facade"), ("roofing", "garden"),
("plumbing", "garden"), ("windows", "moving"), ("facade", "moving"),
("garden", "moving"), ("painting", "moving")]
Houses = range(NbHouses)
import sys
from docplex.cp.model import *
mdl4 = CpoModel()
itvs = {}
for h in Houses:
for i,t in enumerate(TaskNames):
itvs[h,t] = mdl4.interval_var(start = [ReleaseDate[h], INTERVAL_MAX], size=Duration[i])
workers_usage = step_at(0, 0)
for h in Houses:
for t in TaskNames:
workers_usage += mdl4.pulse(itvs[h,t],1)
cash = step_at(0, 0)
for p in Houses:
cash += mdl4.step_at(60*p, 30000)
for h in Houses:
for i,t in enumerate(TaskNames):
cash -= mdl4.step_at_start(itvs[h,t], 200*Duration[i])
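The `cash` expression built above is a cumulative function: `step_at` adds a 30000 payment every 60 days and `step_at_start` subtracts 200 per day of duration when a task starts; the model later constrains it to stay non-negative. A plain-Python sketch of what that constraint checks (a hypothetical helper, not the full docplex semantics):

```python
def min_running_balance(events):
    # events: (time, amount) pairs; returns the lowest running balance,
    # processing deposits before withdrawals at equal times.
    balance = 0.0
    lowest = 0.0
    for _, amount in sorted(events, key=lambda e: (e[0], -e[1])):
        balance += amount
        lowest = min(lowest, balance)
    return lowest

# One 30000 payment at t=0, then a 35-day masonry task started at t=0
events = [(0, 30000), (0, -200 * 35)]
```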
for h in Houses:
for p in Precedences:
mdl4.add( mdl4.end_before_start(itvs[h,p[0]], itvs[h,p[1]]) )
mdl4.add( workers_usage <= NbWorkers )
mdl4.add( cash >= 0 )
mdl4.add(
mdl4.minimize(
mdl4.max( mdl4.end_of(itvs[h,"moving"]) for h in Houses)
)
)
# Solve the model
print("\nSolving model....")
msol4 = mdl4.solve(FailLimit=30000)
print("done")
if msol4:
print("Cost will be " + str( msol4.get_objective_values()[0] ))
import docplex.cp.utils_visu as visu
import matplotlib.pyplot as plt
%matplotlib inline
#Change the plot size
from pylab import rcParams
rcParams['figure.figsize'] = 15, 3
workersF = CpoStepFunction()
cashF = CpoStepFunction()
for p in range(5):
cashF.add_value(60 * p, INT_MAX, 30000)
for h in Houses:
for i,t in enumerate(TaskNames):
itv = msol4.get_var_solution(itvs[h,t])
workersF.add_value(itv.get_start(), itv.get_end(), 1)
cashF.add_value(itv.start, INT_MAX, -200 * Duration[i])
visu.timeline('Solution SchedCumul')
visu.panel(name="Schedule")
for h in Houses:
for i,t in enumerate(TaskNames):
visu.interval(msol4.get_var_solution(itvs[h,t]), h, t)
visu.panel(name="Workers")
visu.function(segments=workersF, style='area')
visu.panel(name="Cash")
visu.function(segments=cashF, style='area', color='gold')
visu.show()
else:
print("No solution found")
NbHouses = 5
Deadline = 318
Workers = ["Joe", "Jack", "Jim"]
Tasks = ["masonry", "carpentry", "plumbing", "ceiling","roofing", "painting", "windows", "facade","garden", "moving"]
Durations = [35, 15, 40, 15, 5, 10, 5, 10, 5, 5]
Skills = [("Joe","masonry",9),("Joe","carpentry",7),("Joe","ceiling",5),("Joe","roofing",6),
("Joe","windows",8),("Joe","facade",5),("Joe","garden",5),("Joe","moving",6),
("Jack","masonry",5),("Jack","plumbing",7),("Jack","ceiling",8),("Jack","roofing",7),
("Jack","painting",9),("Jack","facade",5),("Jack","garden",5),("Jim","carpentry",5),
("Jim","painting",6),("Jim","windows",5),("Jim","garden",9),("Jim","moving",8)]
Precedences = [("masonry","carpentry"),("masonry","plumbing"),("masonry","ceiling"),
("carpentry","roofing"),("ceiling","painting"),("roofing","windows"),
("roofing","facade"),("plumbing","facade"),("roofing","garden"),
("plumbing","garden"),("windows","moving"),("facade","moving"),
("garden","moving"),("painting","moving")
]
Continuities = [("Joe","masonry","carpentry"),("Jack","roofing","facade"),
("Joe","carpentry", "roofing"),("Jim","garden","moving")]
nbWorkers = len(Workers)
Houses = range(NbHouses)
import sys
from docplex.cp.model import *
mdl5 = CpoModel()
tasks = {}
wtasks = {}
for h in Houses:
for i,t in enumerate(Tasks):
tasks[(h,t)] = mdl5.interval_var(start=[0,Deadline], size=Durations[i])
for s in Skills:
wtasks[(h,s)] = mdl5.interval_var(optional=True)
for h in Houses:
for p in Precedences:
mdl5.add( mdl5.end_before_start(tasks[h,p[0]], tasks[h,p[1]]) )
for h in Houses:
for t in Tasks:
mdl5.add( mdl5.alternative(tasks[h,t], [wtasks[h,s] for s in Skills if s[1]==t]) )
for h in Houses:
for c in Continuities:
for (worker1, task1, l1) in Skills:
if worker1 == c[0] and task1 == c[1]:
for (worker2, task2, l2) in Skills:
if worker2 == c[0] and task2 == c[2]:
mdl5.add(
mdl5.presence_of(wtasks[h,(c[0], task1, l1)])
==
mdl5.presence_of(wtasks[h,(c[0], task2, l2)])
)
for w in Workers:
mdl5.add( mdl5.no_overlap([wtasks[h,s] for h in Houses for s in Skills if s[0]==w]) )
mdl5.add(
mdl5.maximize(
mdl5.sum( s[2] * mdl5.presence_of(wtasks[h,s]) for h in Houses for s in Skills)
)
)
# Solve the model
print("\nSolving model....")
msol5 = mdl5.solve(FailLimit=30000)
print("done")
if msol5:
print("Cost will be "+str( msol5.get_objective_values()[0] ))
worker_idx = {w : i for i,w in enumerate(Workers)}
worker_tasks = [[] for w in range(nbWorkers)] # Tasks assigned to a given worker
for h in Houses:
for s in Skills:
worker = s[0]
wt = wtasks[(h,s)]
worker_tasks[worker_idx[worker]].append(wt)
import docplex.cp.utils_visu as visu
import matplotlib.pyplot as plt
%matplotlib inline
#Change the plot size
from pylab import rcParams
rcParams['figure.figsize'] = 15, 3
visu.timeline('Solution SchedOptional', 0, Deadline)
for i,w in enumerate(Workers):
visu.sequence(name=w)
for t in worker_tasks[worker_idx[w]]:
wt = msol5.get_var_solution(t)
if wt.is_present():
#if desc[t].skills[w] == max(desc[t].skills):
# Green-like color when task is using the most skilled worker
# color = 'lightgreen'
#else:
# Red-like color when task does not use the most skilled worker
# color = 'salmon'
color = 'salmon'
visu.interval(wt, color, wt.get_name())
visu.show()
else:
print("No solution found")
NbHouses = 5
NbWorkers = 2
AllStates = ["clean", "dirty"]
TaskNames = ["masonry","carpentry", "plumbing", "ceiling","roofing","painting","windows","facade","garden","moving"]
Duration = [35,15,40,15,5,10,5,10,5,5]
States = [("masonry","dirty"),("carpentry","dirty"),("plumbing","clean"),
("ceiling","clean"),("roofing","dirty"),("painting","clean"),
("windows","dirty")]
Precedences = [("masonry","carpentry"),("masonry","plumbing"),("masonry","ceiling"),
("carpentry","roofing"),("ceiling","painting"),("roofing","windows"),
("roofing","facade"),("plumbing","facade"),("roofing","garden"),
("plumbing","garden"),("windows","moving"),("facade","moving"),
("garden","moving"),("painting","moving")]
Houses = range(NbHouses)
import sys
from docplex.cp.model import *
mdl6 = CpoModel()
task = {}
for h in Houses:
for i,t in enumerate(TaskNames):
task[(h,t)] = mdl6.interval_var(size = Duration[i])
workers = step_at(0, 0)
for h in Houses:
for t in TaskNames:
workers += mdl6.pulse(task[h,t], 1)
Index = {s : i for i,s in enumerate(AllStates)}
ttvalues = [[0, 0], [0, 0]]
ttvalues[Index["dirty"]][Index["clean"]] = 1
ttime = transition_matrix(ttvalues, name='TTime')
state = { h : state_function(ttime, name="house"+str(h)) for h in Houses}
for h in Houses:
for p in Precedences:
mdl6.add( mdl6.end_before_start(task[h,p[0]], task[h,p[1]]) )
for s in States:
mdl6.add( mdl6.always_equal(state[h], task[h,s[0]], Index[s[1]]) )
mdl6.add( workers <= NbWorkers )
mdl6.add(mdl6.minimize( mdl6.max( mdl6.end_of(task[h,"moving"]) for h in Houses )))
# Solve the model
print("\nSolving model....")
msol6 = mdl6.solve(FailLimit=30000)
print("done")
if msol6:
print("Cost will be " + str( msol6.get_objective_values()[0] ))
import docplex.cp.utils_visu as visu
import matplotlib.pyplot as plt
%matplotlib inline
#Change the plot size
from pylab import rcParams
rcParams['figure.figsize'] = 15, 3
workers_function = CpoStepFunction()
for h in Houses:
for t in TaskNames:
itv = msol6.get_var_solution(task[h,t])
workers_function.add_value(itv.get_start(), itv.get_end(), 1)
visu.timeline('Solution SchedState')
visu.panel(name="Schedule")
for h in Houses:
for t in TaskNames:
visu.interval(msol6.get_var_solution(task[h,t]), h, t)
visu.panel(name="Houses state")
for h in Houses:
f = state[h]
visu.sequence(name=f.get_name(), segments=msol6.get_var_solution(f))
visu.panel(name="Nb of workers")
visu.function(segments=workers_function, style='line')
visu.show()
else:
print("No solution found")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This code creates a CP model container that allows the use of constraints that are specific to constraint programming or to
Step2: Adding the constraints
Step3: Here, the special constraint end_before_start() ensures that one interval variable ends before the other starts.
Step4: Displaying the solution
Step5: To understand the solution found by CP Optimizer to this satisfiability scheduling problem, consider the line
Step6: Note on interval variables
Step7: The declaration of necessary interval variables is done as follows
Step8: Step 3
Step9: To model the cost for starting a task earlier than the preferred starting date, the expression start_of() can be used.
Step 4
Step10: Solving a problem consists of finding a value for each decision variable so that all constraints are satisfied.
Step 5
Step11: Graphical display of the same result is available with
Step12: The overall cost is 5000 and moving will be completed by day 110.
Step13: One part of the objective is based on the time it takes to build a house.
Step 3
Step14: Step 4
Step15: Step 5
Step16: To model the cost associated with the length of time it takes to build a single house, the interval variable associated with the house is constrained to start at the start of the first task of the house and end at the end of the last task.
Step 6
Step17: Step 7
Step18: Each of the tasks requires a particular worker.
Step 8
Step19: Step 9
Step20: The cost for building a house is the sum of the tardiness cost and the number of days it takes from start to finish building the house.
Step 10
Step21: Step 11
Step22: Chapter 4. Adding calendars to the house building problem
Step23: Step 3
Step24: This intensity function is used in creating the task variables for the workers.
Step 4
Step25: The tasks of the house building project have precedence constraints that are added to the model.
Step 5
Step26: Step 6
Step27: Step 7
Step28: Step 8
Step29: Step 9
Step30: Chapter 5. Using cumulative functions in the house building problem
Step31: Step 3
Step32: As the workers are equivalent in this problem, it is better to represent them as one pool of workers instead of as individual workers with no overlap constraints as was done in the earlier examples.
Step 4
Step33: Step 5
Step34: Step 6
Step35: Step 7
Step36: Step 8
Step37: Step 9
Step38: Step 10
Step39: Chapter 6. Using alternative resources in the house building problem
Step40: Step 3
Step41: Step 4
Step42: Step 5
Step43: The expression presence_of() is used to represent whether a task is performed by a worker.
Step44: Step 7
Step45: Step 8
Step46: Step 9
Step47: Chapter 7. Using state functions
Step48: Step 3
Step49: Step 4
Step50: Step 5
Step51: Step 6
Step52: Step 7
Step53: Step 8
Step54: Step 9
|
7,004
|
<ASSISTANT_TASK:>
Python Code:
# Figure 1
from IPython.display import Image, display
Image(url= "http://3.bp.blogspot.com/_UpN7DfJA0j4/TJtUBWPk0SI/AAAAAAAAABY/oWPMtmqJn3k/s1600/mnist_originals.png", width=200, height=200)
from __future__ import print_function # Use a function definition from future version (say 3.x from 2.7 interpreter)
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import time
import cntk as C
%matplotlib inline
# Select the right target device when this notebook is being tested:
if 'TEST_DEVICE' in os.environ:
if os.environ['TEST_DEVICE'] == 'cpu':
C.device.try_set_default_device(C.device.cpu())
else:
C.device.try_set_default_device(C.device.gpu(0))
# Test for CNTK version
if not C.__version__ == "2.0":
raise Exception("this notebook is designed to work with 2.0. Current Version: " + C.__version__)
# Ensure we always get the same amount of randomness
np.random.seed(0)
C.cntk_py.set_fixed_random_seed(1)
C.cntk_py.force_deterministic_algorithms()
# Define the data dimensions
input_dim_model = (1, 28, 28) # images are 28 x 28 with 1 channel of color (gray)
input_dim = 28*28 # used by readers to treat input data as a vector
num_output_classes = 10
# Read a CTF formatted text (as mentioned above) using the CTF deserializer from a file
def create_reader(path, is_training, input_dim, num_label_classes):
ctf = C.io.CTFDeserializer(path, C.io.StreamDefs(
labels=C.io.StreamDef(field='labels', shape=num_label_classes, is_sparse=False),
features=C.io.StreamDef(field='features', shape=input_dim, is_sparse=False)))
return C.io.MinibatchSource(ctf,
randomize = is_training, max_sweeps = C.io.INFINITELY_REPEAT if is_training else 1)
# Ensure the training and test data is available
# We search in two locations in the toolkit for the cached MNIST data set.
data_found=False # A flag to indicate if train/test data found in local cache
for data_dir in [os.path.join("..", "Examples", "Image", "DataSets", "MNIST"),
os.path.join("data", "MNIST")]:
train_file=os.path.join(data_dir, "Train-28x28_cntk_text.txt")
test_file=os.path.join(data_dir, "Test-28x28_cntk_text.txt")
if os.path.isfile(train_file) and os.path.isfile(test_file):
data_found=True
break
if not data_found:
raise ValueError("Please generate the data by completing Lab1_MNIST_DataLoader")
print("Data directory is {0}".format(data_dir))
Image(url="https://www.cntk.ai/jup/cntk103d_conv2d_final.gif", width= 300)
# Plot images with strides of 2 and 1 with padding turned on
images = [("https://www.cntk.ai/jup/cntk103d_padding_strides.gif" , 'With stride = 2'),
("https://www.cntk.ai/jup/cntk103d_same_padding_no_strides.gif", 'With stride = 1')]
for im in images:
print(im[1])
display(Image(url=im[0], width=200, height=200))
x = C.input_variable(input_dim_model)
y = C.input_variable(num_output_classes)
# function to build model
# relu error:1.37
# sigmoid error: 2.69
# leaky relu error: 1.26
# tanh error: 1.4
# def create_model(features):
# with C.layers.default_options(init=C.glorot_uniform(), activation=C.relu):
# #with C.layers.default_options(init=C.glorot_uniform(), activation=C.sigmoid):
# #with C.layers.default_options(init=C.glorot_uniform(), activation=C.leaky_relu):
# #with C.layers.default_options(init=C.glorot_uniform(), activation=C.tanh):
# h = features
# h = C.layers.Convolution2D(filter_shape=(5,5),
# num_filters=8,
# strides=(2,2),
# pad=True, name='first_conv')(h)
# h = C.layers.Convolution2D(filter_shape=(5,5),
# num_filters=16,
# strides=(2,2),
# pad=True, name='second_conv')(h)
# r = C.layers.Dense(num_output_classes, activation=None, name='classify')(h)
# return r
# average pool
# relu error:1.25
# leak relu error: 1.17
# sigmoid error:3.89
# tanh erro:1.47
# def create_model(features):
# with C.layers.default_options(init=C.glorot_uniform(), activation=C.leaky_relu):
# h = features
# h = C.layers.Convolution2D(filter_shape=(5,5),
# num_filters=8,
# strides=(1,1),
# pad=True, name='first_conv')(h)
# h = C.layers.AveragePooling(filter_shape=(5,5),
# strides=(2,2), name = 'first_avg_pool')(h)
# h = C.layers.Convolution2D(filter_shape=(5,5),
# num_filters=16,
# strides=(1,1),
# pad=True, name='second_conv')(h)
# h = C.layers.AveragePooling(filter_shape=(5,5),
# strides=(2,2), name = 'second_avg_pool')(h)
# r = C.layers.Dense(num_output_classes, activation=None, name='classify')(h)
# return r
# max pool
# leaky relu 2*2 2*2 error:1.12
# leaky relu 5*5 5*5 error:1.37
def create_model(features):
with C.layers.default_options(init=C.glorot_uniform(), activation=C.leaky_relu):
h = features
h = C.layers.Convolution2D(filter_shape=(5,5),
num_filters=8,
strides=(1,1),
pad=True, name='first_conv')(h)
h = C.layers.MaxPooling(filter_shape=(5,5),
strides=(2,2), name = 'first_max_pool')(h)
h = C.layers.Convolution2D(filter_shape=(5,5),
num_filters=16,
strides=(1,1),
pad=True, name='second_conv')(h)
h = C.layers.MaxPooling(filter_shape=(5,5),
strides=(2,2), name = 'second_max_pool')(h)
r = C.layers.Dense(num_output_classes, activation=None, name='classify')(h)
return r
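The `MaxPooling` layers above keep, for each pooling window, only the largest activation. A NumPy illustration of 2x2/stride-2 max pooling on a single channel (an illustrative sketch, not CNTK's implementation; `max_pool_2x2` is a made-up name):

```python
import numpy as np

def max_pool_2x2(x):
    # 2x2 max pooling with stride 2 on a 2-D array (H and W even):
    # reshape into 2x2 blocks, then take the max within each block.
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```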
# Create the model
z = create_model(x)
# Print the output shapes / parameters of different components
print("Output Shape of the first convolution layer:", z.first_conv.shape)
print("Bias value of the last dense layer:", z.classify.b.value)
print("Output Shape of the second convolution layer:", z.second_conv.shape)
# Number of parameters in the network
C.logging.log_number_of_parameters(z)
def create_criterion_function(model, labels):
loss = C.cross_entropy_with_softmax(model, labels)
errs = C.classification_error(model, labels)
return loss, errs # (model, labels) -> (loss, error metric)
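`cross_entropy_with_softmax` combines the softmax normalization and the cross-entropy loss in one numerically stable operation. For a single sample it computes the following (a NumPy sketch for illustration; CNTK's actual kernel is fused):

```python
import numpy as np

def softmax_cross_entropy(logits, one_hot):
    # Stable softmax: subtract the max before exponentiating
    z = logits - np.max(logits)
    p = np.exp(z) / np.sum(np.exp(z))
    # Cross-entropy against the one-hot label
    return -np.sum(one_hot * np.log(p))
```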
# Define a utility function to compute the moving average sum.
# A more efficient implementation is possible with np.cumsum() function
def moving_average(a, w=5):
if len(a) < w:
return a[:] # Need to send a copy of the array
return [val if idx < w else sum(a[(idx-w):idx])/w for idx, val in enumerate(a)]
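As the comment above suggests, a vectorized equivalent using `np.cumsum` could look like this (an illustrative alternative with the same semantics: entries before index `w` are passed through unchanged):

```python
import numpy as np

def moving_average_cumsum(a, w=5):
    a = np.asarray(a, dtype=float)
    if len(a) < w:
        return a.copy()
    # c[k] holds sum(a[:k]); the window sum a[i-w:i] is c[i] - c[i-w]
    c = np.cumsum(np.insert(a, 0, 0.0))
    out = a.copy()
    out[w:] = (c[w:-1] - c[:-w-1]) / w
    return out
```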
# Defines a utility that prints the training progress
def print_training_progress(trainer, mb, frequency, verbose=1):
training_loss = "NA"
eval_error = "NA"
if mb%frequency == 0:
training_loss = trainer.previous_minibatch_loss_average
eval_error = trainer.previous_minibatch_evaluation_average
if verbose:
print ("Minibatch: {0}, Loss: {1:.4f}, Error: {2:.2f}%".format(mb, training_loss, eval_error*100))
return mb, training_loss, eval_error
def train_test(train_reader, test_reader, model_func, num_sweeps_to_train_with=10):
# Instantiate the model function; x is the input (feature) variable
# We will scale the input image pixels within 0-1 range by dividing all input value by 255.
model = model_func(x/255)
# Instantiate the loss and error function
loss, label_error = create_criterion_function(model, y)
# Instantiate the trainer object to drive the model training
learning_rate = 0.2
lr_schedule = C.learning_rate_schedule(learning_rate, C.UnitType.minibatch)
learner = C.sgd(z.parameters, lr_schedule)
trainer = C.Trainer(z, (loss, label_error), [learner])
# Initialize the parameters for the trainer
minibatch_size = 64
num_samples_per_sweep = 60000
num_minibatches_to_train = (num_samples_per_sweep * num_sweeps_to_train_with) / minibatch_size
# Map the data streams to the input and labels.
input_map={
y : train_reader.streams.labels,
x : train_reader.streams.features
}
# Uncomment below for more detailed logging
training_progress_output_freq = 500
# Start a timer
start = time.time()
for i in range(0, int(num_minibatches_to_train)):
# Read a mini batch from the training data file
data=train_reader.next_minibatch(minibatch_size, input_map=input_map)
trainer.train_minibatch(data)
print_training_progress(trainer, i, training_progress_output_freq, verbose=1)
# Print training time
print("Training took {:.1f} sec".format(time.time() - start))
# Test the model
test_input_map = {
y : test_reader.streams.labels,
x : test_reader.streams.features
}
# Test data for trained model
test_minibatch_size = 512
num_samples = 10000
num_minibatches_to_test = num_samples // test_minibatch_size
test_result = 0.0
for i in range(num_minibatches_to_test):
# We are loading test data in batches specified by test_minibatch_size
        # Each data point in the minibatch is a MNIST digit image of 784 dimensions
        # that we will classify with the trained model.
data = test_reader.next_minibatch(test_minibatch_size, input_map=test_input_map)
eval_error = trainer.test_minibatch(data)
test_result = test_result + eval_error
# Average of evaluation errors of all test minibatches
print("Average test error: {0:.2f}%".format(test_result*100 / num_minibatches_to_test))
def do_train_test():
global z
z = create_model(x)
reader_train = create_reader(train_file, True, input_dim, num_output_classes)
reader_test = create_reader(test_file, False, input_dim, num_output_classes)
train_test(reader_train, reader_test, z)
do_train_test()
print("Bias value of the last dense layer:", z.classify.b.value)
out = C.softmax(z)
# Read the data for evaluation
reader_eval=create_reader(test_file, False, input_dim, num_output_classes)
eval_minibatch_size = 25
eval_input_map = {x: reader_eval.streams.features, y:reader_eval.streams.labels}
data = reader_eval.next_minibatch(eval_minibatch_size, input_map=eval_input_map)
img_label = data[y].asarray()
img_data = data[x].asarray()
# reshape img_data to: M x 1 x 28 x 28 to be compatible with model
img_data = np.reshape(img_data, (eval_minibatch_size, 1, 28, 28))
predicted_label_prob = [out.eval(img_data[i]) for i in range(len(img_data))]
# Find the index with the maximum value for both predicted as well as the ground truth
pred = [np.argmax(predicted_label_prob[i]) for i in range(len(predicted_label_prob))]
gtlabel = [np.argmax(img_label[i]) for i in range(len(img_label))]
print("Label :", gtlabel[:25])
print("Predicted:", pred)
# Plot a random image
sample_number = 5
plt.imshow(img_data[sample_number].reshape(28,28), cmap="gray_r")
plt.axis('off')
img_gt, img_pred = gtlabel[sample_number], pred[sample_number]
print("Image Label: ", img_pred)
# Plot images with strides of 2 and 1 with padding turned on
images = [("https://www.cntk.ai/jup/c103d_max_pooling.gif" , 'Max pooling'),
("https://www.cntk.ai/jup/c103d_average_pooling.gif", 'Average pooling')]
for im in images:
print(im[1])
display(Image(url=im[0], width=200, height=200))
# Modify this model
def create_model(features):
with C.layers.default_options(init = C.glorot_uniform(), activation = C.relu):
h = features
h = C.layers.Convolution2D(filter_shape=(5,5),
num_filters=8,
strides=(2,2),
pad=True, name='first_conv')(h)
h = C.layers.Convolution2D(filter_shape=(5,5),
num_filters=16,
strides=(2,2),
pad=True, name='second_conv')(h)
r = C.layers.Dense(num_output_classes, activation = None, name='classify')(h)
return r
# do_train_test()
import numpy
from PIL import Image
img = Image.open("MysteryNumberD.bmp").convert("F")
imgarr = numpy.asarray(img)
imgarr = np.reshape(imgarr,(1,28,28))
np.argmax(out.eval(imgarr))
type(imgarr)
np.reshape(imgarr,(1,28,28)).ndim
out.eval(np.reshape(imgarr,(1,28,28)))
type(img_data[1])
img_data[1].ndim
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Goal
Step2: In the block below, we check if we are running this notebook in the CNTK internal test machines by looking for environment variables defined there. We then select the right target device (GPU vs CPU) to test this notebook. In other cases, we use CNTK's default policy to use the best available device (GPU, if available, else CPU).
Step3: Data reading
Step4: Data reading
Step5: <a id='#Model Creation'></a>
Step6: Convolution layers incorporate the following key features
Step7: Building our CNN models
Step8: The first model we build is a simple convolution only network. Here we have two convolutional layers. Since, our task is to detect the 10 digits in the MNIST database, the output of the network should be a vector of length 10, 1 element corresponding to each output class. This is achieved by projecting the output of the last convolutional layer using a dense layer with the output being num_output_classes. We have seen this before with Logistic Regression and MLP where features were mapped to the number of classes in the final layer. Also, note that since we will be using the softmax operation that is combined with the cross entropy loss function during training (see a few cells below), the final dense layer has no activation function associated with it.
Step9: Let us create an instance of the model and inspect the different components of the model. z will be used to represent the output of a network. In this model, we use the relu activation function. Note
Step10: Understanding the number of learnable parameters in a model is key to deep learning since there is a dependency between the number of parameters and the amount of data one needs to have to train the model.
Step11: Understanding Parameters
Step12: Next we will need a helper function to perform the model training. First let us create additional helper functions that will be needed to visualize different functions associated with training.
Step13: Configure training
Step14: <a id='#Run the trainer'></a>
Step15: Note, the average test error is very comparable to our training error indicating that our model has good "out of sample" error a.k.a. generalization error. This implies that our model can very effectively deal with previously unseen observations (during the training process). This is key to avoid overfitting.
Step16: Evaluation / Prediction
Step17: Let us test a small minibatch sample from the test data.
Step18: Let us visualize some of the results
Step19: Pooling Layer
Step20: Typical convolution network
Step21: Knowledge Check
|
7,005
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
%load neon_aop_refl_hdf5_functions.py
#Define inputs
filename = '../data/SERC/hyperspectral/NEON_D02_SERC_DP1_20160807_160559_reflectance.h5'
sercRefl, sercRefl_md, wavelengths = h5refl2array(filename)
clipExtDict = {}
clipExtDict['xMin'] = 367400.
clipExtDict['xMax'] = 368100.
clipExtDict['yMin'] = 4305750.
clipExtDict['yMax'] = 4306350.
clipExtent = (clipExtDict['xMin'],clipExtDict['xMax'],clipExtDict['yMin'],clipExtDict['yMax'])
clipIndex = calc_clip_index(clipExtDict,sercRefl_md['ext_dict'])
sercReflSubset = subset_clean_refl(sercRefl,sercRefl_md,clipIndex)
#Select bands to be used in the NDVI calculation
ndvi_bands = (58,90) #NIR and VIS (Red) bands
#Check the center wavelengths that these bands represent
band_width = wavelengths.value[1]-wavelengths.value[0]
print('band 58 wavelength range: ' + str(round(wavelengths.value[57]-band_width/2,2))
+ '-' + str(round(wavelengths.value[57]+band_width/2,2)) + ' nm')
print('band 90 wavelength range: ' + str(round(wavelengths.value[89]-band_width/2,2))
+ '-' + str(round(wavelengths.value[89]+band_width/2,2)) + ' nm')
#Use the stack_subset_bands function to create a stack of the subsetted red and NIR bands needed to calculate NDVI
ndvi_stack = stack_subset_bands(sercRefl,sercRefl_md,ndvi_bands,clipIndex)
vis = ndvi_stack[:,:,0].astype(float)
nir = ndvi_stack[:,:,1].astype(float)
ndvi = np.divide((nir-vis),(nir+vis))
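Note that `np.divide` yields a warning and inf/NaN values wherever `nir + vis` is zero (e.g. no-data pixels). A defensive variant that leaves such pixels as NaN could be written as follows (a hypothetical helper, not part of the NEON module):

```python
import numpy as np

def ndvi_safe(nir, vis):
    nir = np.asarray(nir, dtype=float)
    vis = np.asarray(vis, dtype=float)
    denom = nir + vis
    out = np.full(denom.shape, np.nan)
    # Only divide where the denominator is non-zero; other pixels stay NaN
    np.divide(nir - vis, denom, out=out, where=denom != 0)
    return out
```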
plot_band_array(ndvi,clipExtent,(0,np.max(ndvi)),\
title='SERC Subset NDVI \n (VIS = Band 58, NIR = Band 90)',cmap_title='NDVI',colormap='seismic')
import copy
ndvi_gtpt6 = copy.copy(ndvi)
ndvi_gtpt6[ndvi<0.6] = np.nan #set all pixels with NDVI < 0.6 to nan, keeping only values > 0.6
print('Mean NDVI > 0.6:',round(np.nanmean(ndvi_gtpt6),2))
plot_band_array(ndvi_gtpt6,clipExtent,(0.6,1),title='SERC Subset NDVI > 0.6 \n (VIS = Band 58, NIR = Band 90)',
cmap_title='NDVI',colormap='RdYlGn')
import numpy.ma as ma
def calculate_mean_masked_spectra(reflArray,ndvi,ndvi_threshold,ineq='>'):
mean_masked_refl = np.zeros(reflArray.shape[2])
for i in np.arange(reflArray.shape[2]):
refl_band = reflArray[:,:,i]
if ineq == '>':
ndvi_mask = ma.masked_where((ndvi<=ndvi_threshold) | (np.isnan(ndvi)),ndvi)
elif ineq == '<':
ndvi_mask = ma.masked_where((ndvi>=ndvi_threshold) | (np.isnan(ndvi)),ndvi)
else:
print('ERROR: Invalid inequality. Enter < or >')
masked_refl = ma.MaskedArray(refl_band,mask=ndvi_mask.mask)
mean_masked_refl[i] = ma.mean(masked_refl)
return mean_masked_refl
sercSpectra_ndvi_gtpt6 = calculate_mean_masked_spectra(sercReflSubset,ndvi,0.6)
sercSpectra_ndvi_ltpt3 = calculate_mean_masked_spectra(sercReflSubset,ndvi,0.3,ineq='<')
import pandas
#Remove water vapor band windows & last 10 bands
w = copy.copy(wavelengths.value)
w[((w >= 1340) & (w <= 1445)) | ((w >= 1790) & (w <= 1955))]=np.nan
w[-10:]=np.nan; # the last 10 bands sometimes have noise - best to eliminate
nan_ind = np.argwhere(np.isnan(w))
sercSpectra_ndvi_gtpt6[nan_ind] = np.nan
sercSpectra_ndvi_ltpt3[nan_ind] = np.nan
#Create dataframe with masked NDVI mean spectra
sercSpectra_ndvi_df = pandas.DataFrame()
sercSpectra_ndvi_df['wavelength'] = w
sercSpectra_ndvi_df['mean_refl_ndvi_gtpt6'] = sercSpectra_ndvi_gtpt6
sercSpectra_ndvi_df['mean_refl_ndvi_ltpt3'] = sercSpectra_ndvi_ltpt3
ax = plt.gca();
sercSpectra_ndvi_df.plot(ax=ax,x='wavelength',y='mean_refl_ndvi_gtpt6',color='green',
edgecolor='none',kind='scatter',label='NDVI > 0.6',legend=True);
sercSpectra_ndvi_df.plot(ax=ax,x='wavelength',y='mean_refl_ndvi_ltpt3',color='red',
edgecolor='none',kind='scatter',label='NDVI < 0.3',legend=True);
ax.set_title('Mean Spectra of Reflectance Masked by NDVI')
ax.set_xlim([np.nanmin(w),np.nanmax(w)]); ax.set_ylim(0,0.45)
ax.set_xlabel("Wavelength, nm"); ax.set_ylabel("Reflectance")
ax.grid('on');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in SERC Flightline & Subset
Step2: Stack NIR and VIS bands
Step3: Calculate NDVI & Plot
Step4: Extract Spectra Using Masks
Step5: Function to calculate the mean spectra for reflectance values thresholed by NDVI using numpy.ma
Step6: We can test out this function for various NDVI thresholds. We'll test two together, and you can try out different values on your own. Let's look at the average spectra for healthy vegetation (NDVI > 0.6), and for a lower threshold (NDVI < 0.3).
|
7,006
|
<ASSISTANT_TASK:>
Python Code:
from fretbursts import *
sns = init_notebook(apionly=True)
# Tweak matplotlib style here
import matplotlib as mpl
mpl.rcParams['font.sans-serif'].insert(0, 'Arial')
mpl.rcParams['font.size'] = 12
%config InlineBackend.figure_format = 'retina'
url = 'http://files.figshare.com/2182601/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5'
download_file(url, save_dir='./data')
full_fname = "./data/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5"
d = loader.photon_hdf5(full_fname)
loader.alex_apply_period(d)
d.calc_bg(bg.exp_fit, time_s=50.1, tail_min_us='auto', F_bg=1.7)
d.burst_search(m=10, computefret=False, ph_sel=Ph_sel(Dex='DAem'))
d.calc_fret(count_ph=True, corrections=False)
ds = d.select_bursts(select_bursts.naa, th1=30, computefret=False)
ds1 = ds.select_bursts(select_bursts.size, th1=30, computefret=False)
ds_FRET = ds1.select_bursts(select_bursts.S, S1=0.25, S2=0.85, computefret=False)
dx=ds_FRET
alex_jointplot(dx)
def bva_sigma_E(n, bursts, DexAem_mask, out=None):
    """Perform BVA analysis computing std.dev. of E for sub-bursts in each burst.

    Split each burst in n-photons chunks (sub-bursts), compute E for each
    sub-burst, then compute std.dev. of E across the sub-bursts.

    For details on BVA see:

    - Torella et al. (2011) Biophys. J. doi.org/10.1016/j.bpj.2011.01.066
    - Ingargiola et al. (2016) bioRxiv, doi.org/10.1101/039198

    Arguments:
        n (int): number of photons in each sub-burst.
        bursts (Bursts object): burst-data object with indexes relative
            to the Dex photon stream.
        DexAem_mask (bool array): mask of A-emitted photons during D-excitation
            periods. It is a boolean array indexing the array of Dex timestamps
            (`Ph_sel(Dex='DAem')`).
        out (None or list): append the result to the passed list. If None,
            creates a new list. This is useful to accumulate data from
            different spots in a single list.

    Returns:
        E_sub_std (list): for each burst, the standard deviation of the
            sub-bursts' FRET efficiencies. Same length as input `bursts`.
    """
E_sub_std = [] if out is None else out
for burst in bursts:
E_sub_bursts = []
startlist = range(burst.istart, burst.istop + 2 - n, n)
stoplist = [i + n for i in startlist]
for start, stop in zip(startlist, stoplist):
A_D = DexAem_mask[start:stop].sum()
assert stop - start == n
E = A_D / n
E_sub_bursts.append(E)
E_sub_std.append(np.std(E_sub_bursts))
return E_sub_std
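A minimal, self-contained check of `bva_sigma_E` on synthetic data. The function body is repeated here so the sketch runs on its own, and `FakeBurst` is a hypothetical stand-in for FRETBursts' burst records (only `istart`/`istop` are used):

```python
import numpy as np
from collections import namedtuple

def bva_sigma_E(n, bursts, DexAem_mask, out=None):
    # Same logic as the notebook's function, repeated so the example
    # runs standalone.
    E_sub_std = [] if out is None else out
    for burst in bursts:
        E_sub = []
        for start in range(burst.istart, burst.istop + 2 - n, n):
            # E of one n-photon sub-burst: fraction of acceptor photons.
            E_sub.append(DexAem_mask[start:start + n].sum() / n)
        E_sub_std.append(np.std(E_sub))
    return E_sub_std

# Stand-in for FRETBursts' burst records.
FakeBurst = namedtuple('FakeBurst', 'istart istop')

# One burst of 14 Dex photons, split into two 7-photon sub-bursts
# with 3 and 5 acceptor photons -> E = 3/7 and 5/7.
mask = np.array([1, 1, 1, 0, 0, 0, 0,
                 1, 1, 1, 1, 1, 0, 0], dtype=bool)
sigma = bva_sigma_E(7, [FakeBurst(istart=0, istop=13)], mask)
```

The population standard deviation of [3/7, 5/7] is 1/7, so a static burst with extra E spread beyond shot noise would stand above the dashed sqrt(E(1-E)/n) curve plotted below.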
ph_d = ds_FRET.get_ph_times(ph_sel=Ph_sel(Dex='DAem'))
bursts = ds_FRET.mburst[0]
bursts_d = bursts.recompute_index_reduce(ph_d)
Dex_mask = ds_FRET.get_ph_mask(ph_sel=Ph_sel(Dex='DAem'))
DexAem_mask = ds_FRET.get_ph_mask(ph_sel=Ph_sel(Dex='Aem'))
DexAem_mask_d = DexAem_mask[Dex_mask]
n = 7
E_sub_std = bva_sigma_E(n, bursts_d, DexAem_mask_d)
plt.figure(figsize=(4.5, 4.5))
x = np.arange(0,1.01,0.01)
y = np.sqrt((x*(1-x))/n)
plt.plot(x, y, lw=2, color='k', ls='--')
im = sns.kdeplot(ds_FRET.E[0], np.asfarray(E_sub_std),
shade=True, cmap='Spectral_r', shade_lowest=False, n_levels=20)
plt.xlim(0,1)
plt.ylim(0,np.sqrt(0.5**2/7)*2)
plt.xlabel('E', fontsize=16)
plt.ylabel(r'$\sigma_i$', fontsize=16);
plt.text(0.05, 0.95, 'BVA', va='top', fontsize=22, transform=plt.gca().transAxes)
plt.text(0.95, 0.95, '# Bursts: %d' % ds_FRET.num_bursts,
va='top', ha='right', transform=plt.gca().transAxes)
plt.savefig('BVA.png', bbox_inches='tight', dpi=200, transparent=False)
x, y = ds_FRET.E[0], np.asfarray(E_sub_std)
hist_kws = dict(edgecolor='k', linewidth=0.2,
facecolor=sns.color_palette('Spectral_r', 100)[10])
g = sns.JointGrid(x=x, y=y, ratio=3)
g.plot_joint(sns.kdeplot, cmap='Spectral_r', shade=True, shade_lowest=False, n_levels=20)
g.ax_marg_x.hist(x, bins=np.arange(-0.2, 1.2, 0.025), **hist_kws)
g.ax_marg_y.hist(y, bins=50, orientation="horizontal", **hist_kws)
x1 = np.arange(0,1.01,0.01)
y1 = np.sqrt((x1*(1-x1))/n)
plt.plot(x1, y1, lw=2, color='k', ls='--')
g.ax_joint.set_xlim(0,1)
g.ax_joint.set_ylim(0,np.sqrt(0.5**2/7)*2)
g.ax_joint.set_xlabel('E', fontsize=16)
g.ax_joint.set_ylabel(r'$\sigma_i$', fontsize=16);
g.ax_joint.text(0.05, 0.95, 'BVA', va='top', fontsize=22, transform=g.ax_joint.transAxes)
g.ax_joint.text(0.95, 0.95, '# Bursts: %d' % ds_FRET.num_bursts,
va='top', ha='right', transform=g.ax_joint.transAxes)
plt.savefig('BVA_joint.png', bbox_inches='tight', dpi=200, transparent=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Data
Step3: Burst Variance Analysis
Step4: Next we prepare the data for BVA
Step5: and call the bva_sigma_E function
Step6: Finally, we make a KDE plot of the 2D distribution E_sub_std versus the burst FRET efficiency
|
7,007
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'awi', 'sandbox-2', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
7,008
|
<ASSISTANT_TASK:>
Python Code:
numero_entero = 5 # Assign the number 5 to the variable numero_entero
print numero_entero # Print the value held by the variable numero_entero
print type(numero_entero) # Print the type of the variable numero_entero
numero_muy_grande = 9223372036854775807
print numero_muy_grande
print type(numero_entero)
numero_muy_grande += 1
print numero_muy_grande
print type(numero_muy_grande)
numero_real = 7.5
print numero_real
print type(numero_real)
print numero_entero + numero_real
print type(numero_entero + numero_real)
dividendo = 5
divisor = 3
resultado = dividendo / divisor
print resultado
print type(resultado)
divisor = 3.0
resultado = dividendo / divisor
print resultado
print type(resultado)
cociente = dividendo // divisor
print "cociente: ", cociente
print type(cociente)
resto = dividendo % divisor
print "resto: ", resto
print type(resto)
complejo = 5 + 3j
print complejo
print type(complejo)
complejo_cuadrado = complejo ** 2
print '(5+3j)*(5+3j) = 5*5 + 5*3j + 3j*5 + 3j*3j = (25-9) + 30j'
print complejo_cuadrado
boolean = True
print boolean
print not boolean
print type(boolean)
print True or False and True
boolean = 5 < 7
print boolean
numero = 7
en_rango = 5 < numero < 9
fuera_de_rango = 5 < numero < 6
print 'numero vale {0}'.format(numero)
print '5 < numero < 9: {pertenece}'.format(pertenece=en_rango)
print '5 < numero < 6: %s' % fuera_de_rango
print " %d %i %s %ld %lu %0.4d %4d" % (5, 5, 5, 5, 5, 5, 5)
cadena_caracteres = 'Hola mundo'
print cadena_caracteres
print type(cadena_caracteres)
cadena_caracteres = "Y con doble comilla?, de qué tipo es?"
print cadena_caracteres
print type(cadena_caracteres)
cadena_caracteres = '''y si quiero
usar un string
que se escriba en varias
líneas?.'''
print cadena_caracteres
print type(cadena_caracteres)
cadena_caracteres = 'Hola mundo'
print cadena_caracteres
print 'El septimo caracter de la cadena "{0}" es "{1}"'.format(cadena_caracteres, cadena_caracteres[6])
print 'El septimo caracter de la cadena "{0}" es "{1}"'.format(cadena_caracteres, cadena_caracteres[-4])
cadena_caracteres[6] = 'x' # Raises TypeError: Python strings are immutable
print cadena_caracteres[2:8] # With both indexes positive
print cadena_caracteres[2:-2] # With one negative and one positive index
print cadena_caracteres[-8:8] # With one negative and one positive index
print cadena_caracteres[-8:-2] # With both indexes negative
print cadena_caracteres[2:-2:3] # And skipping two elements at a time
cadena_caracteres = 'Hola mundo\n'
print cadena_caracteres
print cadena_caracteres[:-1]
print cadena_caracteres[:-5]
numero = raw_input('Ingrese un número')
print numero
print type(numero)
numero = int(numero)
print numero
print type(numero)
numero1 = 1
numero2 = 2
if numero1 == numero2:
print 'Los números son iguales'
print 'Este string se imprime siempre'
print 'Ahora cambio el valor de numero2'
numero2 = 1
if numero1 == numero2:
print 'Los números son iguales'
print 'Este string se imprime siempre'
numero1 = 1
numero2 = 2
if numero1 == numero2:
print 'Los números son iguales'
print 132
print 23424
else:
print 'Los números son distintos'
# As we would have to do it in Pascal or C.
if numero1 == numero2:
print 'Los dos números son iguales'
else:
if numero1 > numero2:
print 'numero1 es mayor a numero2'
else:
print 'numero1 es menor a numero2'
# Shorter and more elegant in Python.
if numero1 == numero2:
print 'Los dos números son iguales'
elif numero1 > numero2:
print 'numero1 es mayor a numero2'
else:
print 'numero1 es menor a numero2'
lista_de_numeros = []
if lista_de_numeros:
print 'la lista tiene elementos'
else:
print 'la lista no tiene elementos'
if lista_de_numeros:
print 'La lista no esta vacía'
if False or None or [] or () or {} or 0:
print 'Alguna de las anteriores no era falsa'
else:
print 'Todos los valores anteriores son consideradas como Falso'
num = 5
es_par = True if (num % 2 == 0) else False
print '5 es par?:', es_par
num = 6
es_par = True if (num % 2 == 0) else False
print '6 es par?:', es_par
nulo = None
print nulo
print type(nulo)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now, what happens when that integer grows very large? For example, if we assign it 9223372036854775807
Step2: And what if we now add 1 to it?
Step3: Real numbers (floats)
Step4: And what happens if we add a float to an integer?
Step5: Operations between floats and integers
Step6: On the other hand, if one of the numbers is a float
Step7: And what if we want integer division even when one of the numbers is a float?
Step8: This changes in Python 3, where / performs true division (even if you pass it two integers) and // performs integer division.
Step9: Although Python supports complex-number arithmetic, it is not one of the most commonly used data types. Still, it is good to know it exists.
Step10: A boolean can also be created by comparing two numbers
Step11: You can even easily check whether a number falls within a range or not.
Step12: Many ways to print the number 5
Step13: Strings
Step15: In addition, multiline strings can be built by enclosing the text in three consecutive single or double quotes
Step16: Indexes and slices in strings
Step17: H | o | l | a | | m | u | n | d | o
Step18: H | o | l | a | | m | u | n | d | o
Step19: Although sometimes what we want is only a part of the string, not all of it
Step20: The most common case is removing the last character, for example, when it is an Enter (newline)
Step21: Reading input from the keyboard
Step22: And to convert it to an integer
Step23: None
Step24: if-else
Step25: if-elif-else
Step26: In Python, however, we can write it a bit more compactly and clearly
Step27: As we said before, the in operator can be used to check whether an element is in a list
Step28: Any data type can be evaluated as a boolean.
Step29: short-if
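The division and short-if behavior covered in the steps above can be sketched in Python 3. The tutorial's cells target Python 2, so this is a hedged translation of its examples, not the original code:

```python
dividendo, divisor = 5, 3

real = dividendo / divisor       # true division in Python 3: 1.666...
cociente = dividendo // divisor  # integer (floor) division: 1
resto = dividendo % divisor      # remainder: 2

# Conditional expression ("short-if"), as in the tutorial:
num = 6
es_par = True if num % 2 == 0 else False
# ...which idiomatic Python shortens to a plain boolean expression:
es_par_idiomatico = (num % 2 == 0)
```

Note that in Python 2 the same `dividendo / divisor` with two integers would have returned the integer quotient instead of a float.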
|
7,009
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import time
import PIL.Image as Image
import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_hub as hub
import datetime
%load_ext tensorboard
mobilenet_v2 ="https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4"
inception_v3 = "https://tfhub.dev/google/imagenet/inception_v3/classification/5"
classifier_model = mobilenet_v2 #@param ["mobilenet_v2", "inception_v3"] {type:"raw"}
IMAGE_SHAPE = (224, 224)
classifier = tf.keras.Sequential([
hub.KerasLayer(classifier_model, input_shape=IMAGE_SHAPE+(3,))
])
grace_hopper = tf.keras.utils.get_file('image.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg')
grace_hopper = Image.open(grace_hopper).resize(IMAGE_SHAPE)
grace_hopper
grace_hopper = np.array(grace_hopper)/255.0
grace_hopper.shape
result = classifier.predict(grace_hopper[np.newaxis, ...])
result.shape
predicted_class = tf.math.argmax(result[0], axis=-1)
predicted_class
labels_path = tf.keras.utils.get_file('ImageNetLabels.txt','https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
imagenet_labels = np.array(open(labels_path).read().splitlines())
plt.imshow(grace_hopper)
plt.axis('off')
predicted_class_name = imagenet_labels[predicted_class]
_ = plt.title("Prediction: " + predicted_class_name.title())
data_root = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
batch_size = 32
img_height = 224
img_width = 224
train_ds = tf.keras.utils.image_dataset_from_directory(
str(data_root),
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size
)
val_ds = tf.keras.utils.image_dataset_from_directory(
str(data_root),
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size
)
class_names = np.array(train_ds.class_names)
print(class_names)
normalization_layer = tf.keras.layers.Rescaling(1./255)
train_ds = train_ds.map(lambda x, y: (normalization_layer(x), y)) # Where x—images, y—labels.
val_ds = val_ds.map(lambda x, y: (normalization_layer(x), y)) # Where x—images, y—labels.
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
for image_batch, labels_batch in train_ds:
print(image_batch.shape)
print(labels_batch.shape)
break
result_batch = classifier.predict(train_ds)
predicted_class_names = imagenet_labels[tf.math.argmax(result_batch, axis=-1)]
predicted_class_names
plt.figure(figsize=(10,9))
plt.subplots_adjust(hspace=0.5)
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
plt.title(predicted_class_names[n])
plt.axis('off')
_ = plt.suptitle("ImageNet predictions")
mobilenet_v2 = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
inception_v3 = "https://tfhub.dev/google/tf2-preview/inception_v3/feature_vector/4"
feature_extractor_model = mobilenet_v2 #@param ["mobilenet_v2", "inception_v3"] {type:"raw"}
feature_extractor_layer = hub.KerasLayer(
feature_extractor_model,
input_shape=(224, 224, 3),
trainable=False)
feature_batch = feature_extractor_layer(image_batch)
print(feature_batch.shape)
num_classes = len(class_names)
model = tf.keras.Sequential([
feature_extractor_layer,
tf.keras.layers.Dense(num_classes)
])
model.summary()
predictions = model(image_batch)
predictions.shape
model.compile(
optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['acc'])
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=log_dir,
histogram_freq=1) # Enable histogram computation for every epoch.
NUM_EPOCHS = 10
history = model.fit(train_ds,
validation_data=val_ds,
epochs=NUM_EPOCHS,
callbacks=tensorboard_callback)
%tensorboard --logdir logs/fit
predicted_batch = model.predict(image_batch)
predicted_id = tf.math.argmax(predicted_batch, axis=-1)
predicted_label_batch = class_names[predicted_id]
print(predicted_label_batch)
plt.figure(figsize=(10,9))
plt.subplots_adjust(hspace=0.5)
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
plt.title(predicted_label_batch[n].title())
plt.axis('off')
_ = plt.suptitle("Model predictions")
t = time.time()
export_path = "/tmp/saved_models/{}".format(int(t))
model.save(export_path)
export_path
reloaded = tf.keras.models.load_model(export_path)
result_batch = model.predict(image_batch)
reloaded_result_batch = reloaded.predict(image_batch)
abs(reloaded_result_batch - result_batch).max()
reloaded_predicted_id = tf.math.argmax(reloaded_result_batch, axis=-1)
reloaded_predicted_label_batch = class_names[reloaded_predicted_id]
print(reloaded_predicted_label_batch)
plt.figure(figsize=(10,9))
plt.subplots_adjust(hspace=0.5)
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch[n])
plt.title(reloaded_predicted_label_batch[n].title())
plt.axis('off')
_ = plt.suptitle("Model predictions")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Transfer learning with TensorFlow Hub
Step2: An ImageNet classifier
Step3: Run it on a single image
Step4: Add a batch dimension and pass the image to the model.
Step5: The result is a 1001-element vector of logits, rating the probability of each class for the image.
Step6: Decode the predictions
Step7: Simple transfer learning
Step8: First, load this data into the model using the image data off disk with tf.keras.utils.image_dataset_from_directory, which generates a tf.data.Dataset.
Step9: The flowers dataset has five classes.
Step10: Second, because TensorFlow Hub's convention for image models is to expect float inputs in the [0, 1] range, use the tf.keras.layers.Rescaling preprocessing layer to achieve this.
Step11: Third, finish the input pipeline by using buffered prefetching with Dataset.prefetch, so you can yield the data from disk without I/O blocking issues.
Step12: Run the classifier on a batch of images
Step13: Check how well these predictions line up with the images.
Step14: Note
Step15: Wrap the pre-trained model as a Keras layer with hub.KerasLayer to create the feature extractor. Use the trainable=False argument to freeze the variables, so that training only modifies the new classifier layer.
Step16: The feature extractor returns a 1280-long vector for each image (the image batch size remains 32 in this example).
Step17: Attach a classification head
Step18: Train the model
Step19: Then, use the Model.fit method to train the model.
Step20: To visualize how the metrics change with each epoch, and to track other scalar values, start TensorBoard.
Step21: <!-- <img class="tfo-display-only-on-site" src="https
Step22: Plot the results
Step23: Export the model
Step24: Confirm that you can reload the SavedModel and that the model outputs the same result.
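Step 5 above notes that the classifier returns a 1001-element vector of logits; a hedged sketch of decoding such a vector into class probabilities with a softmax, using a toy 4-class vector in place of the real 1001-element output:

```python
import numpy as np

# Toy logits standing in for the model's 1001-element output.
logits = np.array([2.0, 1.0, 0.1, -1.0])

# Numerically stable softmax: shift by the max before exponentiating.
shifted = logits - logits.max()
probs = np.exp(shifted) / np.exp(shifted).sum()

# The predicted class is the index of the largest probability,
# which would then be looked up in the label file.
predicted = int(np.argmax(probs))
print(predicted, probs.round(3))
```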
|
7,010
|
<ASSISTANT_TASK:>
Python Code:
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
import functools
import itertools
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_hub as hub
from tqdm import trange
# Use the inner product between two embedding vectors as the similarity measure
def plot_correlation(labels, features):
corr = np.inner(features, features)
corr /= np.max(corr)
sns.heatmap(corr, xticklabels=labels, yticklabels=labels)
# Generate embeddings for some terms
queries = [
# Related viruses
'coronavirus', 'SARS', 'MERS',
# Regions
'Italy', 'Spain', 'Europe',
# Symptoms
'cough', 'fever', 'throat'
]
module = hub.load('https://tfhub.dev/tensorflow/cord-19/swivel-128d/3')
embeddings = module(queries)
plot_correlation(queries, embeddings)
builder = tfds.builder(name='scicite')
builder.download_and_prepare()
train_data, validation_data, test_data = builder.as_dataset(
split=('train', 'validation', 'test'),
as_supervised=True)
#@title Let's take a look at a few labeled examples from the training set
NUM_EXAMPLES = 10#@param {type:"integer"}
TEXT_FEATURE_NAME = builder.info.supervised_keys[0]
LABEL_NAME = builder.info.supervised_keys[1]
def label2str(numeric_label):
m = builder.info.features[LABEL_NAME].names
return m[numeric_label]
data = next(iter(train_data.batch(NUM_EXAMPLES)))
pd.DataFrame({
TEXT_FEATURE_NAME: [ex.numpy().decode('utf8') for ex in data[0]],
LABEL_NAME: [label2str(x) for x in data[1]]
})
#@title Hyperparameters { run: "auto" }
EMBEDDING = 'https://tfhub.dev/tensorflow/cord-19/swivel-128d/3' #@param {type: "string"}
TRAINABLE_MODULE = False #@param {type: "boolean"}
hub_layer = hub.KerasLayer(EMBEDDING, input_shape=[],
dtype=tf.string, trainable=TRAINABLE_MODULE)
model = tf.keras.Sequential()
model.add(hub_layer)
model.add(tf.keras.layers.Dense(3))
model.summary()
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
EPOCHS = 35#@param {type: "integer"}
BATCH_SIZE = 32#@param {type: "integer"}
history = model.fit(train_data.shuffle(10000).batch(BATCH_SIZE),
epochs=EPOCHS,
validation_data=validation_data.batch(BATCH_SIZE),
verbose=1)
from matplotlib import pyplot as plt
def display_training_curves(training, validation, title, subplot):
if subplot%10==1: # set up the subplots on the first call
plt.subplots(figsize=(10,10), facecolor='#F0F0F0')
plt.tight_layout()
ax = plt.subplot(subplot)
ax.set_facecolor('#F8F8F8')
ax.plot(training)
ax.plot(validation)
ax.set_title('model '+ title)
ax.set_ylabel(title)
ax.set_xlabel('epoch')
ax.legend(['train', 'valid.'])
display_training_curves(history.history['accuracy'], history.history['val_accuracy'], 'accuracy', 211)
display_training_curves(history.history['loss'], history.history['val_loss'], 'loss', 212)
results = model.evaluate(test_data.batch(512), verbose=2)
for name, value in zip(model.metrics_names, results):
print('%s: %.3f' % (name, value))
prediction_dataset = next(iter(test_data.batch(20)))
prediction_texts = [ex.numpy().decode('utf8') for ex in prediction_dataset[0]]
prediction_labels = [label2str(x) for x in prediction_dataset[1]]
predictions = [
label2str(x) for x in np.argmax(model.predict(prediction_texts), axis=-1)]
pd.DataFrame({
TEXT_FEATURE_NAME: prediction_texts,
LABEL_NAME: prediction_labels,
'prediction': predictions
})
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Explore the TF-Hub CORD-19 Swivel embeddings
Step2: Analyze the embeddings
Step3: We can see that the embeddings successfully captured the meaning of the different terms. Each word is similar to the other words in its cluster (i.e., "coronavirus" highly correlates with "SARS" and "MERS"), while different from the terms of other clusters (i.e., the similarity between "SARS" and "Spain" is close to 0).
Step4: Train a citation intent classifier
Step5: Train and evaluate the model
Step6: Evaluate the model
Step7: In particular, we can see that the loss quickly decreases while the accuracy rapidly increases. Let's plot some examples to check how the predictions relate to the true labels.
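The plot_correlation helper above uses the inner product between embedding vectors as the similarity measure; a small sketch with made-up 3-d vectors (the real CORD-19 Swivel module returns 128-d embeddings):

```python
import numpy as np

# Made-up embeddings for three terms; directions chosen so the
# first two are similar and the third is orthogonal to them.
features = np.array([
    [1.0, 0.0, 0.0],   # e.g. "coronavirus"
    [0.8, 0.6, 0.0],   # e.g. "SARS"
    [0.0, 0.0, 1.0],   # e.g. "Spain"
])

# Pairwise inner products, normalized by the largest entry,
# mirroring the computation in plot_correlation.
corr = np.inner(features, features)
corr /= np.max(corr)

print(corr.round(2))
```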
|
7,011
|
<ASSISTANT_TASK:>
Python Code:
#python dom extension functions to get class and other attributes
def getAttr(dom,cl,attr='class',el='div'):
toreturn=[]
divs=dom.getElementsByTagName(el)
for div in divs:
clarray=div.getAttribute(attr).split(' ')
for cli in clarray:
if cli==cl: toreturn.append(div)
if toreturn!=[]: return toreturn
else: return None
#open first page, parse html, get number of pages and their links
import html5lib
import urllib2
url="http://www.research.lancs.ac.uk/portal/en/organisations/energy-lancaster/publications.html"
aResp = urllib2.urlopen(url)
t = aResp.read()
dom = html5lib.parse(t, treebuilder="dom")
links=getAttr(dom,'portal_navigator_paging',el='span')[0].childNodes
nr_of_pages=int([i for i in links if i.nodeType==1][::-1][0].childNodes[0].childNodes[0].nodeValue)-1
#create publist array
publist=[]
#parse publications links on all pages
for pagenr in range(nr_of_pages):
aResp = urllib2.urlopen(url+'?page='+str(pagenr))
t = aResp.read()
dom = html5lib.parse(t, treebuilder="dom")
#get html list
htmlpublist=dom.getElementsByTagName('ol')
#extract pub links
for i in htmlpublist[0].childNodes:
if i.nodeType==1:
if i.childNodes[0].nodeType==1:
j=i.childNodes[1].childNodes[0].childNodes[0]
if j.nodeType==1:
publist.append(j.getAttribute('href'))
print 'finished page',pagenr
print len(publist),'publications associated with Energy Lancaster'
#create dictionary
pubdict={i:{"url":i} for i in publist}
for r in range(len(publist)):
pub=publist[r]
aResp = urllib2.urlopen(pub)
t = aResp.read()
dom = html5lib.parse(t, treebuilder="dom")
#get keywords from pub page
keywords=getAttr(dom,'keywords',el='ul')
if keywords:
pubdict[pub]['keywords']=[i.childNodes[0].childNodes[0].nodeValue for i in keywords[0].getElementsByTagName('a')]
#get title from pub page
title=getAttr(dom,'title',el='h2')
if title:
pubdict[pub]['title']=title[0].childNodes[0].childNodes[0].nodeValue
abstract=getAttr(dom,'rendering_researchoutput_abstractportal',el='div')
if abstract:
pubdict[pub]['abstract']=abstract[0].childNodes[0].childNodes[0].nodeValue
if r%10==0: print 'processed',r,'publications...'
#save parsed data
import json
file('pubdict.json','w').write(json.dumps(pubdict))
#load if saved previously
#pubdict=json.loads(file('pubdict.json','r').read())
#import dependencies
import pandas as pd
from textblob import TextBlob
#spacy's nlp pipeline is used for noun-chunk extraction below
import spacy
nlp = spacy.load('en')
#run once if you need to download nltk corpora, igonre otherwise
import nltk
nltk.download()
#get topical nouns for title and abstract using natural language processing
for i in range(len(pubdict.keys())):
if 'title' in pubdict[pubdict.keys()[i]]:
        text=pubdict[pubdict.keys()[i]]['title']
        if text:
#get topical nouns with textblob
blob1 = TextBlob(text)
keywords1=blob1.noun_phrases
#get topical nouns with spacy
blob2 = nlp(text)
keywords2=[]
for k in blob2.noun_chunks:
keywords2.append(str(k).decode('utf8').replace(u'\n',' '))
#create unified, unique set of topical nouns, called keywords here
keywords=list(set(keywords2).union(set(keywords1)))
pubdict[pubdict.keys()[i]]['title-nlp']=keywords
if 'abstract' in pubdict[pubdict.keys()[i]]:
text=pubdict[pubdict.keys()[i]]['abstract']
if text:
#get topical nouns with textblob
blob1 = TextBlob(text)
keywords1=blob1.noun_phrases
#get topical nouns with spacy
blob2 = nlp(text)
keywords2=[]
for k in blob2.noun_chunks:
keywords2.append(str(k).decode('utf8').replace(u'\n',' '))
#create unified, unique set of topical nouns, called keywords here
keywords=list(set(keywords2).union(set(keywords1)))
pubdict[pubdict.keys()[i]]['abstract-nlp']=keywords
print i,',',
#save parsed data
file('pubdict2.json','w').write(json.dumps(pubdict))
#load if saved previously
#pubdict=json.loads(file('pubdict2.json','r').read())
keywords=[j for i in pubdict if 'keywords' in pubdict[i] if pubdict[i]['keywords'] for j in pubdict[i]['keywords']]
titles=[pubdict[i]['title'] for i in pubdict if 'title' in pubdict[i] if pubdict[i]['title']]
abstracts=[pubdict[i]['abstract'] for i in pubdict if 'abstract' in pubdict[i] if pubdict[i]['abstract']]
title_nlp=[j for i in pubdict if 'title-nlp' in pubdict[i] if pubdict[i]['title-nlp'] for j in pubdict[i]['title-nlp']]
abstract_nlp=[j for i in pubdict if 'abstract-nlp' in pubdict[i] if pubdict[i]['abstract-nlp'] for j in pubdict[i]['abstract-nlp']]
kt=keywords+titles
kta=kt+abstracts
kt_nlp=keywords+title_nlp
kta_nlp=kt+abstract_nlp
file('keywords.json','w').write(json.dumps(keywords))
file('titles.json','w').write(json.dumps(titles))
file('abstracts.json','w').write(json.dumps(abstracts))
file('kt.json','w').write(json.dumps(kt))
file('kta.json','w').write(json.dumps(kta))
file('kt_nlp.json','w').write(json.dumps(kt_nlp))
file('kta_nlp.json','w').write(json.dumps(kta_nlp))
import re
def convert(name):
s1 = re.sub('(.)([A-Z][a-z]+)', r'\1 \2', name)
return re.sub('([a-z0-9])([A-Z])', r'\1 \2', s1).lower()
kc=[convert(i) for i in keywords]
file('kc.json','w').write(json.dumps(kc))
ks=[j for i in kc for j in i.split()]
file('ks.json','w').write(json.dumps(ks))
ktc_nlp=[convert(i) for i in kt_nlp]
file('ktc_nlp.json','w').write(json.dumps(ktc_nlp))
kts_nlp=[j for i in ktc_nlp for j in i.split()]
file('kts_nlp.json','w').write(json.dumps(kts_nlp))
ktac_nlp=[convert(i) for i in kta_nlp]
file('ktac_nlp.json','w').write(json.dumps(ktac_nlp))
ktas_nlp=[j for i in ktac_nlp for j in i.split()]
file('ktas_nlp.json','w').write(json.dumps(ktas_nlp))
for topic_id in range(1,len(topics)):
#select topic
#topic_id=1
#use title
usetitle=True
verbose=False
#initiate global DFs
DF=pd.DataFrame()
projects1={}
projects2={}
projects12={}
#specify depth (n most relevant projects)
depth=100
#get topical nouns with textblob
blob1 = TextBlob(topics[topic_id].decode('utf8'))
keywords1=blob1.noun_phrases
#get topical nouns with spacy
blob2 = nlp(topics[topic_id].decode('utf8'))
keywords2=[]
for i in blob2.noun_chunks:
keywords2.append(str(i).replace(u'\n',' '))
#create unified, unique set of topical nouns, called keywords here
keywords=list(set(keywords2).union(set(keywords1)))
print '----- started processing topic ', topic_id,'-----'
print 'topic keywords are:',
for keyword in keywords: print keyword+', ',
print ' '
#construct search query from title and keywords, the cycle through the keywords
for keyword in keywords:
if usetitle:
if verbose: print 'query for <'+title+keyword+'>'
query=repr(title+keyword).replace(' ','+')[2:-1]
u0='http://gtr.rcuk.ac.uk/search/project/csv?term='
u1='&selectedFacets=&fields='
u2='pro.gr,pro.t,pro.a,pro.orcidId,per.fn,per.on,per.sn,'
u3='per.fnsn,per.orcidId,per.org.n,per.pro.t,per.pro.abs,pub.t,pub.a,pub.orcidId,org.n,org.orcidId,'
u4='acp.t,acp.d,acp.i,acp.oid,kf.d,kf.oid,is.t,is.d,is.oid,col.i,col.d,col.c,col.dept,col.org,col.pc,col.pic,'
u5='col.oid,ip.t,ip.d,ip.i,ip.oid,pol.i,pol.gt,pol.in,pol.oid,prod.t,prod.d,prod.i,prod.oid,rtp.t,rtp.d,rtp.i,'
u6='rtp.oid,rdm.t,rdm.d,rdm.i,rdm.oid,stp.t,stp.d,stp.i,stp.oid,so.t,so.d,so.cn,so.i,so.oid,ff.t,ff.d,ff.c,'
u7='ff.org,ff.dept,ff.oid,dis.t,dis.d,dis.i,dis.oid'
u8='&type=&fetchSize=50'
u9='&selectedSortableField=score&selectedSortOrder=DESC'
url=u0+query+u8+u9
#query RCUK GtR API
df=pd.read_csv(url,nrows=depth)
#record scores
df['score'] = depth-df.index
df=df.set_index('ProjectReference')
DF=pd.concat([DF,df])
for i in df.index:
if i not in projects12:projects12[i]=0
projects12[i]+=df.loc[i]['score']**2
if i not in projects1:projects1[i]=0
projects1[i]+=df.loc[i]['score']**2
if verbose: print 'query for <'+keyword+'>'
query=repr(keyword).replace(' ','+')[2:-1]
u0='http://gtr.rcuk.ac.uk/search/project/csv?term='
u1='&selectedFacets=&fields='
u2='pro.gr,pro.t,pro.a,pro.orcidId,per.fn,per.on,per.sn,'
u3='per.fnsn,per.orcidId,per.org.n,per.pro.t,per.pro.abs,pub.t,pub.a,pub.orcidId,org.n,org.orcidId,'
u4='acp.t,acp.d,acp.i,acp.oid,kf.d,kf.oid,is.t,is.d,is.oid,col.i,col.d,col.c,col.dept,col.org,col.pc,col.pic,'
u5='col.oid,ip.t,ip.d,ip.i,ip.oid,pol.i,pol.gt,pol.in,pol.oid,prod.t,prod.d,prod.i,prod.oid,rtp.t,rtp.d,rtp.i,'
u6='rtp.oid,rdm.t,rdm.d,rdm.i,rdm.oid,stp.t,stp.d,stp.i,stp.oid,so.t,so.d,so.cn,so.i,so.oid,ff.t,ff.d,ff.c,'
u7='ff.org,ff.dept,ff.oid,dis.t,dis.d,dis.i,dis.oid'
u8='&type=&fetchSize=50'
u9='&selectedSortableField=score&selectedSortOrder=DESC'
url=u0+query+u8+u9
#query RCUK GtR API
df=pd.read_csv(url,nrows=depth)
#record scores
df['score'] = depth-df.index
df=df.set_index('ProjectReference')
DF=pd.concat([DF,df])
for i in df.index:
if i not in projects12:projects12[i]=0
projects12[i]+=df.loc[i]['score']**2
if i not in projects2:projects2[i]=0
projects2[i]+=df.loc[i]['score']**2
print '----- finished topic ', topic_id,'-----'
print ' '
###### SORTING #######
#select top projects
#sort project vectors
top=30
import operator
sorted_projects1=sorted(projects1.items(), key=operator.itemgetter(1))[::-1][:30]
sorted_projects2=sorted(projects2.items(), key=operator.itemgetter(1))[::-1][:30]
sorted_projects12=sorted(projects12.items(), key=operator.itemgetter(1))[::-1][:30]
#record scores in sorted vector in a master vector
projects={}
for i in range(len(sorted_projects1)):
if sorted_projects1[i][0] not in projects:projects[sorted_projects1[i][0]]=0
projects[sorted_projects1[i][0]]+=(top-i)**2
for i in range(len(sorted_projects2)):
if sorted_projects2[i][0] not in projects:projects[sorted_projects2[i][0]]=0
projects[sorted_projects2[i][0]]+=(top-i)**2
for i in range(len(sorted_projects12)):
if sorted_projects12[i][0] not in projects:projects[sorted_projects12[i][0]]=0
projects[sorted_projects12[i][0]]+=(top-i)**2
#save final vector of most relevant projects
sorted_projects=sorted(projects.items(), key=operator.itemgetter(1))[::-1][:30]
###### DISPLAY ########
#print resulting links to projects
for i in range(len(sorted_projects)):
print str(i+1)+'.',DF.loc[sorted_projects[i][0]][u'GTRProjectUrl'].values[0],\
DF.loc[sorted_projects[i][0]][u'PIFirstName'].values[0],\
DF.loc[sorted_projects[i][0]][u'PISurname'].values[0]+'|',\
DF.loc[sorted_projects[i][0]][u'LeadROName'].values[0]+'|',\
DF.loc[sorted_projects[i][0]][u'StartDate'].values[0][6:]+'-'+\
DF.loc[sorted_projects[i][0]][u'EndDate'].values[0][6:]+'|',\
str(int(DF.loc[sorted_projects[i][0]][u'AwardPounds'].values[0])/1000)+'k'
print DF.loc[sorted_projects[i][0]][u'Title'].values[0]+'\n'
#print '----------------------------------------------------'
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get number of pages for publications
Step2: Extract links to publications, from all pages
Step3: Keyword extraction, for each publication
Step4: Mine titles and abstracts for topics
Step5: Save output for D3 word cloud
Step6: Having constructed three project score vectors (without title, with title, both), we sort the projects based on high scores. These are the best matching research projects. We display a link to them below. Repeat for each topic.
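The score sorting described in Step 6 can be sketched in isolation; the project references and scores here are made up:

```python
import operator

# Hypothetical accumulated relevance scores per project reference.
projects = {'EP/A1': 900, 'EP/B2': 2500, 'EP/C3': 100}

# Sort by score, highest first, and keep the top entries --
# the same pattern used to build sorted_projects above.
top = 2
sorted_projects = sorted(projects.items(),
                         key=operator.itemgetter(1))[::-1][:top]
print(sorted_projects)
```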
|
7,012
|
<ASSISTANT_TASK:>
Python Code:
import requests
response = requests.get("https://api.forecast.io/forecast/5afc9217d7eea82824254c951b1b57f4/-12.0561,-77.0268")
weather_Lima = response.json()
weather_Lima.keys()
print(weather_Lima['timezone'])
print("Longitude:", weather_Lima['longitude'], "Latitude:", weather_Lima['latitude'])
weather_Lima['currently'].keys()
Lima_windspeed = weather_Lima['currently']['windSpeed']
print("The wind in Lima is currently blowing at", Lima_windspeed, "mph")
weather = weather_Lima['currently']
Temperature = int(weather['apparentTemperature']) - int(weather['temperature'])
if Temperature == 0:
print("The temperature feels exactly as expected:", weather['temperature'], "degrees Fahrenheit")
elif Temperature > 0:
print("It feels", Temperature, "degrees Fahrenheit warmer than the actual temperature:", weather['temperature'], "degrees Fahrenheit")
else:
    print("It feels", abs(Temperature), "degrees Fahrenheit colder than the actual temperature:", weather['temperature'], "degrees Fahrenheit")
weather_Lima['daily'].keys()
weather_Lima['daily']['data']
for moon in weather_Lima['daily']['data']:
moon_forecast = moon['moonPhase']
break
if moon_forecast == 0:
print("Tomorrow is New Moon.")
elif moon_forecast > .75:
print("Tomorrow the Moon is in a Waning Crescent phase.")
elif moon_forecast == .75:
print("Tomorrow is last quarter Moon.")
elif moon_forecast > .5:
    print("Tomorrow the Moon is in a Waning Gibbous phase.")
elif moon_forecast == .5:
    print("Tomorrow is Full Moon.")
elif moon_forecast > .25:
    print("Tomorrow the Moon is in a Waxing Gibbous phase.")
elif moon_forecast == .25:
    print("Tomorrow is first Quarter Moon.")
elif moon_forecast > 0:
    print("Tomorrow the Moon is in a Waxing Crescent phase. This is the first phase after New Moon.")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: I have chosen Lima, Peru, the city where I was born.
Step2: 2) What's the current wind speed? How much warmer does it feel than it actually is?
Step3: 3) The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?
|
7,013
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
% matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import numpy as np
from thinkbayes2 import Hist, Pmf, Cdf, Suite, Beta
import thinkplot
prior = Beta(2, 3)
thinkplot.Pdf(prior.MakePmf())
prior.Mean()
posterior = Beta(2, 3)
posterior.Update((2, 8))
posterior.MAP()
from scipy import stats
class AlienBlaster(Suite):
def Likelihood(self, data, hypo):
        """Computes the likelihood of data under hypo.

        data: number of shots they took
        hypo: probability of a hit, p
        """
x = hypo
# specific version for n=2 shots
likes = [x**4, (1-x)**4, (2*x*(1-x))**2]
# general version for any n shots
likes = [stats.binom.pmf(k, n, x)**2 for k in range(n+1)]
return np.sum(likes)
pmf = Beta(1, 1).MakePmf()
blaster = AlienBlaster(pmf)
blaster.Update(2)
thinkplot.Pdf(blaster)
pmf = Beta(2, 3).MakePmf()
blaster = AlienBlaster(pmf)
blaster.Update(2)
thinkplot.Pdf(blaster)
prior.Mean(), blaster.Mean()
prior.MAP(), blaster.MAP()
k = 3
n = 10
x1 = 0.3
x2 = 0.4
0.3 * stats.binom.pmf(k, n, x1) + 0.7 * stats.binom.pmf(k, n, x2)
def flip(p):
return np.random.random() < p
def simulate_shots(n, p):
return np.random.binomial(n, p)
ks = []
for i in range(1000):
if flip(0.3):
k = simulate_shots(n, x1)
else:
k = simulate_shots(n, x2)
ks.append(k)
pmf = Pmf(ks)
thinkplot.Hist(pmf)
len(ks), np.mean(ks)
xs = np.random.choice(a=[x1, x2], p=[0.3, 0.7], size=1000)
Hist(xs)
ks = np.random.binomial(n, xs)
pmf = Pmf(ks)
thinkplot.Hist(pmf)
np.mean(ks)
from thinkbayes2 import MakeBinomialPmf
pmf1 = MakeBinomialPmf(n, x1)
pmf2 = MakeBinomialPmf(n, x2)
metapmf = Pmf({pmf1:0.3, pmf2:0.7})
metapmf.Print()
ks = [metapmf.Random().Random() for _ in range(1000)]
pmf = Pmf(ks)
thinkplot.Hist(pmf)
np.mean(ks)
from thinkbayes2 import MakeMixture
mix = MakeMixture(metapmf)
thinkplot.Hist(mix)
mix.Mean()
from thinkbayes2 import Beta
beta = Beta(2, 3).MakePmf()
metapmf = Pmf()
thinkplot.hist(beta)
for x, prob in beta.Items():
nested_pmf = MakeBinomialPmf(n, x)
metapmf[nested_pmf] = prob
mix = MakeMixture(metapmf)
thinkplot.hist(mix)
mix.Mean()
beta.Mean()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part One
Step2: In its first test, the new Alien Blaster 9000 takes 10 shots and hits 2 targets. Taking into account this data, what is the posterior distribution of $x$ for this missile? What is the value in the posterior with the highest probability, also known as the MAP?
Step4: Now suppose the new ultra-secret Alien Blaster 10K is being tested. In a press conference, an EDF general reports that the new design has been tested twice, taking two shots during each test. The results of the test are confidential, so the general won't say how many targets were hit, but they report
Step5: If we start with a uniform prior, we can see what the likelihood function looks like
Step6: A tie is most likely if they are both terrible shots or both very good.
Step7: The posterior mean and MAP are lower than in the prior.
Step8: So if we learn that the new design is "consistent", it is more likely to be consistently bad (in this case).
Step9: The answer is a value drawn from the mixture of the two distributions.
Step10: Here's what the distribution looks like.
Step11: The mean should be near 3.7. We can run this simulation more efficiently using NumPy. First we generate a sample of xs
Step12: Then for each x we generate a k
Step13: And the results look similar.
Step14: One more way to do the same thing is to make a meta-Pmf, which contains the two binomial Pmf objects
Step15: Here's how we can draw samples from the meta-Pmf
Step16: And here are the results, one more time
Step17: This result, which we have estimated three ways, is a predictive distribution, based on our uncertainty about x.
Step19: Exercise
|
7,014
|
<ASSISTANT_TASK:>
Python Code:
# As usual, a bit of setup
from __future__ import absolute_import, division, print_function
import time
import numpy as np
import matplotlib.pyplot as plt
import seaborn
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient
from cs231n.gradient_check import eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
    """Returns relative error."""
return np.max(np.abs(x - y) \
/ (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print('{}: {}'.format(k, v.shape))
# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(
num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(
np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
[ 3.25553199, 3.5141327, 3.77273342]])
# Compare your output with ours. The error should be around 1e-9.
print('Testing affine_forward function:')
print('difference:', rel_error(out, correct_out))
# Test the affine_backward function
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(
lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(
lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(
lambda b: affine_forward(x, w, b)[0], b, dout)
_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)
# The error should be around 1e-10
print('Testing affine_backward function:')
print('dx error:', rel_error(dx_num, dx))
print('dw error:', rel_error(dw_num, dw))
print('db error:', rel_error(db_num, db))
# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array(
[[ 0., 0., 0., 0., ],
[ 0., 0., 0.04545455, 0.13636364,],
[ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])
# Compare your output with ours. The error should be around 1e-8
print('Testing relu_forward function:')
print('difference:', rel_error(out, correct_out))
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(
lambda x: relu_forward(x)[0], x, dout)
_, cache = relu_forward(x)
dx = relu_backward(dout, cache)
# The error should be around 1e-12
print('Testing relu_backward function:')
print('dx error:', rel_error(dx_num, dx))
from cs231n.layer_utils import affine_relu_forward, \
affine_relu_backward
x = np.random.randn(2, 3, 4)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)
out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(
lambda x: affine_relu_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(
lambda w: affine_relu_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(
lambda b: affine_relu_forward(x, w, b)[0], b, dout)
print('Testing affine_relu_forward:')
print('dx error:', rel_error(dx_num, dx))
print('dw error:', rel_error(dw_num, dw))
print('db error:', rel_error(db_num, db))
num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)
dx_num = eval_numerical_gradient(
lambda x: svm_loss(x, y)[0], x, verbose=False)
loss, dx = svm_loss(x, y)
# Test svm_loss function. Loss should be around 9 and dx error
# should be 1e-9
print('Testing svm_loss...')
print('loss:', loss)
print('dx error:', rel_error(dx_num, dx))
dx_num = eval_numerical_gradient(
lambda x: softmax_loss(x, y)[0], x, verbose=False)
loss, dx = softmax_loss(x, y)
# Test softmax_loss function. Loss should be 2.3 and dx error
# should be 1e-8
print('\nTesting softmax_loss:')
print('loss:', loss)
print('dx error:', rel_error(dx_num, dx))
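As a sanity check on the ≈2.3 figure: with near-zero scores over C classes the softmax loss approaches log(C) ≈ 2.30 for C=10. A standard numerically stable sketch (not the graded cs231n implementation):

```python
import numpy as np

def softmax_loss(x, y):
    # shift by the row max for numerical stability before exponentiating
    shifted = x - x.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    N = x.shape[0]
    loss = -log_probs[np.arange(N), y].mean()
    dx = np.exp(log_probs)
    dx[np.arange(N), y] -= 1
    dx /= N
    return loss, dx
```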
N, D, H, C = 3, 5, 50, 7
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)
std = 1e-2
model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C,
weight_scale=std)
print('Testing initialization ... ')
W1_std = abs(model.params['W1'].std() - std)
b1 = model.params['b1']
W2_std = abs(model.params['W2'].std() - std)
b2 = model.params['b2']
assert W1_std < std / 10, 'First layer weights do not seem right'
assert np.all(b1 == 0), 'First layer biases do not seem right'
assert W2_std < std / 10, 'Second layer weights do not seem right'
assert np.all(b2 == 0), 'Second layer biases do not seem right'
print('Testing test-time forward pass ... ')
model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)
model.params['b1'] = np.linspace(-0.1, 0.9, num=H)
model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)
model.params['b2'] = np.linspace(-0.9, 0.1, num=C)
X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T
scores = model.loss(X)
correct_scores = np.asarray(
[[11.53165108, 12.2917344, 13.05181771, 13.81190102,
14.57198434, 15.33206765, 16.09215096],
[12.05769098, 12.74614105, 13.43459113, 14.1230412,
14.81149128, 15.49994135, 16.18839143],
[12.58373087, 13.20054771, 13.81736455, 14.43418138,
15.05099822, 15.66781506, 16.2846319 ]])
scores_diff = np.abs(scores - correct_scores).sum()
assert scores_diff < 1e-6, 'Problem with test-time forward pass'
print('Testing training loss (no regularization)')
y = np.asarray([0, 5, 1])
loss, grads = model.loss(X, y)
correct_loss = 3.4702243556
assert abs(loss - correct_loss) < 1e-10, \
'Problem with training-time loss'
model.reg = 1.0
loss, grads = model.loss(X, y)
correct_loss = 26.5948426952
assert abs(loss - correct_loss) < 1e-10, \
'Problem with regularization loss'
for reg in [0.0, 0.7]:
print('Running numeric gradient check with reg =', reg)
model.reg = reg
loss, grads = model.loss(X, y)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(
f, model.params[name], verbose=False)
print('{} relative error: {:.2e}'.format(
name, rel_error(grad_num, grads[name])))
model = TwoLayerNet()
solver = None
###################################################################
# TODO: Use a Solver instance to train a TwoLayerNet that #
# achieves at least 50% accuracy on the validation set. #
###################################################################
input_dim = 3 * 32 * 32
num_classes = 10
hidden_dim = 200
weight_scale = np.sqrt(2.0 / data['X_train'].shape[0])
reg = 1
learning_rate = 2e-3
lr_decay = 0.7
batch_size = 250
num_epochs = 5
print_every = 100
verbose = True
model = TwoLayerNet(
input_dim=input_dim,
hidden_dim=hidden_dim,
num_classes=num_classes,
weight_scale=weight_scale,
reg=reg,
)
solver = Solver(
model, data,
update_rule='sgd',
optim_config={'learning_rate': learning_rate},
lr_decay=lr_decay,
batch_size=batch_size,
num_epochs=num_epochs,
print_every=print_every,
verbose=verbose,
)
solver.train()
###################################################################
# END OF YOUR CODE #
###################################################################
# Run this cell to visualize training loss and train / val accuracy
plt.subplot(2, 1, 1)
plt.title('Training loss')
plt.plot(solver.loss_history, 'o')
plt.xlabel('Iteration')
plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.plot([0.5] * len(solver.val_acc_history), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
### [djn] ===>
# model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
# reg=reg, weight_scale=5e-2,
# dtype=np.float64)
# scores = model.loss(X)
# print('scores:', scores)
### [djn] <===
for reg in [0, 3.14, 30000]:
print('Running check with reg =', reg)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
#loss_func='softmax',
reg=reg, weight_scale=5e-2,
dtype=np.float64)
loss, grads = model.loss(X, y)
print('Initial loss:', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(
f, model.params[name], verbose=False, h=1e-5)
print('{} relative error: {:.2e}'.format(
name, rel_error(grad_num, grads[name])))
# TODO: Use a three-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = np.sqrt(2.0 / num_train) #1e-2
learning_rate = 1e-3 #1e-4
model = FullyConnectedNet([100, 100],
#loss_func='svm',
weight_scale=weight_scale,
dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
# TODO: Use a five-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
learning_rate = 1e-4
weight_scale = np.sqrt(2.0 / num_train) # 1e-5
model = FullyConnectedNet([100, 100, 100, 100],
#loss_func='svm',
weight_scale=weight_scale,
dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
from cs231n.optim import sgd_momentum
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-3, 'velocity': v}
next_w, _ = sgd_momentum(w, dw, config=config)
expected_next_w = np.asarray([
[ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],
[ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],
[ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],
[ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])
expected_velocity = np.asarray([
[ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],
[ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],
[ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],
[ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])
print('next_w error:', rel_error(next_w, expected_next_w))
print('velocity error: ',
rel_error(expected_velocity, config['velocity']))
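The expected values above are consistent with classical momentum, v ← μ·v − η·dw and w ← w + v, with μ defaulting to 0.9; a sketch under that assumption:

```python
import numpy as np

def sgd_momentum(w, dw, config=None):
    config = {} if config is None else config
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('momentum', 0.9)
    v = config.get('velocity', np.zeros_like(w))
    v = config['momentum'] * v - config['learning_rate'] * dw  # classical momentum
    next_w = w + v
    config['velocity'] = v
    return next_w, config
```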
num_train = 4000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
for update_rule in ['sgd', 'sgd_momentum']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100],
weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': 1e-2,
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print('')
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in solvers.items():
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
# Test RMSProp implementation; you should see errors less than 1e-7
from cs231n.optim import rmsprop
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'cache': cache}
next_w, _ = rmsprop(w, dw, config=config)
expected_next_w = np.asarray([
[-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],
[-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],
[ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],
[ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])
expected_cache = np.asarray([
[ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],
[ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],
[ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],
[ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])
print('next_w error:', rel_error(expected_next_w, next_w))
print('cache error:', rel_error(expected_cache, config['cache']))
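The expected cache and next_w above match the standard RMSProp rule with decay_rate 0.99 and epsilon 1e-8 (assumed defaults); a sketch:

```python
import numpy as np

def rmsprop(w, dw, config=None):
    config = {} if config is None else config
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('decay_rate', 0.99)
    config.setdefault('epsilon', 1e-8)
    config.setdefault('cache', np.zeros_like(w))
    # leaky moving average of squared gradients
    config['cache'] = (config['decay_rate'] * config['cache']
                       + (1 - config['decay_rate']) * dw * dw)
    next_w = w - config['learning_rate'] * dw / (np.sqrt(config['cache'])
                                                 + config['epsilon'])
    return next_w, config
```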
# Test Adam implementation; you should see errors around 1e-7 or less
from cs231n.optim import adam
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}
next_w, _ = adam(w, dw, config=config)
expected_next_w = np.asarray([
[-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],
[-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],
[ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],
[ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])
expected_v = np.asarray([
[ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],
[ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],
[ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],
[ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])
expected_m = np.asarray([
[ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],
[ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],
[ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],
[ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])
print('next_w error: ', rel_error(expected_next_w, next_w))
print('v error: ', rel_error(expected_v, config['v']))
print('m error: ', rel_error(expected_m, config['m']))
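These expected values match Adam with β1=0.9, β2=0.999, ε=1e-8 and the timestep incremented before the bias correction (assumed convention); a sketch:

```python
import numpy as np

def adam(w, dw, config=None):
    config = {} if config is None else config
    config.setdefault('learning_rate', 1e-3)
    config.setdefault('beta1', 0.9)
    config.setdefault('beta2', 0.999)
    config.setdefault('epsilon', 1e-8)
    config.setdefault('m', np.zeros_like(w))
    config.setdefault('v', np.zeros_like(w))
    config.setdefault('t', 0)
    config['t'] += 1  # increment the step before bias correction
    config['m'] = config['beta1'] * config['m'] + (1 - config['beta1']) * dw
    config['v'] = config['beta2'] * config['v'] + (1 - config['beta2']) * dw * dw
    mb = config['m'] / (1 - config['beta1'] ** config['t'])  # bias-corrected first moment
    vb = config['v'] / (1 - config['beta2'] ** config['t'])  # bias-corrected second moment
    next_w = w - config['learning_rate'] * mb / (np.sqrt(vb) + config['epsilon'])
    return next_w, config
```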
learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}
for update_rule in ['adam', 'rmsprop']:
print('running with ', update_rule)
model = FullyConnectedNet([100, 100, 100, 100, 100],
weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': learning_rates[update_rule]
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print('')
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in solvers.items():
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
best_model = None
###################################################################
# TODO: Train the best FullyConnectedNet that you can on #
# CIFAR-10. You might find batch normalization and dropout useful. #
# Store your best model in the best_model variable. #
###################################################################
input_dim = 3 * 32 * 32
num_classes = 10
#hidden_dims = [100, 200, 50, 50, 50]
weight_scale = np.sqrt(2.0 / data['X_train'].shape[0])
#reg = 1
#learning_rate = 2e-3
#lr_decay = 0.7
model = FullyConnectedNet(
input_dim=3 * 32 * 32,
num_classes=10,
#
hidden_dims=[450, 400, 300, 200],
#hidden_dims=[200, 400, 100, 100],
loss_func='softmax',
dropout=0.95, #0.95,
use_batchnorm=True,
weight_scale=weight_scale,
reg=1e-4, #1e-4,
)
solver = Solver(
model, data,
update_rule='adam',
optim_config={'learning_rate': 2e-3},
lr_decay=0.7,
batch_size=250,
num_epochs=5,
print_every=100,
verbose=True,
)
solver.train()
# visualize training loss and train / val accuracy
plt.subplot(2, 1, 1)
plt.title('Training loss')
plt.plot(solver.loss_history, 'o')
plt.xlabel('Iteration')
plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.plot([0.5] * len(solver.val_acc_history), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()
###################################################################
# END OF YOUR CODE #
###################################################################
best_model = model
X_test = data['X_test']
X_val = data['X_val']
y_test = data['y_test']
y_val = data['y_val']
y_test_pred = np.argmax(best_model.loss(X_test), axis=1)
y_val_pred = np.argmax(best_model.loss(X_val), axis=1)
print('Validation set accuracy:', (y_val_pred == y_val).mean())
print('Test set accuracy:', (y_test_pred == y_test).mean())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fully-Connected Neural Nets
Step4: Affine layer
Step5: Affine layer
Step6: ReLU layer
Step7: ReLU layer
Step8: "Sandwich" layers
Step9: Loss layers
Step10: Two-layer network
Step11: Solver
Step12: Multilayer network
Step13: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
Step14: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
Step15: Inline question
Step16: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
Step17: RMSProp and Adam
Step18: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules
Step19: Train a good model!
Step20: Test your model
|
7,015
|
<ASSISTANT_TASK:>
Python Code:
!pip install --upgrade google-api-python-client
from httplib2 import Http
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
http = Http()
credentials.authorize(http)
from apiclient.discovery import build
genomics = build('genomics', 'v1', http=http)
request = genomics.datasets().get(datasetId='10473108253681171589')
response = request.execute()
for entry in response.items():
    print("%s => %s" % entry)
dataset_id = '10473108253681171589' # This is the 1000 Genomes dataset ID
sample = 'NA12872'
reference_name = '22'
reference_position = 51003835
request = genomics.readgroupsets().search(
body={'datasetIds': [dataset_id], 'name': sample},
fields='readGroupSets(id)')
read_group_sets = request.execute().get('readGroupSets', [])
if len(read_group_sets) != 1:
raise Exception('Searching for %s didn\'t return '
'the right number of read group sets' % sample)
read_group_set_id = read_group_sets[0]['id']
request = genomics.reads().search(
body={'readGroupSetIds': [read_group_set_id],
'referenceName': reference_name,
'start': reference_position,
'end': reference_position + 1,
'pageSize': 1024},
fields='alignments(alignment,alignedSequence)')
reads = request.execute().get('alignments', [])
# Note: This is simplistic - the cigar should be considered for real code
bases = [read['alignedSequence'][
reference_position - int(read['alignment']['position']['position'])]
for read in reads]
print('%s bases on %s at %d are' % (sample, reference_name, reference_position))
from collections import Counter
for base, count in Counter(bases).items():
    print('%s: %s' % (base, count))
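The comment above flags that the CIGAR string is ignored; for illustration only, a hypothetical helper (not part of the Genomics API) sketching how CIGAR operations map a reference position to a read offset:

```python
def read_base_at(aligned_sequence, read_start, cigar_ops, ref_pos):
    """Return the read base aligned to ref_pos, or None if deleted/uncovered.

    cigar_ops is a list of (op, length) tuples, e.g. [('M', 76)].
    """
    read_off, ref = 0, read_start
    for op, length in cigar_ops:
        if op in ('M', '=', 'X'):          # consumes both read and reference
            if ref <= ref_pos < ref + length:
                return aligned_sequence[read_off + (ref_pos - ref)]
            read_off += length
            ref += length
        elif op in ('I', 'S'):             # consumes the read only
            read_off += length
        elif op in ('D', 'N'):             # consumes the reference only
            if ref <= ref_pos < ref + length:
                return None                # position deleted/skipped in this read
            ref += length
    return None
```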
request = genomics.callsets().search(
body={'variantSetIds': [dataset_id], 'name': sample},
fields='callSets(id)')
resp = request.execute()
call_sets = resp.get('callSets', [])
if len(call_sets) != 1:
raise Exception('Searching for %s didn\'t return '
'the right number of call sets' % sample)
call_set_id = call_sets[0]['id']
request = genomics.variants().search(
body={'callSetIds': [call_set_id],
'referenceName': reference_name,
'start': reference_position,
'end': reference_position + 1},
fields='variants(names,referenceBases,alternateBases,calls(genotype))')
variant = request.execute().get('variants', [])[0]
variant_name = variant['names'][0]
genotype = [variant['referenceBases'] if g == 0
else variant['alternateBases'][g - 1]
for g in variant['calls'][0]['genotype']]
print('the called genotype is %s for %s' % (','.join(genotype), variant_name))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create an Authenticated Client
Step2: And then we create a client for the Genomics API.
Step3: Send a request to the Genomics API
Step4: Next, we'll send this request to the Genomics API by calling the request.execute() method.
Step5: You will need enable the Genomics API for your project if you have not done so previously. Click on this link to enable the API in your project.
Step6: Success! We can see the name of the specified Dataset and a few other pieces of metadata.
Step7: Get read bases for a sample at a specific position
Step8: Once we have the read group set ID, lookup the reads at the position in which we are interested.
Step9: And we print out the results.
Step10: Get variants for a sample at a specific position
Step11: Once we have the call set ID, lookup the variants that overlap the position in which we are interested.
Step12: And we print out the results.
|
7,016
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from sklearn import datasets
N = 1000
X, color = datasets.samples_generator.make_s_curve(N, random_state=0)
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X[:,0], X[:,1], X[:,2], c=color, s = 1)
plt.title('orignal data')
plt.show()
from megaman.geometry import Geometry
geom = Geometry()
radius = 1.0
adjacency_method = 'cyflann'
adjacency_kwds = {'radius':radius} # ignore distances above this radius
affinity_method = 'gaussian'
affinity_kwds = {'radius':radius} # A = exp(-||x - y||^2 / radius^2)
laplacian_method = 'geometric'
laplacian_kwds = {'scaling_epps':radius} # scaling ensures convergence to Laplace-Beltrami operator
geom = Geometry(adjacency_method=adjacency_method, adjacency_kwds=adjacency_kwds,
affinity_method=affinity_method, affinity_kwds=affinity_kwds,
laplacian_method=laplacian_method, laplacian_kwds=laplacian_kwds)
geom.set_data_matrix(X)
adjacency_matrix = geom.compute_adjacency_matrix()
from scipy.sparse.csgraph import connected_components
rad1 = 0.2
# compute an adjacency matrix with a radius
geom.adjacency_kwds = {'radius':rad1}
adjacency_matrix = geom.compute_adjacency_matrix()
# compute the corresponding affinity matrix
geom.affinity_kwds = {'radius':rad1}
affinity_matrix = geom.compute_affinity_matrix({'radius':rad1})
(number_connected_components, labels) = connected_components(affinity_matrix)
print(number_connected_components)
rad1 = 0.5
# compute an adjacency matrix with a radius
geom.adjacency_kwds = {'radius':rad1}
adjacency_matrix = geom.compute_adjacency_matrix()
# compute the corresponding affinity matrix
geom.affinity_kwds = {'radius':rad1}
affinity_matrix = geom.compute_affinity_matrix({'radius':rad1})
(number_connected_components, labels) = connected_components(affinity_matrix)
print(number_connected_components)
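The same connectivity check can be reproduced without megaman via a brute-force distance matrix (adequate at N=1000; the helper name is illustrative):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def n_components_at_radius(points, radius):
    # pairwise Euclidean distances, thresholded into a radius-neighbor graph
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    adjacency = csr_matrix((d <= radius) & (d > 0))
    n, _ = connected_components(adjacency, directed=False)
    return n
```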
radius = 0.5
adjacency_method = 'cyflann'
adjacency_kwds = {'radius':radius}
affinity_method = 'gaussian'
affinity_kwds = {'radius':radius}
laplacian_method = 'symmetricnormalized'
laplacian_kwds = {'scaling_epps':radius}
geom = {'adjacency_method':adjacency_method, 'adjacency_kwds':adjacency_kwds,
'affinity_method':affinity_method, 'affinity_kwds':affinity_kwds,
'laplacian_method':laplacian_method, 'laplacian_kwds':laplacian_kwds}
geom = Geometry(adjacency_method=adjacency_method, adjacency_kwds=adjacency_kwds,
affinity_method=affinity_method, affinity_kwds=affinity_kwds,
laplacian_method=laplacian_method, laplacian_kwds=laplacian_kwds)
geom.set_data_matrix(X)
from megaman.embedding import (Isomap, LocallyLinearEmbedding,
LTSA, SpectralEmbedding)
n_components = 2
spectral = SpectralEmbedding(n_components=3, eigen_solver='amg',geom=geom, drop_first=False) # use 3 for spectral
embed_spectral = spectral.fit_transform(X)
ltsa = LTSA(n_components=n_components,eigen_solver='arpack',geom=geom)
embed_ltsa = ltsa.fit_transform(X)
lle = LocallyLinearEmbedding(n_components=n_components, eigen_solver='arpack',geom=geom)
embed_lle = lle.fit_transform(X)
isomap = Isomap(n_components=n_components, eigen_solver='arpack',geom=geom)
embed_isomap = isomap.fit_transform(X)
embed_spectral = embed_spectral[:, [0, 2]] * 50.0
tmp = embed_spectral[:, 1].copy()
embed_spectral[:,1] = embed_spectral[:,0].copy()
embed_spectral[:,0] = tmp.copy()
tmp = None
fig, axes = plt.subplots(nrows=2, ncols=2)
axes[0, 0].scatter(embed_ltsa[:, 0], embed_ltsa[:, 1], s=1, c=color)
axes[0, 0].set_title('LTSA')
axes[0, 1].scatter(embed_lle[:, 0], embed_lle[:, 1], s=1, c=color)
axes[0, 1].set_title('LLE')
axes[1, 0].scatter(embed_isomap[:, 0], embed_isomap[:, 1], s=1, c=color)
axes[1, 0].set_title('Isomap')
axes[1, 1].scatter(embed_spectral[:, 0], embed_spectral[:, 1], s=1, c=color)
axes[1, 1].set_title('Spectral Embedding')
plt.show()
geom.laplacian_method = 'geometric'
geom.laplacian_kwds = {'scaling_epps':radius} # scaling ensures convergence to Laplace-Beltrami operator
laplacian_matrix = geom.compute_laplacian_matrix()
from megaman.geometry.rmetric import RiemannMetric
rmetric_spectral = RiemannMetric(embed_spectral, geom.laplacian_matrix)
H_spectral = rmetric_spectral.get_dual_rmetric()
rmetric_ltsa = RiemannMetric(embed_ltsa, geom.laplacian_matrix)
H_ltsa = rmetric_ltsa.get_dual_rmetric()
rmetric_lle = RiemannMetric(embed_lle, geom.laplacian_matrix)
H_lle = rmetric_lle.get_dual_rmetric()
rmetric_isomap = RiemannMetric(embed_isomap, geom.laplacian_matrix)
H_isomap = rmetric_isomap.get_dual_rmetric()
from megaman.utils.covar_plotter import plot_cov_ellipse
n_plot = 50
rng = np.random.RandomState(8675309)
sample_points = rng.choice(range(N), n_plot, replace = False)
f, ax = plt.subplots()
ax.scatter(embed_isomap[:, 0], embed_isomap[:, 1], s = 1)
ax.set_aspect('equal') # if an ellipse is a circle, no distortion occurred.
for i in range(n_plot):
ii = sample_points[i]
cov = H_isomap[ii, :, :]
plot_cov_ellipse(cov*0.05, embed_isomap[ii, :] ,ax=ax, edgecolor='none', alpha = 0.5)
plt.show()
f, ax = plt.subplots()
ax.scatter(embed_spectral[:, 0], embed_spectral[:, 1], s = 1)
ax.set_aspect('equal') # if an ellipse is a circle, no distortion occurred.
for i in range(n_plot):
ii = sample_points[i]
cov = H_spectral[ii,:,:]
plot_cov_ellipse(cov*0.01, embed_spectral[ii,:] ,ax=ax, edgecolor='none', alpha = 0.5)
plt.show()
import time
N = 100000
X, color = datasets.samples_generator.make_s_curve(N, random_state=0)
radius = 1.5
adjacency_method = 'cyflann'
cyflann_kwds = {'index_type':'kmeans', 'branching':64, 'iterations':20, 'cb_index':0.4}
adjacency_kwds = {'radius':radius, 'cyflann_kwds':cyflann_kwds}
affinity_method = 'gaussian'
affinity_kwds = {'radius':radius}
laplacian_method = 'geometric'
laplacian_kwds = {'scaling_epps':radius}
geom = Geometry(adjacency_method=adjacency_method, adjacency_kwds=adjacency_kwds,
affinity_method=affinity_method, affinity_kwds=affinity_kwds,
laplacian_method=laplacian_method, laplacian_kwds=laplacian_kwds)
geom.set_data_matrix(X)
t0 = time.time()
adjacency_matrix = geom.compute_adjacency_matrix()
t1 = time.time() - t0
print(t1)
N = 100000
X, color = datasets.samples_generator.make_s_curve(N, random_state=0)
radius = 1.5
adjacency_method = 'cyflann'
cyflann_kwds = {'index_type':'kdtrees', 'num_trees':10}
adjacency_kwds = {'radius':radius, 'cyflann_kwds':cyflann_kwds}
affinity_method = 'gaussian'
affinity_kwds = {'radius':radius}
laplacian_method = 'geometric'
laplacian_kwds = {'scaling_epps':radius}
geom = Geometry(adjacency_method=adjacency_method, adjacency_kwds=adjacency_kwds,
affinity_method=affinity_method, affinity_kwds=affinity_kwds,
laplacian_method=laplacian_method, laplacian_kwds=laplacian_kwds)
geom.set_data_matrix(X)
t0 = time.time()
adjacency_matrix = geom.compute_adjacency_matrix()
t1 = time.time() - t0
print(t1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can take a look at the data set with the following plot
Step2: The Geometry Class
Step3: Geometry is the main class that will cache things like distance/adjacency,
Step4: At class instantiation you can also pass parameters & methods for
Step5: Inputing your Data to Geometry
Step6: Computing geometric matrices
Step7: Note that by default this is pass-by-reference; pass copy=True to get a copied version.
Step8: Dictionary valued geometry
Step9: Embedding
Step10: Now we will go through the API with the different embedding functions.
Step11: You have to select an output dimension. Since the data lie on an S-curve
Step12: Once you have your geometry selected, each method works fairly simply.
Step13: for visualization purposes we're going to use the 1st and 3rd
Step14: Visualization
Step15: Riemannian Metric
Step16: We can then estimate the R metric on each embedding with
Step17: We can use the cover_plotter module included in megaman to
Step18: we can also do this with the spectral embedding
Step19: Best parameters for large data sets
Step20: cyflann also has the option to use 'kdtrees' which has parameters
|
7,017
|
<ASSISTANT_TASK:>
Python Code:
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
import math
def grayscale(img):
    """Applies the Grayscale transform
    This will return an image with only one color channel
    but NOTE: to see the returned image as grayscale
    (assuming your grayscaled image is called 'gray')
    you should call plt.imshow(gray, cmap='gray')
    """
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
    """Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
    """Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
    """Applies an image mask.
    Only keeps the region of the image defined by the polygon
    formed from `vertices`. The rest of the image is set to black.
    """
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=5):
    """
    NOTE: this is the function you might want to use as a starting point once you want to
    average/extrapolate the line segments you detect to map out the full
    extent of the lane (going from the result shown in raw-lines-example.mp4
    to that shown in P1_example.mp4).
    Think about things like separating line segments by their
    slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
    line vs. the right line. Then, you can average the position of each of
    the lines and extrapolate to the top and bottom of the lane.
    This function draws `lines` with `color` and `thickness`.
    Lines are drawn on the image inplace (mutates the image).
    If you want to make the lines semi-transparent, think about combining
    this function with the weighted_img() function below
    """
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def annotationLine( slopeAvgs, theLines, lineIndex, line_img, vertices, lastLines ):
    """Create an annotation line using the longest segment and extending to ROI extent"""
try:
if theLines[ lineIndex ] is None:
return
if slopeAvgs[ 2 ] == 0:
slopeAvgs[ lineIndex ] = theLines[ lineIndex ][ 1 ]
elif slopeAvgs[ 2 ] < slopeAvgs[ 3 ]:
slopeAvgs[ lineIndex ] = slopeAvgs[ lineIndex ] * ((slopeAvgs[ 2 ] - 1) / slopeAvgs[ 2 ]) + (theLines[ lineIndex ][ 1 ] / slopeAvgs[ 2 ])
else:
slopeAvgs[ lineIndex ] = slopeAvgs[ lineIndex ] * ((slopeAvgs[ 3 ] - 1) / slopeAvgs[ 3 ]) + (theLines[ lineIndex ][ 1 ] / slopeAvgs[ 3 ])
currentLine = theLines[ lineIndex ]
currentSlope = slopeAvgs[ lineIndex ]
lineb = currentLine[ 0 ][ 1 ] - currentLine[ 0 ][ 0 ] * currentSlope
x1 = (vertices[ 0 ][ 0 ][ 1 ] - lineb) / currentSlope
x2 = (vertices[ 0 ][ 1 ][ 1 ] - lineb) / currentSlope
newLine = [[(int(x1), vertices[ 0 ][ 0 ][ 1 ],int(x2),vertices[ 0 ][ 1 ][ 1 ]) ]]
draw_lines(line_img, newLine,thickness=10,color=[255, 0, 255])
lastLines[ lineIndex ] = newLine
except:
draw_lines(line_img, lastLines[ lineIndex ],thickness=10,color=[255, 0, 255])
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap, vertices=None, draw_segments=False):
    """
    `img` should be the output of a Canny transform.
    Returns an image with hough lines drawn.
    """
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
#
    # Find longest segments in frame
#
theSlopes = [ 0.0, 0.0 ]
theCounts = [ 0.0, 0.0 ]
theLengths = [ 0.0, 0.0 ]
theLines = [ None, None ]
    for aLine in lines:
        aLine = aLine[ 0 ]
        if aLine[ 2 ] == aLine[ 0 ]:
            continue  # skip vertical segments to avoid division by zero
        lineSlope = ( (aLine[ 3 ] - aLine[ 1 ]) / (aLine[ 2 ] - aLine[ 0 ]) )
#
# Filter near horizontal segments
#
if abs( lineSlope ) < 0.05:
continue
lineLength = math.sqrt( (aLine[ 3 ] - aLine[ 1 ]) ** 2 + (aLine[ 2 ] - aLine[ 0 ]) ** 2 )
if lineSlope > 0:
theSlopes[ 0 ] += lineSlope
theCounts[ 0 ] += 1
if lineLength > theLengths[ 0 ]:
theLines[ 0 ] = [aLine,lineSlope,lineLength]
theLengths[ 0 ] = lineLength
else:
theSlopes[ 1 ] += lineSlope
theCounts[ 1 ] += 1
if lineLength > theLengths[ 1 ]:
theLines[ 1 ] = [aLine,lineSlope,lineLength]
theLengths[ 1 ] = lineLength
#
# Draw ROI
#
    if False:  # set to True to visualize the ROI outline
newLine = [[(vertices[ 0 ][ 0 ][ 0 ], vertices[ 0 ][ 0 ][ 1 ],vertices[ 0 ][ 1 ][ 0 ],vertices[ 0 ][ 1 ][ 1 ]) ]]
draw_lines( line_img, newLine,color=[0, 0, 255] )
newLine = [[(vertices[ 0 ][ 2 ][ 0 ], vertices[ 0 ][ 2 ][ 1 ],vertices[ 0 ][ 3 ][ 0 ],vertices[ 0 ][ 3 ][ 1 ]) ]]
draw_lines( line_img, newLine,color=[0, 0, 255] )
if draw_segments:
draw_lines(line_img, lines)
#
# Take longest lines and extend to ROI
#
annotationLine( slopeAvgs, theLines, 0, line_img, vertices, lastLines )
annotationLine( slopeAvgs, theLines, 1, line_img, vertices, lastLines )
slopeAvgs[ 2 ] += 1
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
    """`img` is the output of hough_lines(): a blank (all black) image
    with lines drawn on it.

    `initial_img` should be the image before any processing.

    The result image is computed as follows:
        initial_img * α + img * β + λ
    NOTE: initial_img and img must be the same shape!
    """
return cv2.addWeighted(initial_img, α, img, β, λ)
import os
import math  # used by hough_lines() for segment lengths
os.listdir("test_images/")
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
imagePath = "test_images/"
arrImages = os.listdir("test_images/")
figureIndex = 0
slopeAvgs = [ 0.0, 0.0, 0, 30 ]
lastLines = [ None, None ]
for anImage in arrImages:
# Read in and grayscale the image
image = mpimg.imread("test_images/" + anImage)
gray = grayscale(image)
# Define a kernel size and apply Gaussian smoothing
kernel_size = 5
blur_gray = gaussian_blur( gray, kernel_size )
# Define our parameters for Canny and apply
low_threshold = 50
high_threshold = 150
edges = canny(blur_gray, low_threshold, high_threshold)
# Next we'll create a masked edges image using cv2.fillPoly()
mask = np.zeros_like(edges)
ignore_mask_color = 255
# This time we are defining a four sided polygon to mask
imshape = image.shape
vertices = np.array([[( 95, imshape[0] ), ( 460, 310 ), ( 480, 310 ), ( 900, imshape[0])]], dtype=np.int32)
masked_edges = region_of_interest( edges, vertices )
# Define the Hough transform parameters
# Make a blank the same size as our image to draw on
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 5 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 10 #minimum number of pixels making up a line
max_line_gap = 1 # maximum gap in pixels between connectable line segments
line_image = np.copy(image)*0 # creating a blank to draw lines on
# Run Hough on edge detected image
line_image = hough_lines( masked_edges, rho, theta, threshold, min_line_length, max_line_gap, vertices, True )
# Create a "color" binary image to combine with line image
color_edges = np.dstack((edges, edges, edges))
lines_edges = weighted_img( color_edges, line_image)
plt.figure(figureIndex)
plt.imshow(lines_edges)
plt.figure(figureIndex + 1)
plt.imshow(image)
figureIndex = figureIndex + 2
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
def process_image(image):
gray = grayscale(image)
# Define a kernel size and apply Gaussian smoothing
kernel_size = 5
blur_gray = gaussian_blur( gray, kernel_size )
# Define our parameters for Canny and apply
low_threshold = 50
high_threshold = 150
edges = canny(blur_gray, low_threshold, high_threshold)
# Next we'll create a masked edges image using cv2.fillPoly()
mask = np.zeros_like(edges)
ignore_mask_color = 255
# This time we are defining a four sided polygon to mask
imshape = image.shape
# Regular vid ROIs
vertices = np.array([[( 125, imshape[0] ), ( 400, 340 ), ( 520, 340 ), ( 900, imshape[0])]], dtype=np.int32)
# Challenge ROI
# vertices = np.array([[( 155 + 80, imshape[0] - 60 ), ( 590, 450 ), ( 750, 450 ), ( 1200 - 60, imshape[0] - 60)]], dtype=np.int32)
masked_edges = region_of_interest( edges, vertices )
# Define the Hough transform parameters
# Make a blank the same size as our image to draw on
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 10 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 15 #minimum number of pixels making up a line
max_line_gap = 1 # maximum gap in pixels between connectable line segments
line_image = np.copy(image)*0 # creating a blank to draw lines on
# Run Hough on edge detected image
line_image = hough_lines( masked_edges, rho, theta, threshold, min_line_length, max_line_gap, vertices, False )
# Create a "color" binary image to combine with line image
color_edges = np.dstack((edges, edges, edges))
lines_edges = weighted_img( image, line_image)
return lines_edges
fileName = 'solidWhiteRight.mp4'
white_output = 'test_videos_output/' + fileName
slopeAvgs = [ 0.0, 0.0, 0, 30 ]
lastLines = [ None, None ]
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/" + fileName )
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(white_output))
fileName = 'solidYellowLeft.mp4'
yellow_output = 'test_videos_output/' + fileName
slopeAvgs = [ 0.0, 0.0, 0, 30 ]
lastLines = [ None, None ]
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/' + fileName)
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(yellow_output))
fileName = 'challenge.mp4'
challenge_output = 'test_videos_output/' + fileName
slopeAvgs = [ 0.0, 0.0, 0, 30 ]
lastLines = [ None, None ]
####
#### NOTE: the ROI must be changed in process_image for this to work properly
####
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/' + fileName)
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(challenge_output))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in an Image
Step10: Ideas for Lane Detection Pipeline
Step11: Test Images
Step12: Build a Lane Finding Pipeline
Step13: Test on Videos
Step14: Let's try the one with the solid white lane on the right first ...
Step16: Play the video inline, or, if you prefer, find the video in your filesystem (it should be in the same directory) and play it in your video player of choice.
Step18: Improve the draw_lines() function
Step20: Writeup and Submission
|
7,018
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
n_segurados = 1000
prob = 0.35
# List that stores the number of retirees for each scenario
lista_nap = []
# List of seeds -> 50 scenarios
seeds = range(0,50)
# Run 50 different scenarios (seeds)
for seed in seeds:
    # Set the seed for random number generation
    np.random.seed(seed)
    # Generate 1000 random numbers between 0 and 1
    lista_na = np.random.rand(n_segurados)
    # Number of retirees
    num_ap = 0
    # Determine how many will retire in this scenario
    for na in lista_na:
        # If the random draw is less than or equal to the retirement
        # probability, the insured person retires
        if na <= prob:
            num_ap += 1
    lista_nap.append(num_ap)
print(lista_nap)
media = np.mean(lista_nap)
print('Mean: {}'.format(media))
std = np.std(lista_nap)
print('Standard deviation: {}'.format(std))
import matplotlib.pyplot as plt
%matplotlib inline
medias = [350] * len(seeds)
fig, ax = plt.subplots()
ax.plot(seeds, lista_nap, '--', linewidth=2, label='Probabilistic method')
ax.plot(seeds, medias, label='Deterministic method')
ax.set_ylabel('Number of retirees')
ax.set_xlabel('Seed')
ax.set_title('Stock calculation using different methods')
ax.legend()
plt.show()
np.var(lista_nap)
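For comparison, the whole scenario loop above collapses into a single vectorized call, since each scenario is a Binomial(n, p) draw — a sketch, not part of the original notebook (variable names are illustrative):

```python
import numpy as np

n_insured = 1000    # mirrors n_segurados in the notebook
prob = 0.35
n_scenarios = 50

rng = np.random.default_rng(0)
# One Binomial(n, p) draw per scenario: the number of retirees
retirees = rng.binomial(n_insured, prob, size=n_scenarios)

mean, std = retirees.mean(), retirees.std()
```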
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note that, unlike the simple method, each scenario (seed) produces a different outcome, i.e., the number of insured people who retire varies
Step2: If we compute the mean, we get a value very close to that of the deterministic method.
Step3: However, with different scenarios we can also compute dispersion measures, such as the standard deviation.
Step4: Visualizing in a chart
Step5: Applying the probabilistic method to the stock calculation (where the probabilities are applied), we get a different projection/result for each seed.
|
7,019
|
<ASSISTANT_TASK:>
Python Code:
### imports
from IPython.core.debugger import Tracer
#Tracer()()
import os, sys, time
### prevent the dying jupyter notebook
stdout = sys.stdout
#sys.stdout = sys.__stdout__ # did not work to restoure print -> console
#sys.stdout = open('keras_output.txt', 'a+')
#sys.stdout = stdout
import sys, os, argparse, logging # NOQA
import importlib
from pprint import pprint
from tqdm import tqdm
import twBase
from twBase import * # NOQA
importlib.reload(twBase)
import twQuoraRun
importlib.reload(twQuoraRun)
#from twQuoraRun import * # NOQA
#Allow relative imports to directories above cwd/
sys.path.insert(1, os.path.join(sys.path[0], '..'))
%matplotlib inline
np.random.seed(42)
params = {
"DATA": {
"BASE_DIR": "./data",
"GLOVE_DIR": './data/glove.6B',
"DATA_FN": './data/train.csv',
"DATA_TEST_FN": './data/test.csv',
"H5DATA_FN": "./data/train_emb.40.300.200000.h5",
"H5DATA_TEST_FN": "./data/test_emb.40.300.200000.h5",
"embedOnly": True,
#"H5DATA_FN": "./data/train.40.300.200000.h5",
#"H5DATA_TEST_FN": "./data/test.40.300.200000.h5",
#"H5DATA_FN": "./data/train.s.40.300.200000.h5.s",
#"H5DATA_TEST_FN": "./data/test.s.40.300.200000.h5.s",
"EMBED_FN": './data/emb.300.200k.npy',
"isSample": False,
},
"SPACY": {
"MODEL": "en1G", # en1G
#"MODEL": "en", # en1G
},
"MODEL": {
"MAX_SEQUENCE_LENGTH": 40, # 60
"MAX_NB_WORDS": 200000, # 40000, 200000
"EMBED_DIM": 300,
"BASIC_FEAT_DIM": 26,
"SUFFIX": "xx",
#"SUFFIX": "01",
"NAME": None,
#"CLASS": "CosModel",
#"CLASS": "RNNSimple",
#"CLASS": "FeatsSimple",
#"CLASS": "BasicSimple",
#"CLASS": "EmbedCNN",
#"CLASS": "CNN1D",
#"CLASS": "All",
"CLASS": "AllSmall",
"LOAD_WEIGHTS": None,
"MASKING": False,
"RNN_TOPOLOGY": [100],
"CNN_TOPOLOGY": [128, 256, 512, 1024],
"TOPOLOGY": [256, 512, 1024],
"DROPOUT": [0.2, 0.3, 0.4, 0.5],
"REGULIZER": [0.01, 0.0], # k_reg, a_reg: 0.005, order of 10
"OPTIMIZER": ['Adam', {"lr": 0.001}]
},
'SIMILARITY': {
#'mode': 'cosine',
'mode': 'euclidean',
'gamma': 1,
'c': 1,
'd': 2,
'dropout': 0.5,
},
"TRAINING": {
"DEVICE": "/cpu:0",
"VALIDATION_SPLIT": 0.2,
"BATCH_SIZE": 2048,
"EPOCHS": 80,
"PATIENCE": 25,
"DECAY_EPOCH": 30,
"SAVE_BEST": True,
"isShuffle": False,
"VERBOSITY": 1,
"UNFREEZE": True
},
"OUTPUT": {
"BASE_DIR": "./data/out",
"MODEL_DIR": None,
"MODEL_PRETRAINED": None,
}
}
# params generated by jinja
###{{params}}###
P = Struct(params)
#P.DATA.H5DATA_FN = "./data/train.{}.{}.{}.h5".format(P.MODEL.MAX_SEQUENCE_LENGTH, P.MODEL.EMBED_DIM, P.MODEL.MAX_NB_WORDS) # build name
#P.DATA.H5DATA_TEST_FN = "./data/test.{}.{}.{}.h5".format(P.MODEL.MAX_SEQUENCE_LENGTH, P.MODEL.EMBED_DIM, P.MODEL.MAX_NB_WORDS) # build name
P.MODEL.NAME = "{}.{}".format(P.MODEL.CLASS, P.MODEL.SUFFIX) # build name
P.OUTPUT.MODEL_DIR = os.path.join(P.OUTPUT.BASE_DIR,
"{}.{}.{}".format(P.MODEL.MAX_SEQUENCE_LENGTH, P.MODEL.EMBED_DIM, P.MODEL.MAX_NB_WORDS),
P.MODEL.NAME) # build output dir
import importlib
import twQuoraModel
importlib.reload(twQuoraModel)
#from twQuoraModel import CNN1D
e1 = twQuoraRun.Evaluator(P, model=twQuoraModel.CNN1D)
import importlib
import twQuoraModel
importlib.reload(twQuoraModel)
#from twQuoraModel import All
e3 = twQuoraRun.Evaluator(P, model=twQuoraModel.RNNSimple)
import importlib
import twQuoraModel
importlib.reload(twQuoraModel)
#from twQuoraModel import All
e2 = twQuoraRun.Evaluator(P, model=twQuoraModel.AllSmall)
# load the weights
path = './data/out/40.300.200000/CNN1D.03.200k/_weights_epoch_26.0.364.h5'
e1.model.model.load_weights(path)
# load the weights
path = './data/out/40.300.200000/RNNSimple.03.200k/_weights_epoch_27.0.342.h5'
e3.model.model.load_weights(path)
# sanity check
cnn_emb = e1.model.model.layers[2].get_weights()
cnn_emb[0].shape
all_emb = e2.model.model.layers[2].get_weights()
all_emb[0].shape
assert cnn_emb[0].shape == all_emb[0].shape
# syntactic sugar
Cnn = e1.model.model
All = e2.model.model
Rnn = e3.model.model
import keras
cnn_layerlist = []
rnn_layerlist = []
for i, (allL, cnnL, rnnL) in enumerate(twBase.outer_zip((All.layers, 'x'), (Cnn.layers, '_'), (Rnn.layers, '.'))):
if isinstance(allL, keras.engine.Layer):
name1 = allL.name
else:
name1 = allL
if isinstance(cnnL, keras.engine.Layer):
name2 = cnnL.name
else:
name2 = cnnL
if isinstance(rnnL, keras.engine.Layer):
name3 = rnnL.name
else:
name3 = rnnL
print("{:2d}: {:25.25} {:25.25} {!s:25.25}".format(i, name1, name2, name3))
cnn_layerlist.append((i, name1, name2))
rnn_layerlist.append((i, name1, name3))
df = pd.DataFrame(cnn_layerlist, columns=['ix', 'All', 'Cnn'])
df.to_csv(os.path.join(P.DATA.BASE_DIR, 'weight_transfer_cnn.csv'), index=False, sep=';')
df = pd.DataFrame(rnn_layerlist, columns=['ix', 'All', 'Rnn'])
df.to_csv(os.path.join(P.DATA.BASE_DIR, 'weight_transfer_rnn.csv'), index=False, sep=';')
# manipulated weight transfer list
df = pd.read_csv(os.path.join(P.DATA.BASE_DIR, 'weight_transfer_cnn.csv'), sep=';')
df
# Load CNN weights
for i in range(len(df)):
    l1 = All.get_layer(df.loc[i, 'All'])   # .loc replaces the removed pandas .ix
    l2 = Cnn.get_layer(df.loc[i, 'Cnn'])
    print("Copy weights from {} -> {}".format(l2.name, l1.name))
    l1.set_weights(l2.get_weights())
    l1.trainable = False # must be set BEFORE compile
# manipulated weight transfer list
df = pd.read_csv(os.path.join(P.DATA.BASE_DIR, 'weight_transfer_rnn.csv'), sep=';')
df
# Load RNN weights
for i in range(len(df)):
    l1 = All.get_layer(df.loc[i, 'All'])   # .loc replaces the removed pandas .ix
    l2 = Rnn.get_layer(df.loc[i, 'Rnn'])
    print("Copy weights from {} -> {}".format(l2.name, l1.name))
    l1.set_weights(l2.get_weights())
    l1.trainable = False # must be set BEFORE compile
# compile due to set trainable and save
optimizer = getattr(keras.optimizers, P.MODEL.OPTIMIZER[0])(lr=P.MODEL.OPTIMIZER[1]['lr'])
All.compile(loss='binary_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
All.save(os.path.join(P.OUTPUT.MODEL_DIR, 'AllSmall.preloaded.h5'))
# Load the model and check trainable
from keras.models import load_model
#del All # deletes the existing model
# returns a compiled model
# identical to the previous one
model = load_model(os.path.join(P.OUTPUT.MODEL_DIR, 'All.preloaded.h5'))
model.get_layer('bidirectional_4').trainable
# check layer topology
n = 5
w = e1.model.model.layers[n].get_weights()
len(w), type(w)
w[0].shape, w[1].shape
# check layer identity after copying over the weights
n = 5
w1 = e1.model.model.layers[n].get_weights()
w2 = e2.model.model.layers[n].get_weights()
if isinstance(w1, list):
np.allclose(w1[0], w2[0])
w1[0].sum(), w2[0].sum()
else:
np.allclose(w1, w2)
w1.sum(), w2.sum()
# Get layer by name
Cnn.get_layer("CNN1d.ConvL4.CNN1d")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the Models
Step2: Create the Mapping and Tranfer the Weights
Step3: Update the CSV file in Excel
|
7,020
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
data = pd.read_csv('https://github.com/albahnsen/PracticalMachineLearningClass/raw/master/datasets/dataTrain_carListings.zip')
data.head()
data.shape
data.Price.describe()
data.plot(kind='scatter', y='Price', x='Year')
data.plot(kind='scatter', y='Price', x='Mileage')
data.columns
data_test = pd.read_csv('https://github.com/albahnsen/PracticalMachineLearningClass/raw/master/datasets/dataTest_carListings.zip', index_col=0)
data_test.head()
data_test.shape
import numpy as np
np.random.seed(42)
y_pred = pd.DataFrame(np.random.rand(data_test.shape[0]) * 75000 + 5000, index=data_test.index, columns=['Price'])
y_pred.to_csv('test_submission.csv', index_label='ID')
y_pred.head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise P1.1 (50%)
Step2: Submission example
|
7,021
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
lst = list(range(1000))
arr = np.arange(1000)
arr[:10]
arr[10:20]
arr[10:20:2]
type(arr)
%timeit [i ** 2 for i in lst]
%timeit arr ** 2
arr[5:10]
arr[-1]
['a', 2, (1, 3)]
lst[0] = 'some other type'
lst[:3]
arr[0] = 'some other type'
arr.dtype
arr[0] = 1.234
arr[:10]
Image("https://docs.scipy.org/doc/numpy/_images/threefundamental.png")
np.zeros(5, dtype=float)
np.zeros(5, dtype=int)
np.zeros(5, dtype=complex)
np.ones(5, dtype=float)
np.linspace(0, 1, num=5)
np.logspace(1, 4, num=4)
np.random.randn(5)
norm10 = np.random.normal(loc=9, scale=3, size=10)
%load solutions/random_number.py
mask = norm10 > 9
mask
norm10[mask]
norm10[[1, 4, 6]]
norm10[norm10 > 9] = 0
norm10
norm10[[1, 4, 7]] = 10
norm10
x = np.arange(10)
x
y = x[::2]
y
y[3] = 100
y
x
a = norm10[[0, 1, 5]]
a
a[:] = -10
a
norm10
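A direct way to confirm the copy/view distinction demonstrated above is `np.shares_memory` — a quick sketch:

```python
import numpy as np

x = np.arange(10)
view = x[::2]            # basic slicing returns a view
copy = x[[0, 2, 4]]      # fancy (integer) indexing returns a copy

view_shares = np.shares_memory(x, view)   # True: writes to `view` affect `x`
copy_shares = np.shares_memory(x, copy)   # False: `copy` owns its own buffer
```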
# [Solution here]
%load solutions/copies_vs_views.py
samples_list = [[632, 1638, 569, 115], [433,1130,754,555]]
samples_array = np.array(samples_list)
samples_array.shape
print(samples_array)
samples_list[0][1]
samples_array[0,1]
np.zeros((2,3))
np.random.normal(10, 3, size=(2, 4))
arr = np.arange(8).reshape(2,4)
arr
arr[1, 2:4]
arr[:, 2]
arr[1]
print('Data type :', samples_array.dtype)
print('Total number of elements :', samples_array.size)
print('Number of dimensions :', samples_array.ndim)
print('Shape (dimensionality) :', samples_array.shape)
print('Memory used (in bytes) :', samples_array.nbytes)
print('Minimum and maximum :', samples_array.min(), samples_array.max())
print('Sum, mean and standard deviation:', samples_array.sum(), samples_array.mean(), samples_array.std())
samples_array.sum(axis=0)
samples_array.sum(axis=1)
samples_array.sum(axis=1, keepdims=True)
samples_array.T
[attr for attr in dir(samples_array) if not attr.startswith('__')]
Image('https://ipython-books.github.io/images/layout.png')
%load solutions/matrix_creation.py
sample1 = np.array([632, 1638, 569, 115])
sample2 = np.array([433,1130,754,555])
sample_sum = sample1 + sample2
np.array([632, 1638, 569, 115])
print('{0} X {1} = {2}'.format(sample1, sample2, sample1 * sample2))
print('{0} . {1} = {2}'.format(sample1, sample2, sample1 @ sample2))
sample1 + 1.5
sample1 + np.array([7,8])
b = np.array([10, 20, 30, 40])
bcast_sum = sample1 + b
print('{0}\n\n+ {1}\n{2}\n{3}'.format(sample1, b, '-'*21, bcast_sum))
c = np.array([-100, 100])
sample1 + c
cplus = c[:, np.newaxis]
cplus
cplus.shape
sample1 + cplus
sample1[:, np.newaxis] + c
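The same trailing-dimension matching can be checked without allocating the result; `np.broadcast_shapes` (available in NumPy 1.20+) reports the broadcast shape — a small sketch on the arrays above:

```python
import numpy as np

a = np.array([632, 1638, 569, 115])
c = np.array([-100, 100])

# (4, 1) broadcast against (2,) gives an outer sum of shape (4, 2)
outer = a[:, np.newaxis] + c
shape = np.broadcast_shapes(a[:, np.newaxis].shape, c.shape)
```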
# [Solution here]
%load solutions/broadcasting.py
import pandas as pd
counts = pd.Series([632, 1638, 569, 115])
counts
counts.values
counts.index
bacteria = pd.Series([632, 1638, 569, 115],
index=['Firmicutes', 'Proteobacteria',
'Actinobacteria', 'Bacteroidetes'])
bacteria
np.log(bacteria)
bacteria_dict = {
'Firmicutes': 632,
'Proteobacteria': 1638,
'Actinobacteria': 569,
'Bacteroidetes': 115
}
pd.Series(bacteria_dict)
data = pd.DataFrame({'value': [632, 1638, 569, 115, 433, 1130, 754, 555],
'patient': [1, 1, 1, 1, 2, 2, 2, 2],
'phylum': ['Firmicutes', 'Proteobacteria', 'Actinobacteria',
'Bacteroidetes', 'Firmicutes', 'Proteobacteria',
'Actinobacteria', 'Bacteroidetes']})
data
data.head()
data.columns
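Boolean masks and `groupby` work on DataFrames much like NumPy indexing — a short sketch on the same toy data (column names as above):

```python
import pandas as pd

data = pd.DataFrame({'value': [632, 1638, 569, 115, 433, 1130, 754, 555],
                     'patient': [1, 1, 1, 1, 2, 2, 2, 2],
                     'phylum': ['Firmicutes', 'Proteobacteria',
                                'Actinobacteria', 'Bacteroidetes'] * 2})

# Boolean mask: rows belonging to patient 1
patient1 = data[data.patient == 1]

# Aggregate counts per patient
total_by_patient = data.groupby('patient')['value'].sum()
```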
vessels = pd.read_csv("../data/AIS/vessel_information.csv")
vessels.head()
%load solutions/read_nfs_1974.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: NumPy Arrays and Indexing
Step2: Here's what the array looks like
Step3: We can index arrays in the same ways as lists
Step4: Arrays vs Lists
Step5: We can't do this with an array
Step6: The data type is contained in the dtype attribute
Step7: The dtype is fixed
Step8: What is an Array
Step9: Array Creation
Step10: We have seen how the arange function generates an array for a range of integers.
Step11: Random Number Generation
Step12: $X \sim N(9, 3)$
Step13: Exercise
Step14: Index Arrays
Step15: Integer Indexing
Step16: Asssignment
Step17: Copies vs Views
Step18: This, however, produces a copy
Step19: Exercise
Step20: Multidimensional Arrays
Step21: With two-dimensional arrays we start seeing the convenience of NumPy data structures
Step22: Most of the array creation functions listed above can be passed multidimensional shapes. For example
Step23: In fact, an array can be reshaped at any time, as long as the total number of elements is unchanged. For example, if we want a 2x4 array with numbers increasing from 0, the easiest way to create it is via the array's reshape method.
Step24: With multidimensional arrays, you can also use slices, and you can mix and match slices and single indices in the different dimensions (using the same array as above)
Step25: If you only provide one index, then you will get the corresponding row.
Step26: Now that we have seen how to create arrays with more than one dimension, it's a good idea to look at some of the most useful properties and methods that arrays have. The following provide basic information about the size, shape and data in the array
Step27: Arrays also have many useful methods, some especially useful ones are
Step28: For these methods, the above operations area all computed on all the elements of the array. But for a multidimensional array, it's possible to do the computation along a single dimension, by passing the axis parameter; for example
Step29: Notice that summing over the rows returned a 1d array above.
Step30: Another widely used property of arrays is the .T attribute, which allows you to access the transpose of the array
Step31: There is a wide variety of methods and properties of arrays.
Step32: What is a Multi-Dimensional Array
Step33: Exercises
Step34: Array Operations, Methods, and Functions
Step35: This includes the multiplication operator -- it does not perform matrix multiplication, as is the case in Matlab, for example
Step36: In Python 3.5, you can use the @ operator to get the inner product (or matrix multiplication) (!)
Step37: this implies that the dimension of the arrays for each operation must match in size,
Step38: In this case, numpy looked at both operands and saw that the first was a one-dimensional array of length 4 and the second was a scalar, considered a zero-dimensional object. The broadcasting rules allow numpy to
Step39: Remember that matching begins at the trailing dimensions. Here, c would need to have a trailing dimension of 1 for the broadcasting to work. We can augment arrays with dimensions on the fly, by indexing it with a np.newaxis object, which adds an "empty" dimension
Step40: Exercises
Step41: What Else
Step42: Pandas Series
Step43: If an index is not specified, a default sequence of integers is assigned as the index. A NumPy array comprises the values of the Series, while the index is a pandas Index object.
Step44: Index Object
Step45: We can assign meaningful labels to the index, if they are available
Step46: NumPy's math functions and other operations can be applied to Series without losing the data structure.
Step47: Creation from a dict
Step48: Pandas DataFrames
Step49: We often will want to peak at the first few rows of a DataFrame
Step50: Columns as an Index
Step51: Reading and Writing Files
Step52: Exercises
|
7,022
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import skfuzzy as fuzz
import matplotlib.pyplot as plt
%matplotlib inline
x = np.arange(30, 100, 0.1)
## LINEAR
# Create the membership functions
x_cold_lin = fuzz.trimf(x, [30, 30, 50])
x_mild_lin = fuzz.trimf(x, [30, 50, 70])
x_warm_lin = fuzz.trimf(x, [50, 70, 100])
x_hot_lin = fuzz.trimf(x, [70, 100, 100])
# Plot the results of the linear fuzzy membership
plt.figure()
plt.plot(x, x_cold_lin, 'b', linewidth=1.5, label='Cold')
plt.plot(x, x_mild_lin, 'k', linewidth=1.5, label='Mild')
plt.plot(x, x_warm_lin, 'm', linewidth=1.5, label='Warm')
plt.plot(x, x_hot_lin, 'r', linewidth=1.5, label='Hot')
plt.title('Temperature, Linear Fuzzy')
plt.ylabel('Membership')
plt.xlabel('Temperature (Fahrenheit)')
plt.legend(loc='center right', bbox_to_anchor=(1.25, 0.5),
ncol=1, fancybox=True, shadow=True);
## GAUSSIAN
# Create the membership functions
x_cold_gauss = fuzz.gaussmf(x, 30, 8)
x_mild_gauss = fuzz.gaussmf(x, 50, 8)
x_warm_gauss = fuzz.gaussmf(x, 70, 12)
x_hot_gauss = fuzz.gaussmf(x, 100, 8)
# Plot the results of the gaussian fuzzy membership
plt.figure()
plt.plot(x, x_cold_gauss, 'b', linewidth=1.5, label='Cold')
plt.plot(x, x_mild_gauss, 'k', linewidth=1.5, label='Mild')
plt.plot(x, x_warm_gauss, 'm', linewidth=1.5, label='Warm')
plt.plot(x, x_hot_gauss, 'r', linewidth=1.5, label='Hot')
plt.title('Temperature, Gaussian Fuzzy')
plt.ylabel('Membership')
plt.xlabel('Temperature')
plt.legend(loc='center right', bbox_to_anchor=(1.25, 0.5),
ncol=1, fancybox=True, shadow=True);
# Plot to show the sum is not always 1
x_sum = x_cold_gauss + x_mild_gauss + \
x_warm_gauss + x_hot_gauss
plt.figure()
plt.plot(x, x_sum, 'y', linewidth=1.5, label='Total')
plt.title('Temperature, Gaussian Fuzzy Sum')
plt.ylabel('Membership')
plt.xlabel('Temperature')
plt.legend(loc='center right', bbox_to_anchor=(1.25, 0.5),
ncol=1, fancybox=True, shadow=True);
## NORMALIZED GAUSSIAN
# rescale
x_sum = x_cold_gauss + x_mild_gauss + \
x_warm_gauss + x_hot_gauss
x_cold_rescale = x_cold_gauss / x_sum
x_mild_rescale = x_mild_gauss / x_sum
x_warm_rescale = x_warm_gauss / x_sum
x_hot_rescale = x_hot_gauss / x_sum
# Plot the results of the rescaled gaussian fuzzy membership
plt.figure()
plt.plot(x, x_cold_rescale, 'b', linewidth=1.5, label='Cold')
plt.plot(x, x_mild_rescale, 'k', linewidth=1.5, label='Mild')
plt.plot(x, x_warm_rescale, 'm', linewidth=1.5, label='Warm')
plt.plot(x, x_hot_rescale, 'r', linewidth=1.5, label='Hot')
plt.title('Temperature, Rescaled Gaussian Fuzzy')
plt.ylabel('Membership')
plt.xlabel('Temperature')
plt.legend(loc='center right', bbox_to_anchor=(1.25, 0.5),
ncol=1, fancybox=True, shadow=True);
# Plot to confirm the rescaled sum is now 1 everywhere
x_sum = x_cold_rescale + x_mild_rescale + \
x_warm_rescale + x_hot_rescale
plt.figure()
plt.plot(x, x_sum, 'y', linewidth=1.5, label='Total')
plt.title('Temperature, Rescaled Gaussian Fuzzy Sum')
plt.ylabel('Membership')
plt.xlabel('Temperature')
plt.legend(loc='center right', bbox_to_anchor=(1.25, 0.5),
ncol=1, fancybox=True, shadow=True);
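The same normalization can be verified numerically instead of visually — a self-contained sketch that re-implements the Gaussian membership (the formula below is the standard one, which to the best of my knowledge matches skfuzzy's `gaussmf`):

```python
import numpy as np

def gaussmf(x, mean, sigma):
    """Gaussian membership function: exp(-(x - mean)^2 / (2 * sigma^2))."""
    return np.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

x = np.arange(30, 100, 0.1)
memberships = [gaussmf(x, 30, 8), gaussmf(x, 50, 8),
               gaussmf(x, 70, 12), gaussmf(x, 100, 8)]

# Rescale each membership by the pointwise total, as in the notebook
total = sum(memberships)
rescaled = [m / total for m in memberships]

sums_to_one = np.allclose(sum(rescaled), 1.0)
```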
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As can be seen in the figure above, each state ("cold", "mild", "warm", and "hot") has a membership value defined at all temperatures between 30 and 100 degrees (F). This was accomplished easily by defining the following triangular functions for each state
Step2: Looking at the figure above, values tend to be more heavily concentrated around the "full" membership values, dropping off more steeply than the triangular membership function. This might be preferential behavior, depending on the system that's being modeled.
Step3: As suspected, the membership function does not sum to 100% at each temperature value. There may be situations where this doesn't matter. For instance, if you're using fuzzy logic to determine the most likely descriptor at each temperature value, that the membership doesn't need to sum to 1, since only the maximum value is required. Generally, though, if you are going to be treating the membership function as a probability or using it in a summation somewhere down the line, you want to normalize your membership functions so that they always equal 1.
Step4: After normalization, notice that the membership functions no longer look Gaussian. Let's make sure that the membership does sum to 1 at each temperature
|
7,023
|
<ASSISTANT_TASK:>
Python Code:
# Import libraries
import numpy as np
import matplotlib.pyplot as plt
import math
from sklearn.metrics import accuracy_score
import pickle
import sys
# Load data
with open('./data/pickled/xtrain.pickle', 'rb') as f:
xtrain = pickle.load(f)
with open('./data/pickled/ytrain.pickle', 'rb') as f:
ytrain = pickle.load(f)
with open('./data/pickled/xtest.pickle', 'rb') as f:
xtest = pickle.load(f)
with open('./data/pickled/ytest.pickle', 'rb') as f:
ytest = pickle.load(f)
with open('./data/pickled/xval.pickle', 'rb') as f:
xval = pickle.load(f)
with open('./data/pickled/yval.pickle', 'rb') as f:
yval = pickle.load(f)
# Sigmoid function
def sigmoid(z):
return 1.0 / (1.0 + np.exp(-z))
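The backpropagation step below uses the identity σ'(z) = σ(z)(1 − σ(z)); a quick numerical sanity check of that identity (a sketch, not part of the notebook):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-5, 5, 11)
s = sigmoid(z)
analytic = s * (1 - s)           # the form used in the delta computation

# Central-difference approximation of the derivative
h = 1e-5
numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)

match = np.allclose(analytic, numeric, atol=1e-8)
```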
def nn(weights,x,y):
### Initialization
n = len(x)
activations = [np.array(0) for i in range(n_layers)]
activations[0] = x
deltas = [np.array(0) for i in range(n_layers-1)]
bias = np.ones((n,1))
### Forward propagation
for w,l in zip(weights,range(1,n_layers)):
inputs = np.concatenate((bias,activations[l-1]),axis=1)
activations[l] = sigmoid(np.dot(inputs,w.T))
### Output error
deltas[-1] = activations[-1] - y
### Back propagation
for l in range(2,n_layers):
deltas[-l] = np.dot(deltas[-(l-1)],weights[-(l-1)][:,1:]) * activations[-l]*(1-activations[-l])
# Update the weights / biases
for w,l in zip(weights,range(len(layers)-1,0,-1)):
w[:,1:] = w[:,1:] * (1-rate*Lambda/n) - np.dot(deltas[-l].T,activations[-(l+1)])*rate/n
        w[:,:1] -= np.sum(deltas[-l],axis=0).reshape(-1,1)*rate/n # sum over the batch axis: one bias update per unit
return weights
def predict(weights,x,y):
### Initialization
n = len(x)
activations = [np.array(0) for i in range(n_layers)]
activations[0] = x
bias = np.ones((n,1))
### Forward propagation
for w,l in zip(weights,range(1,n_layers)):
inputs = np.concatenate((bias,activations[l-1]),axis=1)
activations[l] = sigmoid(np.dot(inputs,w.T))
# Cost function: regularized cross entropy
C = np.sum(np.nan_to_num(-y*np.log(activations[-1]) - (1-y)*(np.log(1-activations[-1]))))/n
ws_sum_squares = 0
for l in range(n_layers-1):
ws_sum_squares += np.sum(weights[l][:,1:]**2)
C += ((Lambda/(2*n))) * ws_sum_squares # Add regularization to the cost function
return np.argmax(activations[-1],axis=1),C
def weight_init(L_in,L_out):
np.random.seed(13) # This makes testing consistent.
return np.random.normal(scale=1/np.sqrt(L_in), size=(L_out,L_in+1))
def SGD(x,y,monitor_cost,monitor_train_acc,monitor_test_acc):
# Make list of weights arrays
weights = [np.array(0) for i in range(len(layers)-1)]
for l in range(len(layers)-1):
weights[l] = weight_init(layers[l],layers[l+1]) #[layers-1,[L_in+1,Lout]]
def shuffle(x,y):
state = np.random.get_state()
np.random.shuffle(x)
np.random.set_state(state)
np.random.shuffle(y)
return x,y
costs, test_acc, train_acc = [],[],[]
for j in range(epochs):
# Shuffle the data
x,y = shuffle(x,y)
        # Separate x,y mini-batches
mini_x = [x[k:k+minibatchsize] for k in range(0,len(x),minibatchsize)]
mini_y = [y[k:k+minibatchsize] for k in range(0,len(y),minibatchsize)]
# Iterate through pairs of mini-batches, calling nn() on each pair
for x_mini,y_mini in zip(mini_x,mini_y):
weights = nn(weights,x_mini,y_mini)
# If statements for monitoring. This ensures the predict() function isn't called unnecessarily
if monitor_cost | monitor_train_acc:
ypred, C = predict(weights,x,y)
if monitor_cost:
costs.append(C)
if monitor_train_acc:
train_acc.append(accuracy_score(np.argmax(y,axis=1),ypred))
if monitor_test_acc:
test_acc.append(accuracy_score(np.argmax(ytest,axis=1),predict(weights,xtest,ytest)[0]))
# Write progress monitor
progress = (j+1)/(epochs)*100.0
bar = 20
hashes = '#'*(int(round(progress/100*bar)))
spaces = ' '*(bar-len(hashes))
sys.stdout.write('\r[{0}] {1}%'.format(hashes + spaces, round(progress,2)))
return weights,costs,train_acc,test_acc
# Model parameters
m = int(xtrain.shape[1]) # Number of features in each example
layers = [m, 100, 10]
n_layers = len(layers)
# Learning parameters
Lambda = 0.01
epochs = 40
minibatchsize = 50
rate = 0.3
# Train the model
weights, costs, train_acc, test_acc = SGD(xtrain,ytrain,True,True,True)
# Plot the results
# Note: don't bother calling unless the monitor parameters are set...
plot()
accuracy_score(np.argmax(yval,axis=1),predict(weights,xval,yval)[0])
def plot():# Visualize the cost and accuracy
fig = plt.figure(figsize=(12, 12))
ax = fig.add_subplot(221)
ax.plot(np.arange(epochs), costs, "-")
ax.set_xlim([0, epochs])
ax.set_xlabel('Epoch')
ax.set_ylabel('Cost')
ax.set_title('Cost over epochs')
ax = fig.add_subplot(222)
ax.plot(np.arange(epochs), train_acc, "-",color='blue',label="Training data, final acc: "+str(train_acc[-1]))
ax.plot(np.arange(epochs), test_acc, "-",color='orange',label="Testing data, final acc: "+str(test_acc[-1]))
ax.set_xlim([0, epochs])
ax.set_xlabel('Epoch')
ax.set_ylabel('Accuracy')
plt.legend(loc='lower right')
ax.set_title('Accuracy over epochs')
plt.show()
# Visualize the data
def drawplot(draw,x,y):
if draw:
n = x.shape[0]
idx = np.random.randint(0,n,size=100) # Make an array of random integers between 0 and n
fig, ax = plt.subplots(10, 10) # make the plots
        img_size = math.sqrt(m) # Specify the image size (in this case sqrt(m) = 28)
for i in range(10):
for j in range(10):
Xi = x[idx[i*10+j],:].reshape(int(img_size), int(img_size)) # get each example and resize
ax[i,j].set_axis_off() # Turns off the axes for all the subplots for clarity
ax[i,j].imshow(Xi, aspect='auto',cmap='gray') # plots the current image in the correct position
plt.show()
drawplot(True,xtrain,ytrain)
# Interactive printer function
def printer(x,y,weights):
idx = np.random.randint(len(x),size=1)
img_size = int(math.sqrt(m))
xi = x[idx,:].reshape(img_size,img_size)
yi = predict(weights,x[idx,:],y[idx,:])[0]
plt.title('The predicted value is %i\n The true value is %i' %(yi,np.argmax(y[idx,:],axis=1)))
plt.imshow(xi, aspect='auto',cmap='gray')
plt.axis('off')
plt.show()
# Running this cell will draw a single image
# The predicted and real value for y is printed above
printer(xtest,ytest,weights)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, we load the data. For details, please see the accompanying notebook MNIST-loader.ipynb for details.
Step2: Now let's define some useful functions for the neural network to use. First is the sigmoid activation function
Step3: The neural network
Step4: Next is the predict function. This function takes the learned weights and performs forward propagation through the network using the x values supplied in the arguments. The effect of this is essentially to predict the output class of the given data using the weights that have been learned. We also calculate the cost here, because the actual cost value (and its calculation) is only necessary if monitoring is set to True. Note
Step5: We initialize theta with a set of random weights with a standard deviation of $ 1/\sqrt{n} $
Step6: Stochastic gradient descent
Step7: Finally, we train the model
Step8: Visualizing cost and accuracy as a function of epochs
Step9: Visualizing the handwritten numbers
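Two of the steps above — the 1/sqrt(n) weight initialization and the mini-batch splitting inside stochastic gradient descent — can be sketched on their own. The layer sizes and batch size here are illustrative assumptions:

```python
import numpy as np

def weight_init(L_in, L_out):
    # weights drawn with standard deviation 1/sqrt(L_in); the extra column holds the bias
    return np.random.normal(scale=1 / np.sqrt(L_in), size=(L_out, L_in + 1))

np.random.seed(13)
w = weight_init(784, 100)
assert w.shape == (100, 785)

# split (already shuffled) data into fixed-size mini-batches, as in SGD()
x = np.arange(200).reshape(100, 2)
minibatchsize = 30
mini_x = [x[k:k + minibatchsize] for k in range(0, len(x), minibatchsize)]
assert [len(b) for b in mini_x] == [30, 30, 30, 10]
```

Note that the last mini-batch is smaller when the dataset size is not a multiple of the batch size, which the training loop handles because every update is scaled by the actual batch length.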
|
7,024
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
from thunder import Colorize
image = Colorize.image
tile = Colorize.tile
sns.set_style('darkgrid')
sns.set_context('notebook')
data = tsc.loadExample('mouse-images')
data
from numpy import random
from scipy.ndimage.filters import gaussian_filter
t = 500
dx = gaussian_filter(random.randn(t), 50) * 25
dy = gaussian_filter(random.randn(t), 50) * 25
plt.plot(dx);
plt.plot(dy);
from scipy.ndimage import shift
shifted = data.apply(lambda (k, v): (k, shift(v, (dx[k], dy[k]), mode='nearest', order=0)))
im1 = data[0]
im2 = shifted[0]
tile([im1, im2, im1-im2], clim=[(0,300), (0,300), (-300,300)], grid=(1,3), size=14)
tile([data.mean(), shifted.mean()], size=14)
from thunder import Registration
reg = Registration('crosscorr')
reg.prepare(shifted, startIdx=0, stopIdx=500);
image(reg.reference)
ref = shifted.filterOnKeys(lambda k: k > 0 and k < 500).mean()
reg.prepare(ref)
image(reg.reference)
model = reg.fit(shifted)
model
model[0]
clrs = sns.color_palette('deep')
plt.plot(model.toArray()[:,0], color=clrs[0])
plt.plot(dx, color=clrs[0])
plt.plot(model.toArray()[:,1], color=clrs[1])
plt.plot(dy, color=clrs[1]);
reg.prepare(data, startIdx=0, stopIdx=500)
model = reg.fit(shifted)
plt.plot(model.toArray()[:,0], color=clrs[0])
plt.plot(dx, color=clrs[0])
plt.plot(model.toArray()[:,1], color=clrs[1])
plt.plot(dy, color=clrs[1]);
corrected = model.transform(shifted)
im1 = data[0]
im2 = corrected[0]
tile([im1, im2, im1-im2], clim=[(0,300), (0,300), (-300,300)], grid=(1,3), size=14)
tile([shifted.mean(), corrected.mean()], size=14)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generating data
Step2: There are 500 images (corresponding to 500 time points), and the data are two-dimensional, so we'll want to generate 500 random shifts in x and y. We'll use smoothing functions from scipy to make sure the drift varies slowly over time, which will be easier to look at.
Step3: Now let's use these drifts to shift the data. We'll use the apply method on our data, which applies an arbitrary function to each record; in this case, the function is to shift by an amount given by the corresponding entry in our list of shifts.
Step4: Look at the first entry of both the original images and the shifted images, and their difference
Step5: It's also useful to look at the mean of the raw images and the shifted images, the mean of the shifted images should be much more blurry!
Step6: Registration
Step7: This method computes a cross-correlation in parallel between every image and a reference. To compute that reference, we can use the prepare method, and either give it a reference, or have it compute one for us. For this method, the default prepare is to compute a mean, over some specified range. We call
Step8: This adds a reference attribute to the reg object, which we can look at
Step9: We could have equivalently computed the reference ourselves (using the mean, or any other calculation) and passed it as an argument
Step10: Now we use the registration method reg and fit it to the shifted data, returning a fitted RegistrationModel
Step11: Inspect the model
Step12: The model represents a list of transformations. You can inspect them
Step13: You can also convert the full collection of transformations into an array, which is useful for plotting. Here we'll plot the estimated transformations relative to the ground truth, they should be fairly similar.
Step14: Note that, while following a similar pattern to the ground truth, the estimates are not perfect. That's because we didn't use the true reference to estimate the displacements, but rather the mean of the displaced data. To see that we get the exact displacements back, let's compute a reference from the original, unshifted data.
Step15: Now the estimates should be exact (up to rounding error)! But note that this is sort of cheating, because in general we don't know the ground truth.
Step16: We can now use our model to transform a set of images, which applies the estimated transformations. The API design makes it easy to apply the transformations to the dataset we used to estimate the transformations, or a different one. We'll use the model we just estimates, which used the true reference, because it will be easy to see that it did the right thing.
Step17: Let's again look at the first image from the original and corrected, and their difference. Whereas before they were different, now they should be the same, except for minor differences near the boundaries (where the image has been replaced with its nearest neighbors).
Step18: As a final check on the registration, we can compare the mean of the shifted data, and the mean of the registered data. The latter should be much sharper.
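The cross-correlation idea behind the 'crosscorr' method can be sketched without thunder using plain NumPy FFTs. This is a hedged sketch under simplifying assumptions (a periodic shift via np.roll and no subpixel handling); thunder's actual implementation differs in such details:

```python
import numpy as np

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
# displace the reference by (dy, dx) = (3, -5); np.roll keeps the image periodic
moved = np.roll(np.roll(ref, 3, axis=0), -5, axis=1)

# cross-correlation via FFT; the location of the peak is the displacement
xc = np.fft.ifft2(np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))).real
dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
# indices past N/2 wrap around to negative shifts
dy = dy - xc.shape[0] if dy > xc.shape[0] // 2 else dy
dx = dx - xc.shape[1] if dx > xc.shape[1] // 2 else dx
assert (dy, dx) == (3, -5)
```

The registration model then simply applies the inverse of each estimated displacement to undo the drift.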
|
7,025
|
<ASSISTANT_TASK:>
Python Code:
from scipy import stats
import pandas as pd
import numpy as np
LETTERS = list('ABCDEFGHIJKLMNOPQRSTUVWXYZ')
df = pd.DataFrame({'NUM1': np.random.randn(50)*100,
'NUM2': np.random.uniform(0,1,50),
'NUM3': np.random.randint(100, size=50),
'CAT1': ["".join(np.random.choice(LETTERS,1)) for _ in range(50)],
'CAT2': ["".join(np.random.choice(['pandas', 'r', 'julia', 'sas', 'stata', 'spss'],1)) for _ in range(50)],
'CAT3': ["".join(np.random.choice(['postgres', 'mysql', 'sqlite', 'oracle', 'sql server', 'db2'],1)) for _ in range(50)]
})
df = df[(np.abs(stats.zscore(df.select_dtypes(exclude='object'))) < 3).all(axis=1)]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
7,026
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
xls = pd.ExcelFile(r'C:\Users\jenng\Documents\texaspse-blog\media\f16-scientific-python\week2\myExcelData.xls')
temp_table = xls.parse('Temperature')
liquid_flow_table = xls.parse('Liquid Flow')
temp_table
temp_table.head()
temp_table.head(7)
list(liquid_flow_table.columns.values)
liquid_flow_table = liquid_flow_table.rename(columns={'Primary Std, Avg mL/s': 'Primary',
'Ultrasonic, mL/S': 'Ultrasonic',})
liquid_flow_table.head() # Great! It worked like we wanted.
liquid_flow_table['mySum'] = liquid_flow_table.Primary + liquid_flow_table.Ultrasonic
liquid_flow_table.head() #check my table to see if it did add
liquid_flow_table['myMultiplication'] = liquid_flow_table['Primary'] * liquid_flow_table['Ultrasonic']
liquid_flow_table.head() #check my table to see if it did multiply
liquid_flow_table['myPower'] = liquid_flow_table.Primary ** liquid_flow_table.Ultrasonic
liquid_flow_table.head() #check my table to see if it did multiply
liquid_flow_table['CumulativeSum'] = liquid_flow_table.Ultrasonic.cumsum()
liquid_flow_table
liquid_flow_table['CumulativeSum'].max() #looks right
liquid_flow_table['myDivision'] = liquid_flow_table['CumulativeSum']/liquid_flow_table['CumulativeSum'].max()
liquid_flow_table.head()
liquid_flow_table.sort('Primary', ascending=True, inplace=True)
liquid_flow_table.sort_values(by='Primary', ascending=True, inplace=True)
liquid_flow_table.head()
liquid_flow_table
liquid_flow_table.loc[liquid_flow_table['Ultrasonic'] > 40].head()
liquid_flow_table.loc[liquid_flow_table['Primary'].isin(39.62,60.64)].head()
liquid_flow_table.loc[liquid_flow_table['Primary'].isin([39.62,60.64])].head()
temp_table
import matplotlib.pyplot as plt
plt.plot([1,2,3,4])
plt.ylabel('some numbers')
plt.show()
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot([1,2,3,4])
plt.ylabel('some numbers')
plt.show()
# no need for the import statement again because it's already imported
plt.plot(temp_table['K-type TC, °C'])
plt.ylabel('some numbers')
plt.show()
plt.plot(temp_table['K-type TC, °C'])
plt.plot(temp_table['T-type TC, °C'])
plt.ylabel('some numbers')
plt.show()
plt.plot(temp_table['K-type TC, °C'])
plt.plot(temp_table['T-type TC, °C'])
plt.ylabel('Temperature, °C')
plt.xlabel('Index')
plt.title('Temperature vs. Index')
plt.legend()
plt.show()
plt.plot(temp_table['K-type TC, °C'],'-.',color='r',linewidth=3)
plt.plot(temp_table['T-type TC, °C'],'o-',color='c')
plt.ylabel('Temperature, °C')
plt.xlabel('Index')
plt.title('Temperature vs. Index')
plt.legend()
plt.show()
def plotMyData(x,y):
plt.plot(x, y,'-.',color='r',linewidth=3)
plt.xlabel(x.name)
plt.ylabel(y.name)
plt.title(y.name + " vs. " + x.name)
plt.legend()
plt.show()
liquid_flow_table.head()
plotMyData(liquid_flow_table.Primary,liquid_flow_table.mySum)
plotMyData(liquid_flow_table.Primary,liquid_flow_table.myMultiplication)
plotMyData(liquid_flow_table.Primary,liquid_flow_table.myPower)
plotMyData(liquid_flow_table.Primary,liquid_flow_table.CumulativeSum)
plotMyData(liquid_flow_table.Primary,liquid_flow_table.myDivision)
liquid_table_columns_list = list(liquid_flow_table.columns.values)
liquid_table_columns_list
for column in liquid_table_columns_list:
    plotMyData(liquid_flow_table.Primary, liquid_flow_table[column])
%%timeit
xls = pd.read_csv(r'C:\Users\jenng\Documents\texaspse-blog\media\f16-scientific-python\week2\GASISData.csv')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's read in some data. Our file is an .xls file and it has 2 sheets. I don't know how to do that, so I will google "pandas read in excel file multiple sheets" and see what happens.
Step2: Let's take a look at the "tables" we read in. By the way, in Python pandas terms, the data structure of the data we read in is called a DataFrame. This is the reason why when you search for answers online, a lot of people will refer to their variable names as df.
Step3: That's beautiful. Now pandas has a nice function called head() that summarizes our data. Right now our data only has 2 columns and maybe 10+ rows. But head() is going to be super useful when we have a giant table of 1000 rows x 1000 columns. This is how you use it.
Step4: If you search up the documentation, you will see we can pass in an argument of how many rows we want to show. Let's see if we can show the first 7 rows.
Step5: That works! Let's now do some fun stuff with pandas. I want to rename the two columns in liquid_flow_table to "Primary" and "Ultrasonic". Let's take a look at the current column names first. As always, I googled "list of column names" and found this Stackoverflow answer.
Step6: Cool. Let's rename it now. Again, I googled "pandas rename column" and found http
Step7: Let's check it now
Step8: Let's say I, for some reasons, want to add the primary + ultrasonic together and create a new column called mySum. Again, googled for "pandas add column together" and found http
Step9: Let's say now I want to create a new column that is the cumulative sum of Ultrasonic.
Step10: Now let's say I want to divide this CumulativeSum by the maximum of that Cumulative Sum column. I'm going to call the new column something arbitrary, "myDivision".
Step11: Now let's say I want to sort the table based on the Primary column from smallest to largest.
Step12: Apparently the sort() we are trying to use is deprecreted, I don't know what that quite mean but I'm gonna do what it says and use sort_values() instead
Step13: Now let's do some filtering. I want my table to only shows Primary values that are greater than 40. Let's first take a look at the original table before we move on.
Step14: http
Step15: That stackoverflow answer brought up a really nice option. That is, how can I select rows where a column matches only certain values? For example, I want the table to only show rows where my Primary is either 39.62 or 60.64.
Step16: That didn't work, so I searched "pandas isin()" and found http
Step17: Great! That sounds like most of the things I want to do. Now I want to try somethinga litle bit different and actually plot them out. I'm going back to my temp_table and plot the K-type and T-Type. Let's take a look at the temp_table again before we start.
Step18: Now I frankly don't know where to start so I googled "how to plot with python". The first result that came up was http
Step19: Now what if I don't want the figure to come up every time? Is there a way for Jupyter notebook to display the images within the notebook I'm working on?
Step20: Now I'm going to run that again and see what happens.
Step21: Perfect! Let's try now to plot our data.
Step22: But now I also want the other column in the same plot! No problem. Just add another line.
Step23: Awesome! Now I want to add labels for the x-axis and y-axis, a figure title, and a legend.
Step24: Sounds great! Now let's go a little bit further and decorate our plots. With some googling, "matplotlib change line styles" gave me http
Step25: Awesome. Let's say we have to plot the same thing over and over again for 10 different columns. That's going to be very annoying to do in Excel. So let's see if we can write up some function and "automate" this for us. I'm going to copy&paste all the code above and put it in a function.
Step26: Now I'm going plot several columns in the liquid_flow_table (because there are many columns). Again, let's take a look at the table again.
Step27: So 5 different plots, all in 5 lines! Imagine doing this in Excel! And in fact, programming is so awesome that you don't even have to copy & paste and do all that work.
Step28: So if you're still not sold on this, I don't know what will convince you
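The column arithmetic, cumulative sum, and isin() filtering walked through above can be condensed into one small sketch. The numbers here are made up for illustration, not read from the spreadsheet:

```python
import pandas as pd

df = pd.DataFrame({'Primary': [39.62, 45.10, 60.64],
                   'Ultrasonic': [40.00, 45.00, 61.00]})

df['mySum'] = df['Primary'] + df['Ultrasonic']       # element-wise sum
df['CumulativeSum'] = df['Ultrasonic'].cumsum()      # running total
df['myDivision'] = df['CumulativeSum'] / df['CumulativeSum'].max()

# keep only the rows whose Primary value is in the given list
subset = df.loc[df['Primary'].isin([39.62, 60.64])]
assert len(subset) == 2
```

Because myDivision divides by the column's own maximum, its last entry is always 1.0 — a quick sanity check after the cumulative sum.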
|
7,027
|
<ASSISTANT_TASK:>
Python Code:
import os, re, math, json, shutil, pprint, datetime
import PIL.Image, PIL.ImageFont, PIL.ImageDraw
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow.python.platform import tf_logging
print("Tensorflow version " + tf.__version__)
BATCH_SIZE = 128 #@param {type:"integer"}
BUCKET = 'gs://' #@param {type:"string"}
assert re.search(r'gs://.+', BUCKET), 'You need a GCS bucket for your Tensorboard logs. Head to http://console.cloud.google.com/storage and create one.'
training_images_file = 'gs://mnist-public/train-images-idx3-ubyte'
training_labels_file = 'gs://mnist-public/train-labels-idx1-ubyte'
validation_images_file = 'gs://mnist-public/t10k-images-idx3-ubyte'
validation_labels_file = 'gs://mnist-public/t10k-labels-idx1-ubyte'
# backend identification
IS_COLAB_BACKEND = 'COLAB_GPU' in os.environ # this is always set on Colab, the value is 0 or 1 depending on GPU presence
# Auth on Colab
# Little wrinkle: without auth, Colab will be extremely slow in accessing data from a GCS bucket, even public
if IS_COLAB_BACKEND:
from google.colab import auth
auth.authenticate_user()
#@title visualization utilities [RUN ME]
"""
This cell contains helper functions used for visualization
and downloads only. You can skip reading it. There is very
little useful Keras/Tensorflow code here.
"""
# Matplotlib config
plt.rc('image', cmap='gray_r')
plt.rc('grid', linewidth=0)
plt.rc('xtick', top=False, bottom=False, labelsize='large')
plt.rc('ytick', left=False, right=False, labelsize='large')
plt.rc('axes', facecolor='F8F8F8', titlesize="large", edgecolor='white')
plt.rc('text', color='a8151a')
plt.rc('figure', facecolor='F0F0F0')
# Matplotlib fonts
MATPLOTLIB_FONT_DIR = os.path.join(os.path.dirname(plt.__file__), "mpl-data/fonts/ttf")
# pull a batch from the datasets. This code is not very nice, it gets much better in eager mode (TODO)
def dataset_to_numpy_util(training_dataset, validation_dataset, N):
# get one batch from each: 10000 validation digits, N training digits
unbatched_train_ds = training_dataset.apply(tf.data.experimental.unbatch())
v_images, v_labels = validation_dataset.make_one_shot_iterator().get_next()
t_images, t_labels = unbatched_train_ds.batch(N).make_one_shot_iterator().get_next()
# Run once, get one batch. Session.run returns numpy results
with tf.Session() as ses:
(validation_digits, validation_labels,
training_digits, training_labels) = ses.run([v_images, v_labels, t_images, t_labels])
# these were one-hot encoded in the dataset
validation_labels = np.argmax(validation_labels, axis=1)
training_labels = np.argmax(training_labels, axis=1)
return (training_digits, training_labels,
validation_digits, validation_labels)
# create digits from local fonts for testing
def create_digits_from_local_fonts(n):
font_labels = []
img = PIL.Image.new('LA', (28*n, 28), color = (0,255)) # format 'LA': black in channel 0, alpha in channel 1
font1 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'DejaVuSansMono-Oblique.ttf'), 25)
font2 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'STIXGeneral.ttf'), 25)
d = PIL.ImageDraw.Draw(img)
for i in range(n):
font_labels.append(i%10)
d.text((7+i*28,0 if i<10 else -4), str(i%10), fill=(255,255), font=font1 if i<10 else font2)
font_digits = np.array(img.getdata(), np.float32)[:,0] / 255.0 # black in channel 0, alpha in channel 1 (discarded)
font_digits = np.reshape(np.stack(np.split(np.reshape(font_digits, [28, 28*n]), n, axis=1), axis=0), [n, 28*28])
return font_digits, font_labels
# utility to display a row of digits with their predictions
def display_digits(digits, predictions, labels, title, n):
plt.figure(figsize=(13,3))
digits = np.reshape(digits, [n, 28, 28])
digits = np.swapaxes(digits, 0, 1)
digits = np.reshape(digits, [28, 28*n])
plt.yticks([])
plt.xticks([28*x+14 for x in range(n)], predictions)
for i,t in enumerate(plt.gca().xaxis.get_ticklabels()):
if predictions[i] != labels[i]: t.set_color('red') # bad predictions in red
plt.imshow(digits)
plt.grid(None)
plt.title(title)
# utility to display multiple rows of digits, sorted by unrecognized/recognized status
def display_top_unrecognized(digits, predictions, labels, n, lines):
idx = np.argsort(predictions==labels) # sort order: unrecognized first
for i in range(lines):
display_digits(digits[idx][i*n:(i+1)*n], predictions[idx][i*n:(i+1)*n], labels[idx][i*n:(i+1)*n],
"{} sample validation digits out of {} with bad predictions in red and sorted first".format(n*lines, len(digits)) if i==0 else "", n)
# utility to display training and validation curves
def display_training_curves(training, validation, title, subplot):
if subplot%10==1: # set up the subplots on the first call
plt.subplots(figsize=(10,10), facecolor='#F0F0F0')
plt.tight_layout()
ax = plt.subplot(subplot)
ax.grid(linewidth=1, color='white')
ax.plot(training)
ax.plot(validation)
ax.set_title('model '+ title)
ax.set_ylabel(title)
ax.set_xlabel('epoch')
ax.legend(['train', 'valid.'])
def read_label(tf_bytestring):
label = tf.decode_raw(tf_bytestring, tf.uint8)
label = tf.reshape(label, [])
label = tf.one_hot(label, 10)
return label
def read_image(tf_bytestring):
image = tf.decode_raw(tf_bytestring, tf.uint8)
image = tf.cast(image, tf.float32)/256.0
image = tf.reshape(image, [28*28])
return image
def load_dataset(image_file, label_file):
imagedataset = tf.data.FixedLengthRecordDataset(image_file, 28*28, header_bytes=16)
imagedataset = imagedataset.map(read_image, num_parallel_calls=16)
labelsdataset = tf.data.FixedLengthRecordDataset(label_file, 1, header_bytes=8)
labelsdataset = labelsdataset.map(read_label, num_parallel_calls=16)
dataset = tf.data.Dataset.zip((imagedataset, labelsdataset))
return dataset
def get_training_dataset(image_file, label_file, batch_size):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.shuffle(5000, reshuffle_each_iteration=True)
dataset = dataset.repeat() # Mandatory for Keras for now
dataset = dataset.batch(batch_size, drop_remainder=True) # drop_remainder is important on TPU, batch size must be fixed
dataset = dataset.prefetch(-1) # prefetch next batch while training (-1: autotune prefetch buffer size)
return dataset
def get_validation_dataset(image_file, label_file):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.batch(10000, drop_remainder=True) # 10000 items in eval dataset, all in one batch
dataset = dataset.repeat() # Mandatory for Keras for now
return dataset
# instantiate the datasets
training_dataset = get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
validation_dataset = get_validation_dataset(validation_images_file, validation_labels_file)
# In Estimator, we will need a function that returns the dataset
training_input_fn = lambda: get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
validation_input_fn = lambda: get_validation_dataset(validation_images_file, validation_labels_file)
N = 24
(training_digits, training_labels,
validation_digits, validation_labels) = dataset_to_numpy_util(training_dataset, validation_dataset, N)
display_digits(training_digits, training_labels, training_labels, "training digits and their labels", N)
display_digits(validation_digits[:N], validation_labels[:N], validation_labels[:N], "validation digits and their labels", N)
font_digits, font_labels = create_digits_from_local_fonts(N)
def model_fn(features, labels, mode):
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
x = features
y = tf.reshape(x, [-1, 28, 28, 1]) # input images are 28x28 pixels, greyscale (-1 is for variable batch size)
###
# YOUR LAYERS HERE
# LAYERS YOU CAN TRY:
# y = tf.layers.Conv2D(filters=6, kernel_size=3, padding='same', strides=1, activation="relu")(y)
# y = tf.layers.Dense(200, activation="relu")(y)
# y = tf.layers.MaxPooling2D(pool_size=2)(y)
# y = tf.layers.Dropout(0.3)(y, training=is_training)
#
y = tf.layers.Flatten()(y)
y = tf.layers.Dense(200, activation='relu')(y)
###
logits = tf.layers.Dense(10)(y)
predictions = tf.nn.softmax(logits)
classes = tf.math.argmax(predictions, axis=-1)
if (mode != tf.estimator.ModeKeys.PREDICT):
loss = tf.losses.softmax_cross_entropy(labels, logits)
step = tf.train.get_or_create_global_step()
###
# YOUR LEARNING RATE SCHEDULE HERE
#
lr = 0.005
# lr = 0.0001 + tf.train.exponential_decay(0.005, step, 3000, 1/math.e)
# tf.summary.scalar("learn_rate", lr) # you can visualize it in Tensorboard
###
optimizer = tf.train.AdamOptimizer(lr)
# little wrinkle: batch norm uses running averages which need updating after each batch. create_train_op does it, optimizer.minimize does not.
train_op = tf.contrib.training.create_train_op(loss, optimizer)
#train_op = optimizer.minimize(loss, tf.train.get_or_create_global_step())
metrics = {'accuracy': tf.metrics.accuracy(classes, tf.math.argmax(labels, axis=-1))}
else:
loss = train_op = metrics = None # None of these can be computed in prediction mode because labels are not available
return tf.estimator.EstimatorSpec(
mode=mode,
predictions={"predictions": predictions, "classes": classes}, # name these fields as you like
loss=loss,
train_op=train_op,
eval_metric_ops=metrics
)
# little wrinkle: tf.keras.layers can normally be used in an Estimator but tf.keras.layers.BatchNormalization does not work
# in an Estimator environment. Using TF layers everywhere for consistency. tf.layers and tf.ketas.layers are carbon copies of each other.
# Called once when the model is saved. This function produces a Tensorflow
# graph of operations that will be prepended to your model graph. When
# your model is deployed as a REST API, the API receives data in JSON format,
# parses it into Tensors, then sends the tensors to the input graph generated by
# this function. The graph can transform the data so it can be sent into your
# model input_fn. You can do anything you want here as long as you do it with
# tf.* functions that produce a graph of operations.
def serving_input_fn():
# placeholder for the data received by the API (already parsed, no JSON decoding necessary,
# but the JSON must contain one or multiple 'image' key(s) with 28x28 greyscale images as content.)
inputs = {"serving_input": tf.placeholder(tf.float32, [None, 28, 28])} # the shape of this dict should match the shape of your JSON
features = inputs['serving_input'] # no transformation needed
return tf.estimator.export.TensorServingInputReceiver(features, inputs) # features are the features needed by your model_fn
# Return a ServingInputReceiver if your features are a dictionary of Tensors, TensorServingInputReceiver if they are a straight Tensor
EPOCHS = 4
steps_per_epoch = 60000 // BATCH_SIZE # 60,000 images in training dataset
MODEL_EXPORT_NAME = "mnist" # name for exporting saved model
tf_logging.set_verbosity(tf_logging.INFO)
now = datetime.datetime.now() # create a model dir for each run
MODEL_DIR = BUCKET+"/mnistjobs/job" + "-{}-{:02d}-{:02d}-{:02d}:{:02d}:{:02d}".format(now.year, now.month, now.day, now.hour, now.minute, now.second)
training_config = tf.estimator.RunConfig(model_dir=MODEL_DIR, save_summary_steps=30, save_checkpoints_steps=steps_per_epoch, log_step_count_steps=steps_per_epoch/4)
export_latest = tf.estimator.LatestExporter(MODEL_EXPORT_NAME, serving_input_receiver_fn=serving_input_fn)
estimator = tf.estimator.Estimator(model_fn=model_fn, config=training_config)
train_spec = tf.estimator.TrainSpec(training_input_fn, max_steps=EPOCHS*steps_per_epoch)
eval_spec = tf.estimator.EvalSpec(validation_input_fn, steps=1, exporters=export_latest, throttle_secs=0) # no eval throttling: evaluates after each checkpoint
final_metrics = tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
tf_logging.set_verbosity(tf_logging.WARN)
print(final_metrics)
# recognize digits from local fonts
predictions = estimator.predict(lambda: tf.data.Dataset.from_tensor_slices(font_digits).batch(N),
yield_single_examples=False) # the returned value is a generator that will yield one batch of predictions per next() call
predicted_font_classes = next(predictions)['classes']
display_digits(font_digits, predicted_font_classes, font_labels, "predictions from local fonts (bad predictions in red)", N)
# recognize validation digits
predictions = estimator.predict(validation_input_fn,
yield_single_examples=False) # the returned value is a generator that will yield one batch of predictions per next() call
predicted_labels = next(predictions)['classes']
display_top_unrecognized(validation_digits, predicted_labels, validation_labels, N, 7)
PROJECT = "" #@param {type:"string"}
NEW_MODEL = True #@param {type:"boolean"}
MODEL_NAME = "estimator_mnist" #@param {type:"string"}
MODEL_VERSION = "v0" #@param {type:"string"}
assert PROJECT, 'For this part, you need a GCP project. Head to http://console.cloud.google.com/ and create one.'
export_path = os.path.join(MODEL_DIR, 'export', MODEL_EXPORT_NAME)
last_export = sorted(tf.gfile.ListDirectory(export_path))[-1]
export_path = os.path.join(export_path, last_export)
print('Saved model directory found: ', export_path)
# Create the model
if NEW_MODEL:
!gcloud ml-engine models create {MODEL_NAME} --project={PROJECT} --regions=us-central1
# Create a version of this model (you can add --async at the end of the line to make this call non blocking)
# Additional config flags are available: https://cloud.google.com/ml-engine/reference/rest/v1/projects.models.versions
# You can also deploy a model that is stored locally by providing a --staging-bucket=... parameter
!echo "Deployment takes a couple of minutes. You can watch your deployment here: https://console.cloud.google.com/mlengine/models/{MODEL_NAME}"
!gcloud ml-engine versions create {MODEL_VERSION} --model={MODEL_NAME} --origin={export_path} --project={PROJECT} --runtime-version=1.10
# prepare digits to send to online prediction endpoint
digits = np.concatenate((font_digits, validation_digits[:100-N]))
labels = np.concatenate((font_labels, validation_labels[:100-N]))
with open("digits.json", "w") as f:
for digit in digits:
# the format for ML Engine online predictions is: one JSON object per line
data = json.dumps({"serving_input": digit.tolist()}) # "serving_input" because that is what you defined in your serving_input_fn: {"serving_input": tf.placeholder(tf.float32, [None, 28, 28])}
f.write(data+'\n')
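The file written above uses the "one JSON object per line" (JSON Lines) format that the online prediction endpoint expects, with the key matching the name declared in `serving_input_fn`. A minimal stand-alone sketch of that serialization, using made-up two-element arrays in place of 28x28 digit images:

```python
import json

# Serialize one JSON object per line; the "serving_input" key mirrors the
# placeholder name defined in serving_input_fn in the notebook above.
def to_json_lines(digits):
    lines = []
    for digit in digits:
        lines.append(json.dumps({"serving_input": digit}))
    return "\n".join(lines) + "\n"

sample = [[0.0, 1.0], [0.5, 0.25]]  # illustrative stand-ins for pixel arrays
payload = to_json_lines(sample)
print(payload)
```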
# Request online predictions from deployed model (REST API) using the "gcloud ml-engine" command line.
predictions = !gcloud ml-engine predict --model={MODEL_NAME} --json-instances digits.json --project={PROJECT} --version {MODEL_VERSION}
predictions = np.array([int(p.split('[')[0]) for p in predictions[1:]]) # first line is the name of the input layer: drop it, parse the rest
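The parsing one-liner above drops the header line and keeps the integer class printed before the bracketed probability list. A plain-Python sketch of the same parse, on illustrative output lines (the real header and probabilities will differ):

```python
# gcloud prints a header line naming the output tensors, then one line per
# instance of the form "<class>  [<probabilities>]"; the predicted class is
# the integer before the opening bracket.
raw_output = [
    "CLASSES  PROBABILITIES",     # header line (dropped)
    "5  [0.01, 0.02, 0.9]",
    "3  [0.7, 0.1, 0.2]",
]
parsed = [int(line.split('[')[0]) for line in raw_output[1:]]
print(parsed)
```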
display_top_unrecognized(digits, predictions, labels, N, 100//N)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Parameters
Step3: Colab-only auth
Step4: tf.data.Dataset
Step5: Let's have a look at the data
Step6: Estimator model [WORK REQUIRED]
Step7: Train and validate the model
Step8: Visualize predictions
Step9: Deploy the trained model to ML Engine
Step10: Deploy the model
Step11: Test the deployed model
|
7,028
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import tempfile
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf
tmpdir = tempfile.mkdtemp()
physical_devices = tf.config.list_physical_devices('GPU')
for device in physical_devices:
tf.config.experimental.set_memory_growth(device, True)
file = tf.keras.utils.get_file(
"grace_hopper.jpg",
"https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg")
img = tf.keras.utils.load_img(file, target_size=[224, 224])
plt.imshow(img)
plt.axis('off')
x = tf.keras.utils.img_to_array(img)
x = tf.keras.applications.mobilenet.preprocess_input(
x[tf.newaxis,...])
labels_path = tf.keras.utils.get_file(
'ImageNetLabels.txt',
'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
imagenet_labels = np.array(open(labels_path).read().splitlines())
pretrained_model = tf.keras.applications.MobileNet()
result_before_save = pretrained_model(x)
decoded = imagenet_labels[np.argsort(result_before_save)[0,::-1][:5]+1]
print("Result before saving:\n", decoded)
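The decoding above uses `np.argsort(...)[0, ::-1][:5]` to pick the indices of the five largest class scores (the `+1` offsets for the leading "background" entry in `ImageNetLabels.txt`). The same top-5 selection in plain Python, on illustrative scores:

```python
# Indices of the five largest scores, descending — equivalent to
# np.argsort(scores)[::-1][:5] for a 1-D array.
scores = [0.1, 0.05, 0.6, 0.2, 0.03, 0.02]
top5 = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:5]
print(top5)
```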
mobilenet_save_path = os.path.join(tmpdir, "mobilenet/1/")
tf.saved_model.save(pretrained_model, mobilenet_save_path)
loaded = tf.saved_model.load(mobilenet_save_path)
print(list(loaded.signatures.keys())) # ["serving_default"]
infer = loaded.signatures["serving_default"]
print(infer.structured_outputs)
labeling = infer(tf.constant(x))[pretrained_model.output_names[0]]
decoded = imagenet_labels[np.argsort(labeling)[0,::-1][:5]+1]
print("Result after saving and loading:\n", decoded)
!ls {mobilenet_save_path}
!saved_model_cli show --dir {mobilenet_save_path} --tag_set serve
!ls {mobilenet_save_path}/variables
class CustomModule(tf.Module):
def __init__(self):
super(CustomModule, self).__init__()
self.v = tf.Variable(1.)
@tf.function
def __call__(self, x):
print('Tracing with', x)
return x * self.v
@tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
def mutate(self, new_v):
self.v.assign(new_v)
module = CustomModule()
module_no_signatures_path = os.path.join(tmpdir, 'module_no_signatures')
module(tf.constant(0.))
print('Saving model...')
tf.saved_model.save(module, module_no_signatures_path)
imported = tf.saved_model.load(module_no_signatures_path)
assert imported(tf.constant(3.)).numpy() == 3
imported.mutate(tf.constant(2.))
assert imported(tf.constant(3.)).numpy() == 6
optimizer = tf.optimizers.SGD(0.05)
def train_step():
with tf.GradientTape() as tape:
loss = (10. - imported(tf.constant(2.))) ** 2
variables = tape.watched_variables()
grads = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(grads, variables))
return loss
for _ in range(10):
# "v" approaches 5, "loss" approaches 0
print("loss={:.2f} v={:.2f}".format(train_step(), imported.v.numpy()))
loaded = tf.saved_model.load(mobilenet_save_path)
print("MobileNet has {} trainable variables: {}, ...".format(
len(loaded.trainable_variables),
", ".join([v.name for v in loaded.trainable_variables[:5]])))
trainable_variable_ids = {id(v) for v in loaded.trainable_variables}
non_trainable_variables = [v for v in loaded.variables
if id(v) not in trainable_variable_ids]
print("MobileNet also has {} non-trainable variables: {}, ...".format(
len(non_trainable_variables),
", ".join([v.name for v in non_trainable_variables[:3]])))
assert len(imported.signatures) == 0
module_with_signature_path = os.path.join(tmpdir, 'module_with_signature')
call = module.__call__.get_concrete_function(tf.TensorSpec(None, tf.float32))
tf.saved_model.save(module, module_with_signature_path, signatures=call)
imported_with_signatures = tf.saved_model.load(module_with_signature_path)
list(imported_with_signatures.signatures.keys())
module_multiple_signatures_path = os.path.join(tmpdir, 'module_with_multiple_signatures')
signatures = {"serving_default": call,
"array_input": module.__call__.get_concrete_function(tf.TensorSpec([None], tf.float32))}
tf.saved_model.save(module, module_multiple_signatures_path, signatures=signatures)
imported_with_multiple_signatures = tf.saved_model.load(module_multiple_signatures_path)
list(imported_with_multiple_signatures.signatures.keys())
class CustomModuleWithOutputName(tf.Module):
def __init__(self):
super(CustomModuleWithOutputName, self).__init__()
self.v = tf.Variable(1.)
@tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
def __call__(self, x):
return {'custom_output_name': x * self.v}
module_output = CustomModuleWithOutputName()
call_output = module_output.__call__.get_concrete_function(tf.TensorSpec(None, tf.float32))
module_output_path = os.path.join(tmpdir, 'module_with_output_name')
tf.saved_model.save(module_output, module_output_path,
signatures={'serving_default': call_output})
imported_with_output_name = tf.saved_model.load(module_output_path)
imported_with_output_name.signatures['serving_default'].structured_outputs
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using the SavedModel format
Step2: You'll use an image of Grace Hopper as a running example, and a Keras pre-trained image classification model since it's easy to use. Custom models work too, and are covered in detail later.
Step3: The top prediction for this image is "military uniform".
Step4: The save-path follows a convention used by TensorFlow Serving where the last path component (1/ here) is a version number for your model - it allows tools like Tensorflow Serving to reason about the relative freshness.
Step5: Imported signatures always return dictionaries. To customize signature names and output dictionary keys, see Specifying signatures during export.
Step6: Running inference from the SavedModel gives the same result as the original model.
Step7: Running a SavedModel in TensorFlow Serving
Step8: The saved_model.pb file stores the actual TensorFlow program, or model, and a set of named signatures, each identifying a function that accepts tensor inputs and produces tensor outputs.
Step9: The variables directory contains a standard training checkpoint (see the guide to training checkpoints).
Step10: The assets directory contains files used by the TensorFlow graph, for example text files used to initialize vocabulary tables. It is unused in this example.
Step11: When you save a tf.Module, any tf.Variable attributes, tf.function-decorated methods, and tf.Modules found via recursive traversal are saved. (See the Checkpoint tutorial for more about this recursive traversal.) However, any Python attributes, functions, and data are lost. This means that when a tf.function is saved, no Python code is saved.
Step12: Loading and using a custom model
Step13: Because no Python code is saved, calling a tf.function with a new input signature will fail
Step14: General fine-tuning
Step15: Specifying signatures during export
Step16: To declare a serving signature, specify a ConcreteFunction using the signatures kwarg. When specifying a single signature, its signature key will be 'serving_default', which is saved as the constant tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY.
Step17: To export multiple signatures, pass a dictionary of signature keys to ConcreteFunctions. Each signature key corresponds to one ConcreteFunction.
Step18: By default, the output tensor names are fairly generic, like output_0. To control the names of outputs, modify your tf.function to return a dictionary that maps output names to outputs. The names of inputs are derived from the Python function arg names.
|
7,029
|
<ASSISTANT_TASK:>
Python Code:
from thermostate import State, Q_, units
from thermostate.plotting import IdealGas
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
substance = 'air'
p_1 = Q_(1.0, 'bar')
T_1 = Q_(300.0, 'K')
T_3 = Q_(1700.0, 'K')
p2_p1 = Q_(8.0, 'dimensionless')
p_low = Q_(2.0, 'dimensionless')
p_high = Q_(50.0, 'dimensionless')
st_1 = State(substance, T=T_1, p=p_1)
h_1 = st_1.h.to('kJ/kg')
s_1 = st_1.s.to('kJ/(kg*K)')
s_2 = s_1
p_2 = p_1*p2_p1
st_2 = State(substance, p=p_2, s=s_2)
h_2 = st_2.h.to('kJ/kg')
T_2 = st_2.T
p_3 = p_2
st_3 = State(substance, p=p_3, T=T_3)
h_3 = st_3.h.to('kJ/kg')
s_3 = st_3.s.to('kJ/(kg*K)')
s_4 = s_3
p_4 = p_1
st_4 = State(substance, p=p_4, s=s_4)
h_4 = st_4.h.to('kJ/kg')
T_4 = st_4.T
Brayton = IdealGas(substance, ('s', 'T'), ('v', 'p'))
Brayton.add_process(st_1, st_2, 'isentropic')
Brayton.add_process(st_2, st_3, 'isobaric')
Brayton.add_process(st_3, st_4, 'isentropic')
Brayton.add_process(st_4, st_1, 'isobaric')
W_c = h_1 - h_2
W_t = h_3 - h_4
W_net = W_c + W_t
Q_23 = h_3 - h_2
eta = W_net/Q_23
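As a sanity check on the efficiency computed above: under a cold-air-standard analysis (ideal gas with constant specific heats), the ideal Brayton cycle's thermal efficiency depends only on the pressure ratio, η = 1 − (p₂/p₁)^(−(γ−1)/γ). The thermostate calculation uses temperature-dependent properties, so the two values will differ somewhat:

```python
# Cold-air-standard Brayton efficiency for the pressure ratio used above.
gamma = 1.4          # cold-air value for air
pressure_ratio = 8.0
eta_cold_air = 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)
print(round(eta_cold_air, 3))
```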
p_range = np.linspace(p_low, p_high, 50)
eta_l = np.zeros(shape=p_range.shape) * units.dimensionless
W_net_l = np.zeros(shape=p_range.shape) * units.kJ / units.kg
for i, p_ratio in enumerate(p_range):
s_2 = s_1
p_2 = p_1*p_ratio
st_2 = State(substance, p=p_2, s=s_2)
h_2 = st_2.h.to('kJ/kg')
T_2 = st_2.T
p_3 = p_2
st_3 = State(substance, p=p_3, T=T_3)
h_3 = st_3.h.to('kJ/kg')
s_3 = st_3.s.to('kJ/(kg*K)')
s_4 = s_3
p_4 = p_1
st_4 = State(substance, p=p_4, s=s_4)
h_4 = st_4.h.to('kJ/kg')
T_4 = st_4.T
W_c = h_1 - h_2
W_t = h_3 - h_4
W_net = W_c + W_t
W_net_l[i] = W_net
Q_23 = h_3 - h_2
eta = W_net/Q_23
eta_l[i] = eta
fig, work_ax = plt.subplots()
work_ax.plot(p_range, W_net_l, label='Net work per unit mass flowing', color='C0')
eta_ax = work_ax.twinx()
eta_ax.plot(p_range, eta_l, label='Thermal efficiency', color='C1')
work_ax.set_xlabel('Pressure ratio $p_2/p_1$')
work_ax.set_ylabel('Net work per unit mass flowing (kJ/kg)')
eta_ax.set_ylabel('Thermal efficiency')
lines, labels = work_ax.get_legend_handles_labels()
lines2, labels2 = eta_ax.get_legend_handles_labels()
work_ax.legend(lines + lines2, labels + labels2, loc='best');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Definitions
Step2: Problem Statement
Step3: Summarizing the states,
Step4: Then, the net work is calculated by
Step5: <div class="alert alert-success">
Step6: <div class="alert alert-success">
|
7,030
|
<ASSISTANT_TASK:>
Python Code:
DIM = 100 # Number of bits in the bit strings (i.e. the "models").
NOISE_STDEV = 0.01 # Standard deviation of the simulated training noise.
EARLY_SIGNAL_NOISE = 0.005 # Standard deviation of the noise added to earlier
# observations.
REDUCTION_FACTOR = 100.0 # The factor by which the number of train steps is
# reduced for earlier observations.
class Model(object):
  """A class representing a model.

  Attributes:
    arch: the architecture as an int representing a bit-string of length `DIM`.
      As a result, the integers are required to be less than `2**DIM`.
    observed_accuracy: the simulated validation accuracy observed for the model
      during the search. This may be either the accuracy after training for
      the maximum number of steps or the accuracy after training for 1/100 the
      maximum number of steps.
    true_accuracy: the simulated validation accuracy after the maximum train
      steps.
  """
def __init__(self):
self.arch = None
self.observed_accuracy = None
self.true_accuracy = None
def get_final_accuracy(arch):
  """Simulates training for the maximum number of steps and then evaluating.

  Args:
    arch: the architecture as an int representing a bit-string.
  """
accuracy = float(_sum_bits(arch)) / float(DIM)
accuracy += random.gauss(mu=0.0, sigma=NOISE_STDEV)
accuracy = 0.0 if accuracy < 0.0 else accuracy
accuracy = 1.0 if accuracy > 1.0 else accuracy
return accuracy
def get_early_accuracy(final_accuracy):
  """Simulates training for 1/100 the maximum steps and then evaluating.

  Args:
    final_accuracy: the accuracy of the model if trained for the maximum number
      of steps.
  """
observed_accuracy = final_accuracy/REDUCTION_FACTOR + random.gauss(mu=0,
sigma=EARLY_SIGNAL_NOISE)
observed_accuracy = 0.0 if observed_accuracy < 0.0 else observed_accuracy
observed_accuracy = 1.0 if observed_accuracy > 1.0 else observed_accuracy
return observed_accuracy
def _sum_bits(arch):
  """Returns the number of 1s in the bit string.

  Args:
    arch: an int representing the bit string.
  """
total = 0
for _ in range(DIM):
total += arch & 1
arch = (arch >> 1)
return total
import random
def random_architecture():
  """Returns a random architecture (bit-string) represented as an int."""
return random.randint(0, 2**DIM - 1)
def mutate_arch(parent_arch):
  """Computes the architecture for a child of the given parent architecture.

  Args:
    parent_arch: an int representing the architecture (bit-string) of the
      parent.

  Returns:
    An int representing the architecture (bit-string) of the child.
  """
position = random.randint(0, DIM - 1) # Index of the bit to flip.
# Flip the bit at position `position` in `child_arch`.
child_arch = parent_arch ^ (1 << position)
return child_arch
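A quick property check of the bit-flip mutation above (the function is redefined locally so the snippet is self-contained): a child always differs from its parent in exactly one bit, since XOR-ing the two isolates the flipped position.

```python
import random

DIM = 100  # bit-string length, matching the notebook constant

def mutate_arch(parent_arch):
    # Flip one randomly chosen bit of the parent bit-string.
    position = random.randint(0, DIM - 1)
    return parent_arch ^ (1 << position)

random.seed(0)
parent = random.randint(0, 2**DIM - 1)
child = mutate_arch(parent)
# XOR isolates the flipped bit, so its popcount is exactly 1.
diff_bits = bin(parent ^ child).count('1')
print(diff_bits)
```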
import collections
import random
import copy
def plain_evolution(cycles, population_size, sample_size, early_observation):
  """Plain evolution.

  Args:
    cycles: the number of cycles the search is run for.
    population_size: the size of the population.
    sample_size: the size of the sample for both parent selection and killing.
    early_observation: boolean. Whether or not we are observing the models early
      by evaluating them for 1/100th the maximum number of train steps.
  """
population = collections.deque()
history = [] # Not used by the algorithm, only used to report results.
# Initialize the population with random models.
while len(population) < population_size:
model = Model()
model.arch = random_architecture()
model.true_accuracy = get_final_accuracy(model.arch)
# If we are observing early, get the early accuracy that corresponds to the
# true_accuracy. Else, we are training each model for the maximum number of
# steps and so the observed_accuracy is the true_accuracy.
if early_observation:
model.observed_accuracy = get_early_accuracy(model.true_accuracy)
else:
model.observed_accuracy = model.true_accuracy
population.append(model)
history.append(model)
# Carry out evolution in cycles. Each cycle produces a model and removes
# another.
while len(history) < cycles:
# Sample randomly chosen models from the current population.
sample = random.sample(population, sample_size)
# The parent is the best model in the samples, according to their observed
# accuracy.
parent = max(sample, key=lambda i: i.observed_accuracy)
# Create the child model and store it.
child = Model()
child.arch = mutate_arch(parent.arch)
child.true_accuracy = get_final_accuracy(child.arch)
# If we are observing early, get the early accuracy that corresponds to the
# true_accuracy. Else, we are training each model for the maximum number of
# steps and so the observed_accuracy is the true_accuracy.
if early_observation:
child.observed_accuracy = get_early_accuracy(child.true_accuracy)
else:
child.observed_accuracy = child.true_accuracy
# Choose model to kill.
sample_indexes = random.sample(range(len(population)), sample_size)
min_fitness = float("inf")
kill_index = population_size
for sample_index in sample_indexes:
if population[sample_index].observed_accuracy < min_fitness:
min_fitness = population[sample_index].observed_accuracy
kill_index = sample_index
# Replace victim with child.
population[kill_index] = child
history.append(child)
return history, population
def pdh_evolution(train_resources, population_size, sample_size):
  """Evolution with PDH.

  Args:
    train_resources: the resources allotted for training. An early observation
      costs 1, while a maximum train step observation costs 100.
    population_size: the size of the population.
    sample_size: the size of the sample for both parent selection and killing.
  """
population = collections.deque()
history = [] # Not used by the algorithm, only used to report results.
resources_used = 0 # The number of resource units used.
# Initialize the population with random models.
while len(population) < population_size:
model = Model()
model.arch = random_architecture()
model.true_accuracy = get_final_accuracy(model.arch)
# Always initialize with the early observation, since no hurdle has been
# established.
model.observed_accuracy = get_early_accuracy(model.true_accuracy)
population.append(model)
history.append(model)
# Since we are only performing an early observation, we are only consuming
# 1 resource unit.
resources_used += 1
# Carry out evolution in cycles. Each cycle produces a model and removes
# another.
hurdle = None
while resources_used < train_resources:
# Sample randomly chosen models from the current population.
sample = random.sample(population, sample_size)
# The parent is the best model in the sample, according to the observed
# accuracy.
parent = max(sample, key=lambda i: i.observed_accuracy)
# Create the child model and store it.
child = Model()
child.arch = mutate_arch(parent.arch)
child.true_accuracy = get_final_accuracy(child.arch)
# Once the hurdle has been established, a model is trained for the maximum
# amount of train steps if it overcomes the hurdle value. Otherwise, it
# only trains for the lesser amount of train steps.
if hurdle:
child.observed_accuracy = get_early_accuracy(child.true_accuracy)
# Performing the early observation costs 1 resource unit.
resources_used += 1
if child.observed_accuracy > hurdle:
child.observed_accuracy = child.true_accuracy
# Now that the model has trained longer, we consume additional
# resource units.
resources_used += REDUCTION_FACTOR - 1
else:
child.observed_accuracy = get_early_accuracy(child.true_accuracy)
# Since we are only performing an early observation, we are only consuming
# 1 resource unit.
resources_used += 1
# Choose model to kill.
sample_indexes = random.sample(range(len(population)), sample_size)
min_fitness = float("inf")
kill_index = population_size
for sample_index in sample_indexes:
if population[sample_index].observed_accuracy < min_fitness:
min_fitness = population[sample_index].observed_accuracy
kill_index = sample_index
# Replace victim with child.
population[kill_index] = child
history.append(child)
# Create a hurdle, splitting resources such that the number of models
# trained before and after the hurdle are approximately even. Here, our
# appoximation is assuming that every model after the hurdle trains for the
# maximum number of steps.
if not hurdle and resources_used >= int(train_resources/REDUCTION_FACTOR):
hurdle = 0
for model in population:
hurdle += model.observed_accuracy
hurdle /= len(population)
return history, population
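The hurdle established above is simply the mean early-observation accuracy of the current population; only children whose early accuracy clears it receive the full training budget. A tiny sketch of that rule, on made-up accuracy values:

```python
# Hurdle = mean observed (early) accuracy of the population; a child is
# trained to the maximum number of steps only if it clears the hurdle.
population_acc = [0.004, 0.006, 0.005, 0.007]  # illustrative values
hurdle = sum(population_acc) / len(population_acc)
passes = [a for a in population_acc if a > hurdle]
print(hurdle, passes)
```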
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import seaborn as sns
TOTAL_RESOURCES = 10000 # Total number of resource units.
POPULATION_SIZE = 100 # The size of the population.
SAMPLE_SIZE = 10 # The size the subpopulation used for selecting parents and
# kill targets.
def graph_values(values, title, xlim, ylim):
plt.figure()
sns.set_style('white')
xvalues = range(len(values))
yvalues = values
ax = plt.gca()
dot_size = int(TOTAL_RESOURCES / xlim)
ax.scatter(
xvalues, yvalues, marker='.', facecolor=(0.0, 0.0, 0.0),
edgecolor=(0.0, 0.0, 0.0), linewidth=1, s=dot_size)
ax.xaxis.set_major_locator(ticker.LinearLocator(numticks=2))
ax.xaxis.set_major_formatter(ticker.ScalarFormatter())
ax.yaxis.set_major_locator(ticker.LinearLocator(numticks=2))
ax.yaxis.set_major_formatter(ticker.ScalarFormatter())
ax.set_title(title, fontsize=20)
fig = plt.gcf()
fig.set_size_inches(8, 6)
fig.tight_layout()
ax.tick_params(
axis='x', which='both', bottom=True, top=False, labelbottom=True,
labeltop=False, labelsize=14, pad=10)
ax.tick_params(
axis='y', which='both', left=True, right=False, labelleft=True,
labelright=False, labelsize=14, pad=5)
plt.xlabel('Number of Models Evaluated', labelpad=-16, fontsize=16)
plt.ylabel('Accuracy', labelpad=-30, fontsize=16)
plt.xlim(0, xlim + .05)
plt.ylim(0, ylim + .05)
sns.despine()
def graph_history(history):
observed_accuracies = [i.observed_accuracy for i in history]
true_accuracies = [i.true_accuracy for i in history]
graph_values(observed_accuracies, "Observed Accuracy",
xlim=len(history), ylim=max(observed_accuracies))
graph_values(true_accuracies, "True Accuracy",
xlim=len(history), ylim=max(true_accuracies))
history, _ = plain_evolution(
cycles=TOTAL_RESOURCES, population_size=POPULATION_SIZE,
sample_size=SAMPLE_SIZE,
early_observation=True)
graph_history(history)
history, _ = plain_evolution(
cycles=TOTAL_RESOURCES/REDUCTION_FACTOR, population_size=POPULATION_SIZE,
sample_size=SAMPLE_SIZE, early_observation=False)
graph_history(history)
history, _ = pdh_evolution(train_resources=TOTAL_RESOURCES,
population_size=POPULATION_SIZE,
sample_size=SAMPLE_SIZE)
graph_history(history)
import numpy as np
num_trials = 500
print("===========================")
print("Mean Top Fitness Comparison")
print("===========================")
max_fitnesses = []
for _ in range(num_trials):
_, population = plain_evolution(
cycles=TOTAL_RESOURCES, population_size=POPULATION_SIZE,
sample_size=SAMPLE_SIZE,
early_observation=True)
# Assume all models in the final population are fully evaluated
max_fitness = max([indiv.true_accuracy for indiv in population])
max_fitnesses.append(max_fitness)
max_fitnesses = np.array(max_fitnesses)
print("Early Observation Plain Evolution: %.4s ± %.4s" %
(np.mean(max_fitnesses), np.std(max_fitnesses)))
max_fitnesses = []
for _ in range(num_trials):
_, population = plain_evolution(
cycles=TOTAL_RESOURCES/REDUCTION_FACTOR, population_size=POPULATION_SIZE,
sample_size=SAMPLE_SIZE,
early_observation=False)
# Assume all models in the final population are fully evaluated
max_fitness = max([indiv.true_accuracy for indiv in population])
max_fitnesses.append(max_fitness)
max_fitnesses = np.array(max_fitnesses)
print("Max Step Observation Plain Evolution: %.4s ± %.4s" %
(np.mean(max_fitnesses), np.std(max_fitnesses)))
max_fitnesses = []
for _ in range(num_trials):
_, population = pdh_evolution(train_resources=TOTAL_RESOURCES,
population_size=POPULATION_SIZE,
sample_size=SAMPLE_SIZE)
# Assume all models in the final population are fully evaluated
max_fitness = max([indiv.true_accuracy for indiv in population])
max_fitnesses.append(max_fitness)
max_fitnesses = np.array(max_fitnesses)
print("Progressive Dynamic Hurdles: %.4s ± %.4s" %
(np.mean(max_fitnesses), np.std(max_fitnesses)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step6: Copyright 2019 Google LLC
Step9: Search Algorithms
Step10: Experiments
Step11: Plain Evolution
Step12: Plain Evolution
Step13: Progressive Dynamic Hurdles
Step14: Mean Fitness Comparison
|
7,031
|
<ASSISTANT_TASK:>
Python Code:
from pyechonest import config, artist, song
import pandas as pd
config.ECHO_NEST_API_KEY = 'XXXXXXXX' #retrieved from https://developer.echonest.com/account/profile
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
songs = song.search(title='Elastic Heart',artist='Sia',buckets='id:spotify',limit=True,results=1)
elasticHeart = songs[0]
elasticHeartFeatures = pd.DataFrame.from_dict(elasticHeart.audio_summary,orient='index')
pd.DataFrame.from_dict([elasticHeart.audio_summary])
# Query each artist's hottest 20 tracks. Note the comprehension variable is
# named `s` so it does not shadow the imported `song` module.
siaHottest = song.search(artist='Sia', sort='song_hotttnesss-desc', buckets='id:spotify', limit=True, results=20)
floHottest = song.search(artist='Flo Rida', sort='song_hotttnesss-desc', buckets='id:spotify', limit=True, results=20)
songFeatures = [s.audio_summary for s in siaHottest]
fsongFeatures = [s.audio_summary for s in floHottest]
S = pd.DataFrame.from_dict(songFeatures)
S.index = [s.title for s in siaHottest]
S['hotness'] = [s.song_hotttnesss for s in siaHottest]
F = pd.DataFrame.from_dict(fsongFeatures)
F.index = [s.title for s in floHottest]
F['hotness'] = [s.song_hotttnesss for s in floHottest]
u,idx = np.unique(S.index,return_index=True)
S = S.ix[idx,:]
u,idx = np.unique(F.index,return_index=True)
F = F.ix[idx,:]
ax = pd.DataFrame({'Flo Rida':F.mean(), 'Sia': S.mean()}).plot(kind='bar',figsize=(18,6),rot=0, color = ['lightblue','salmon']);
ax.set_title("Average Song Features for Artist's Hottest 20 tracks",fontsize=14);
ax.tick_params(axis='x', labelsize=12)
Elastic_Heart = siaHottest[5].get_tracks('spotify')
Elastic_Heart[1]
%%html
<iframe src="https://embed.spotify.com/?uri=spotify:track:3yFdQkEQNzDwpPB1iIFtaM" width="300" height="380" frameborder="0" allowtransparency="true"></iframe>
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Query a single song, get its audio features and make a dataframe
Step2: Grab and compare the hottest tracks, available in Spotify, for 2 artists on a number of audio features
|
7,032
|
<ASSISTANT_TASK:>
Python Code:
from tessfpe.dhu.fpe import FPE
from tessfpe.dhu.unit_tests import check_house_keeping_voltages
fpe1 = FPE(1, debug=False, preload=True, FPE_Wrapper_version='6.1.1')
print fpe1.version
fpe1.cmd_start_frames()
fpe1.cmd_stop_frames()
if check_house_keeping_voltages(fpe1):
print "Wrapper load complete. Interface voltages OK."
def set_fpe_defaults(fpe):
"Set the FPE to the default operating parameters and return a list of the default values"
defaults = {}
for k in range(len(fpe.ops.address)):
if fpe.ops.address[k] is None:
continue
fpe.ops.address[k].value = fpe.ops.address[k].default
defaults[fpe.ops.address[k].name] = fpe.ops.address[k].default
return defaults
set_fpe_defaults(fpe1)
fpe1.cmd_start_frames()
import subprocess
subprocess.check_output(["pwd"])
def capture_frames(self, n):
    """Capture frames."""
import subprocess
import os.path
self.cmd_start_frames()
proc = subprocess.Popen(
[os.path.join(self._dir, "..", "fits_capture", "tess_obssim", "tess_obssim"), '-n', str(n)],
shell=False)
proc.communicate()
self.cmd_stop_frames()
from numpy import var, sqrt
samples=5
from tessfpe.data.housekeeping_channels import housekeeping_channels  # needed below for signal names and units
fpe1.cmd_cam_status()
# We make sample_data a dictionary and each value will be a set of HK
# data, with key = sample_name.
sample_data = {}
# For later:
signal_names = []
signal_values = []
signal_data = {}
variance_values = {}
for i in range(samples):
    # Get a new set of HK values
    house_keeping_values = fpe1.house_keeping["analogue"]
    data_values = house_keeping_values.values()
    # Add the new HK values to the sample_data dictionary:
    sample_number = "sample_" + str(i)
    sample_data[sample_number] = data_values
# Get the signal names for use later
signal_names = housekeeping_channels.keys()
# Get list of units for later
units = {}
for name in housekeeping_channels:
units[name] = housekeeping_channels[name]['unit']
# Assign the set of all HK values of the same signal (e.g. substrate_1)
# to the dictionary 'signal_data'.
for k in range(len(signal_names)):
# Build the list 'signal_values' for this signal:
for i in range(samples):
sample_number = "sample_" + str(i)
signal_values.append(sample_data[sample_number][k])
# Add signal_values to the signal_data dictionary:
signal_data[signal_names[k]] = signal_values
signal_values = []
# Now get the square root of the variance of each of the 'signal_values' in the
# signal_data dictionary and put the result in the 'variance_values'
# dictionary.
for name in signal_data:
variance_values[name] = sqrt(var(signal_data[name]))
#print units[name]["unit"]
#print signal_data
#print units[name]
print '{0:26} {1:.3} {2}'.format(name, variance_values[name], units[name])
# Results will be displayed below, in engineering units (root-variance).
# Watch ObsSim LEDs for activity.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set all the operating parameters to the default values
Step3: Start the frames
Step6: Run the variance test
|
7,033
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
import os.path as op
import numpy as np
from scipy import stats as stats
import mne
from mne import spatial_src_connectivity
from mne.stats import spatio_temporal_cluster_test, summarize_clusters_stc
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
stc_fname = data_path + '/MEG/sample/sample_audvis-meg-lh.stc'
subjects_dir = data_path + '/subjects'
src_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif'
# Load stc to in common cortical space (fsaverage)
stc = mne.read_source_estimate(stc_fname)
stc.resample(50, npad='auto')
# Read the source space we are morphing to
src = mne.read_source_spaces(src_fname)
fsave_vertices = [s['vertno'] for s in src]
morph = mne.compute_source_morph(stc, 'sample', 'fsaverage',
spacing=fsave_vertices, smooth=20,
subjects_dir=subjects_dir)
stc = morph.apply(stc)
n_vertices_fsave, n_times = stc.data.shape
tstep = stc.tstep
n_subjects1, n_subjects2 = 7, 9
print('Simulating data for %d and %d subjects.' % (n_subjects1, n_subjects2))
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X1 = np.random.randn(n_vertices_fsave, n_times, n_subjects1) * 10
X2 = np.random.randn(n_vertices_fsave, n_times, n_subjects2) * 10
X1[:, :, :] += stc.data[:, :, np.newaxis]
# make the activity bigger for the second set of subjects
X2[:, :, :] += 3 * stc.data[:, :, np.newaxis]
# We want to compare the overall activity levels for each subject
X1 = np.abs(X1) # only magnitude
X2 = np.abs(X2) # only magnitude
print('Computing connectivity.')
connectivity = spatial_src_connectivity(src)
# Note that X needs to be a list of multi-dimensional array of shape
# samples (subjects_k) x time x space, so we permute dimensions
X1 = np.transpose(X1, [2, 1, 0])
X2 = np.transpose(X2, [2, 1, 0])
X = [X1, X2]
# Now let's actually do the clustering. This can take a long time...
# Here we set the threshold quite high to reduce computation.
p_threshold = 0.0001
f_threshold = stats.distributions.f.ppf(1. - p_threshold / 2.,
n_subjects1 - 1, n_subjects2 - 1)
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu =\
spatio_temporal_cluster_test(X, connectivity=connectivity, n_jobs=1,
threshold=f_threshold, buffer_size=None)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
print('Visualizing clusters.')
# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
fsave_vertices = [np.arange(10242), np.arange(10242)]
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration
subjects_dir = op.join(data_path, 'subjects')
# blue blobs are for condition A != condition B
brain = stc_all_cluster_vis.plot('fsaverage', hemi='both',
views='lateral', subjects_dir=subjects_dir,
time_label='Duration significant (ms)',
clim=dict(kind='value', lims=[0, 1, 40]))
brain.save_image('clusters.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Compute statistic
Step3: Visualize the clusters
|
7,034
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License")
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
!pip install --quiet -U apache-beam
import apache_beam as beam
with beam.Pipeline() as pipeline:
icons = (
pipeline
| 'Garden plants' >> beam.Create([
('🍓', 'Strawberry'),
('🥕', 'Carrot'),
('🍆', 'Eggplant'),
('🍅', 'Tomato'),
('🥔', 'Potato'),
])
| 'Keys' >> beam.Keys()
| beam.Map(print)
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Keys
Step2: Example
|
7,035
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd #The data package
import sys #The code below won't work for any versions before Python 3. This just ensures that (allegedly).
assert sys.version_info >= (3,5)
import requests
import io
import zipfile #Three packages we'll need to unzip the data
The next two lines of code convert the URL into a format that works
with the "zipfile" package.
url2013 = 'http://www.federalreserve.gov/econresdata/scf/files/scfp2013s.zip'
url2013_requested = requests.get(url2013)
Next, zipfile downloads, unzips, and saves the file to your computer. 'url2013_unzipped'
contains the file path for the file.
zipfile2013 = zipfile.ZipFile(io.BytesIO(url2013_requested.content))
url2013_unzipped = zipfile2013.extract(zipfile2013.namelist()[0])
df2013 = pd.read_stata(url2013_unzipped)
df2013.head(10) #Returns the first 10 rows of the dataframe
def unzip_survey_file(year = '2013'):
import requests, io, zipfile
import pandas as pd
if int(year) <1989:
url = 'http://www.federalreserve.gov/econresdata/scf/files/'+year+'_scf'+year[2:]+'bs.zip'
else:
url = 'http://www.federalreserve.gov/econresdata/scf/files/scfp'+year+'s.zip'
url = requests.get(url)
url_unzipped = zipfile.ZipFile(io.BytesIO(url.content))
return url_unzipped.extract(url_unzipped.namelist()[0])
df1983 = pd.read_stata(unzip_survey_file(year = '1983'))
df1992 = pd.read_stata(unzip_survey_file(year = '1992'))
df2001 = pd.read_stata(unzip_survey_file(year = '2001'))
There is no Summary Extract dataset for 1983, so we'll rename the variable names in the 1983 Full
dataset so that they correspond to the variable names in the other survey years.
Also, 161 out of the 4262 total households covered in the 1983 survey actually reported having
negative income. This isn't the case for the other survey years we are considering, and it
complicates our analysis a bit below. Because of this, we drop any obs. that report negative
incomes before proceeding. This has a near-zero impact on any of our results, since all but 2
of these observations receive a weight of zero. The two non-zero weight observations reporting
negative incomes account for only <0.05% of the total population, so not much is lost by
excluding them.
Going forward: it might be worthwhile to figure out why there are instances of negative incomes
in the 1983 survey yet none for the other years.
df1983 = df1983.rename(columns = {'b3201':'income', 'b3324':'networth', 'b3015' : 'wgt'})
df1983 = df1983[df1983['income']>=0]
def weighted_percentiles(data, variable, weights, percentiles = [],
dollar_amt = False, subgroup = None, limits = []):
data specifies what dataframe we're working with
variable specifies the variable name (e.g. income, networth, etc.) in the dataframe
percentiles = [] indicates what percentile(s) to return (e.g. 90th percentile = .90)
weights corresponds to the weighting variable in the dataframe
dollar_amt = False returns the percentage of total income earned by that percentile
group (i.e. bottom 80% of earners earned XX% of total)
dollar_amt = True returns the $ amount earned by that percentile (i.e. 90th percentile
earned $X)
subgroup = '' isolates the analysis to a particular subgroup in the dataset. For example
subgroup = 'age' would return the income distribution of the age group
determined by the limits argument
    limits = [] corresponds to the subgroup argument. For example, if you were interested in
looking at the distribution of income across heads of household aged 18-24,
then you would input "subgroup = 'age', limits = [18,24]"
import numpy
a = list()
data[variable+weights] = data[variable]*data[weights]
if subgroup is None:
tt = data
else:
tt = data[data[subgroup].astype(int).isin(range(limits[0],limits[1]+1))]
values, sample_weight = tt[variable], tt[weights]
for index in percentiles:
values = numpy.array(values)
index = numpy.array(index)
sample_weight = numpy.array(sample_weight)
sorter = numpy.argsort(values)
values = values[sorter]
sample_weight = sample_weight[sorter]
weighted_percentiles = numpy.cumsum(sample_weight) - 0.5 * sample_weight
weighted_percentiles /= numpy.sum(sample_weight)
a.append(numpy.interp(index, weighted_percentiles, values))
if dollar_amt is False:
return[tt.loc[tt[variable]<=a[x],
variable+weights].sum()/tt[variable+weights].sum() for x in range(len(percentiles))]
else:
return a
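The weighted-percentile machinery above boils down to interpolating on midpoint cumulative weights. A standalone sketch on toy data (the names below are illustrative, not part of the function above); with equal weights it should reduce to the ordinary median:

```python
# Standalone sketch of the interpolation at the core of weighted_percentiles.
import numpy as np

values = np.array([1, 2, 3, 4, 5], dtype=float)
weights = np.ones(5)  # equal weights

sorter = np.argsort(values)
values, weights = values[sorter], weights[sorter]

# Midpoint cumulative weights, normalized to [0, 1]
wp = (np.cumsum(weights) - 0.5 * weights) / np.sum(weights)

median = np.interp(0.5, wp, values)
print(median)  # with equal weights this is just the ordinary median: 3.0
```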
%pylab inline
import matplotlib.pyplot as plt
def figureprefs(data, variable = 'income', labels = False, legendlabels = []):
percentiles = [i * 0.05 for i in range(20)]+[0.99, 1.00]
fig, ax = plt.subplots(figsize=(10,8));
ax.set_xticks([i*0.1 for i in range(11)]); #Sets the tick marks
ax.set_yticks([i*0.1 for i in range(11)]);
vals = ax.get_yticks() #Labels the tick marks
ax.set_yticklabels(['{:3.0f}%'.format(x*100) for x in vals]);
ax.set_xticklabels(['{:3.0f}%'.format(x*100) for x in vals]);
ax.set_title('Lorenz Curve: United States, 1983 vs. 2013', #Axes titles
fontsize=18, loc='center');
ax.set_ylabel('Cumulative Percent of Total Income', fontsize = 12);
ax.set_xlabel('Percent of Familes Ordered by Incomes', fontsize = 12);
if type(data) == list:
values = [weighted_percentiles(data[x], variable,
'wgt', dollar_amt = False, percentiles = percentiles) for x in range(len(data))]
for index in range(len(data)):
plt.plot(percentiles,values[index],
linewidth=2.0, marker = 's',clip_on=False,label=legendlabels[index]);
for num in [10, 19, 20]:
ax.annotate('{:3.1f}%'.format(values[index][num]*100),
xy=(percentiles[num], values[index][num]),
ha = 'right', va = 'center', fontsize = 12);
else:
values = weighted_percentiles(data, variable,
'wgt', dollar_amt = False, percentiles = percentiles)
plt.plot(percentiles,values,
linewidth=2.0, marker = 's',clip_on=False,label=legendlabels);
plt.plot(percentiles,percentiles, linestyle = '--', color='k',
label='Perfect Equality');
legend(loc = 2)
years_graph = [df2013, df1983]
labels = ['2013', '1983']
figureprefs(years_graph, variable = 'income', legendlabels = labels);
Note: All Summary Extract data for survey years 1989 and later have been adjusted for inflation
(2013=100). This isn't the case for survey data prior to 1989, so we'll have to adjust the 1983
data manually.
from pandas.io import wb # World Bank api
import warnings
warnings.simplefilter(action = "ignore", category = FutureWarning) #Ignore these two lines
cpi = wb.download(indicator='FP.CPI.TOTL', country='USA', start=1983, end=2013) #CPI
The World Bank CPI series is indexed so that 2010 = 100. We'll have to re-index it so that 2013 = 100
to be consistent with the other data.
cpi1983 = (100/cpi['FP.CPI.TOTL'][2013-2013])*cpi['FP.CPI.TOTL'][2013-1983]/100
df1983['realincome'] = df1983['income']/cpi1983
percentiles = [i * 0.01 for i in range(1,100)]+[0.99]+[0.999]
incomes = pd.DataFrame({'2001': weighted_percentiles(df2001, 'income', 'wgt', dollar_amt = True, percentiles =percentiles),
'2013': weighted_percentiles(df2013, 'income', 'wgt', dollar_amt = True, percentiles = percentiles),
'1992': weighted_percentiles(df1992, 'income', 'wgt', dollar_amt = True, percentiles = percentiles),
'1983': weighted_percentiles(df1983, 'realincome', "wgt", dollar_amt = True, percentiles = percentiles)})
fig, ax = plt.subplots(figsize=(10,6))
plt.plot(percentiles,(incomes['2013']-incomes['1983'])/incomes['1983']/(2013-1983+1),
linewidth = 2.0, label = '1983-2013');
yvals = ax.get_yticks()
ax.set_xticks([i * 0.1 for i in range(11)])
xvals = ax.get_xticks()
ax.set_yticklabels(['{:3.2f}%'.format(x*100) for x in yvals]);
ax.set_xticklabels(['{:3.0f}'.format(x*100) for x in xvals]);
ax.set_title('Annual real income growth by income percentile', #Axes titles
fontsize=18, loc='center');
ax.axhline(y=0,xmin = 0, xmax = 1, linestyle = '--', color = 'k');
ax.set_ylabel('Average annual growth rate of real income');
ax.set_xlabel('Income percentile');
legend(loc=2);
fig, ax = plt.subplots(figsize=(10,6))
plt.plot(percentiles,(incomes['2001']-incomes['1992'])/incomes['1992']/(2001-1992+1),
linewidth = 2.0, label = '1992-2001');
plt.plot(percentiles,(incomes['2013']-incomes['1983'])/incomes['1983']/(2013-1983+1),
linewidth = 2.0, label = '1983-2013');
yvals = ax.get_yticks()
ax.set_xticks([i * 0.1 for i in range(11)])
xvals = ax.get_xticks()
ax.set_yticklabels(['{:3.2f}%'.format(x*100) for x in yvals]);
ax.set_xticklabels(['{:3.0f}'.format(x*100) for x in xvals]);
ax.set_title('Annual real income growth by income percentile', #Axes titles
fontsize=18, loc='center');
ax.axhline(y=0,xmin = 0, xmax = 1, linestyle = '--', color = 'k');
ax.set_ylabel('Average annual growth rate of real income');
ax.set_xlabel('Income percentile');
legend(loc=2);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Survey Data
Step4: We'll also be looking at prior-year surveys, so I'll condense the unzipping processes above into a function out of laziness
Step6: Distribution of Income in the United States
Step9: Is the distribution of income in the U.S. becoming more or less equal?
Step10: The figure above is pretty revealing, but the picture might be even more stark for the period 1992-2001, which corresponds roughly to President Clinton's two terms in office (or as close as we can get with the SCF, which is conducted every three years).
|
7,036
|
<ASSISTANT_TASK:>
Python Code:
import json
import requests
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import datetime
import time
import calendar
import pytz
#from matplotlib.dates import date2num, num2date
utc_tz = pytz.utc
def epochsec_to_dt(epochsec):
Return the datetime object for epoch seconds epochsec
dtnaive_dt = datetime.datetime.utcfromtimestamp(epochsec)
dtutc_dt = dtnaive_dt.replace(tzinfo=pytz.utc)
return dtutc_dt
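A quick sanity check of the epoch-seconds conversion. This sketch swaps pytz.utc for the stdlib datetime.timezone.utc so it runs without pytz; the two are interchangeable for attaching a UTC tzinfo here:

```python
import datetime

def epochsec_to_dt_stdlib(epochsec):
    # Mirrors epochsec_to_dt above, with a stdlib UTC tzinfo instead of pytz
    dtnaive = datetime.datetime.utcfromtimestamp(epochsec)
    return dtnaive.replace(tzinfo=datetime.timezone.utc)

epoch_start = epochsec_to_dt_stdlib(0)
print(epoch_start.isoformat())  # 1970-01-01T00:00:00+00:00, the Unix epoch
```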
def get_measurement_byvarid(metaresult, var_id):
return [e for e in metaresult['measurements'] if e['var_id'] == var_id][0]
vz_gai_url = "http://www.wikiwatershed-vs.org/services/get_asset_info.php"
meta_r = requests.get(vz_gai_url, params={'asset_type':'siso', 'opt':'meta'})
meta = meta_r.json()
meta.keys(), meta['success']
type(meta['result']), len(meta['result'])
# siso_id is the unique identifier (string type) for the station
siso_id_lst = [e['siso_id'] for e in meta['result']]
# Examine the response for the first station (index 0) in the returned list
meta['result'][0]['siso_id']
meta['result'][0]
stations_rec = []
for sta in meta['result']:
sta_rec = {key:sta[key] for key in ['siso_id', 'name', 'lat', 'lon',
'platform_type', 'provider']}
stations_rec.append(sta_rec)
stations_df = pd.DataFrame.from_records(stations_rec)
stations_df.set_index('siso_id', inplace=True, verify_integrity=True)
stations_df.index.name = 'siso_id'
print(len(stations_df))
stations_df.head(10)
stations_df.platform_type.value_counts()
stations_df.provider.value_counts()
# USGS Schuylkill River at Philadelphia
siso_id = 'USGS_01474500'
# (asset_id, a more generic descriptor for the unique id of any asset)
meta_r = requests.get(vz_gai_url, params={'asset_type':'siso', 'opt':'meta',
'asset_id':siso_id})
# use [0] to pull out the dict from the single-element list
metaresult = meta_r.json()['result'][0] # ideally, should first test for success
# var_id is the unique identifier for a "measurement" (or variable)
[(d['var_id'], d['depth']) for d in metaresult['measurements']]
metaresult['name']
data_r = requests.get(vz_gai_url, params={'asset_type':'siso', 'opt':'data', 'units_mode': 'v1',
'asset_id':siso_id, 'var_id':'all'})
data = data_r.json()
data['success'], len(data['result']), data['result'][0].keys(), len(data['result'][0]['data'])
# Mapping of var_id string to 'result' list element index
var_ids = {e['var_id']:i for i,e in enumerate(data['result'])}
var_ids
var_id = 'H1_Discharge'
get_measurement_byvarid(metaresult, var_id)
data['result'][var_ids[var_id]]['data'][-10:]
# Pull out data time series for one variable based on var_id
# returns a list of dicts
data_lst = data['result'][var_ids[var_id]]['data']
data_df = pd.DataFrame.from_records(data_lst)
data_df.head()
data_df['dtutc'] = data_df.time.map(lambda es: epochsec_to_dt(es))
data_df.set_index('dtutc', inplace=True, verify_integrity=True)
data_df.index.name = 'dtutc'
data_df = data_df.rename(columns={'value':var_id})
data_df.info()
data_df.head()
data_df.describe()
var_info = get_measurement_byvarid(metaresult, var_id)
title = "%s (%s) at %s" % (var_info['name'], var_id, metaresult['name'])
data_df[var_id].plot(title=title, figsize=[11,5])
plt.ylabel(var_info['units']);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: DRB Vizer json services
Step2: Service end point
Step3: Meta info (metadata) requests
Step4: Examine all stations (siso assets) by first importing into a Pandas Dataframe
Step5: Summaries (station counts) by platform_type and provider
Step6: Request and examine one station
Step7: Meta info request
Step8: Data request
Step9: Create dtutc column with parsed datetime. Also, it's safer to rename the "value" column to something unlikely to conflict with pandas method names.
|
7,037
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Christian Brodbeck <christianbrodbeck@nyu.edu>
# Tal Linzen <linzen@nyu.edu>
# Denis A. Engeman <denis.engemann@gmail.com>
# Mikołaj Magnuski <mmagnuski@swps.edu.pl>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from mne.datasets import sample
from mne import read_evokeds
print(__doc__)
path = sample.data_path()
fname = path + '/MEG/sample/sample_audvis-ave.fif'
# load evoked corresponding to a specific condition
# from the fif file and subtract baseline
condition = 'Left Auditory'
evoked = read_evokeds(fname, condition=condition, baseline=(None, 0))
times = np.arange(0.05, 0.151, 0.02)
evoked.plot_topomap(times, ch_type='mag', time_unit='s')
evoked.plot_topomap(ch_type='mag', time_unit='s')
evoked.plot_topomap(times, ch_type='mag', average=0.05, time_unit='s')
evoked.plot_topomap(times, ch_type='grad', time_unit='s')
evoked.plot_topomap(times, ch_type='mag', cmap='Spectral_r', res=32,
outlines='skirt', contours=4, time_unit='s')
extrapolations = ['box', 'head', 'local']
fig, axes = plt.subplots(figsize=(7.5, 2.5), ncols=3)
for ax, extr in zip(axes, extrapolations):
evoked.plot_topomap(0.1, ch_type='mag', size=2, extrapolate=extr, axes=ax,
show=False, colorbar=False)
ax.set_title(extr, fontsize=14)
evoked.plot_topomap(0.1, ch_type='mag', show_names=True, colorbar=False,
size=6, res=128, title='Auditory response',
time_unit='s')
plt.subplots_adjust(left=0.01, right=0.99, bottom=0.01, top=0.88)
evoked.animate_topomap(ch_type='mag', times=times, frame_rate=10,
time_unit='s')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic plot_topomap options
Step2: If times is set to None at most 10 regularly spaced topographies will be
Step3: Instead of showing topographies at specific time points we can compute
Step4: We can plot gradiometer data (plots the RMS for each pair of gradiometers)
Step5: Additional plot_topomap options
Step6: If you look at the edges of the head circle of a single topomap you'll see
Step7: More advanced usage
Step8: Animating the topomap
|
7,038
|
<ASSISTANT_TASK:>
Python Code:
# Load libraries
from sklearn.linear_model import LogisticRegression
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
# Load data
iris = datasets.load_iris()
X = iris.data
y = iris.target
# Standarize features
scaler = StandardScaler()
X_std = scaler.fit_transform(X)
# Create logistic regression object using sag solver
clf = LogisticRegression(random_state=0, solver='sag')
# Train model
model = clf.fit(X_std, y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Iris Flower Data
Step2: Standardize Features
Step3: Train Logistic Regression Using SAG solver
|
7,039
|
<ASSISTANT_TASK:>
Python Code:
from aesop import DirectedMutagenesis, plotScan_interactive, plotNetwork_interactive
# raw strings so the backslashes in Windows-style paths aren't read as escapes (\t, \a, ...)
path_apbs = r'path\to\executable\apbs'
path_coulomb = r'path\to\executable\coulomb'
path_pdb2pqr = r'path\to\executable\pdb2pqr'
jobname = 'directedscan'
pdbfile = 'barnase_barstar.pdb'
selstr = ['chain A', 'chain B']
target = ['resnum 27', 'resnum 73', 'resnum 83', 'resnum 87', # mutations in chain A
'resnum 145', 'resnum 149', 'resnum 164', 'resnum 186'] # mutations in chain B
mutation = ['ASP', 'LYS', 'GLU', 'GLU', # mutations in chain A
'ARG', 'ARG', 'ASP', 'LYS'] # mutations in chain B
mutscan = DirectedMutagenesis(pdb=pdbfile,
pdb2pqr_exe=path_pdb2pqr,
apbs_exe=path_apbs,
coulomb_exe=path_coulomb,
jobname=jobname,
selstr=selstr,
target=target,
mutation=mutation,
minim=True)
mutscan.run()
plotScan_interactive(mutscan,display_output='notebook')
#If you are not using a notebook to run your code then use the code below instead:
#plotScan_interactive(mutscan)
plotNetwork_interactive(mutscan,display_output='notebook')
#If you are not using a notebook to run your code then use the code below instead:
#plotNetwork_interactive(mutscan)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Once DirectedMutagenesis is instantiated and finished running, we can plot the results. The plotScan_interactive function by default, outputs the results in an html file and opens it up in your browser. However, if you are using a notebook to view it, you can display the plot inline by passing the argument display_output='notebook' to the function. Here we display it in the notebook so that it's easier to view alongside the code.
Step2: The plotScan_interactive function displays a bar plot similar to plotScan but now hovering over specific bars displays the corresponding asssociation/solvation free energy values. Additionally, clicking and dragging in the plot allows you to zoom in a subset of values. The plotly modebar in the top right has additional options such as zoom, autoscale and saving as a static image.
|
7,040
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from astropy.table import QTable
from astropy import units as u
from astropy import constants as const
from astropy.units import imperial
imperial.enable()
u.m # The unit of meters
u.s # The unit of seconds
u.m / u.s # combine them into a composite unit
u.m.find_equivalent_units()
time_1 = 0.25 * u.s
time_1
position = np.arange(1,6,1) * u.m # np.arange(x,y,z) - create an array of numbers between x and y in steps z
position
velocity = position / time_1
velocity
velocity.value
velocity.unit
distance = 100 * u.km
time_2 = distance / velocity
time_2
time_2.decompose()
time_2.to(u.h)
velocity.to(u.cm / u.h)
velocity.to(imperial.mi / u.h)
velocity.si # quick conversion to SI units
velocity.cgs # quick conversion to CGS units
density = 3000 * (u.kg / u.m**3) # From last week's homework
density.to(u.kg / u.km**3)
u.s.find_equivalent_units()
time_3 = 29 * u.day + 7 * u.h + 56 * u.min + 12 * u.s
time_4 = 2 * u.fortnight + 1.33 * u.day
time_3 - time_4
(time_3 - time_4).to(u.min)
ringo = u.def_unit('Ringos', 3.712 * imperial.yd)
position.to(ringo)
velocity.to(ringo / u.s)
dimless_y = (1 * u.m) / (1 * u.km)
dimless_y
dimless_y.unit
dimless_y.decompose() # returns the scale of the dimentionless quanity
np.log(2 * u.m)
np.log((2 * u.km) / (1 * u.m))
np.log10((2 * u.km) / (1 * u.m))
np.sin(2 * u.m)
np.sin(2 * u.deg)
np.sin(90) # not really what I expected
np.sin(90 * u.deg) # better
planet_table = QTable.read('Planets.csv', format='ascii.csv')
planet_table[0:3]
planet_table['a'].unit = u.AU
planet_table[0:3]
planet_table['a'].to(u.km)
planet_table['a'].to(u.km)[2]
const.G
const.M_sun
distance = planet_table['a'][0:3] # Mercury, Venus, Earth
orbit_v = np.sqrt(const.G * const.M_sun / distance)
orbit_v
orbit_v.decompose()
orbit_v.to(u.km/u.s)
orbit_v.to(ringo/u.ms)
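The astropy result above can be cross-checked with plain floats: the circular-orbit speed is v = sqrt(G·M_sun/a), and for Earth it should come out near the familiar 29.8 km/s. The constants below are hard-coded approximations, not taken from astropy:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2, gravitational constant
M_sun = 1.989e30     # kg
a_earth = 1.496e11   # m (1 AU)

v = math.sqrt(G * M_sun / a_earth)  # circular orbital speed, m/s
print(v / 1000)  # ~29.8 km/s, Earth's mean orbital speed
```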
def find_diameter(H,A):
result = (1329 / np.sqrt(A)) * (10 ** (-0.2 * H))
return result * u.km
H = 3.34
A = 0.09
asteroid_diameter = find_diameter(H,A)
asteroid_diameter
def find_mass(D):
p = 3000 * (u.kg / u.m**3)
result = p * (1/6) * np.pi * D**3
return result
asteroid_mass = find_mass(asteroid_diameter)
asteroid_mass
asteroid_mass.decompose()
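Both formulas above admit quick plain-float sanity checks (no astropy): the diameter relation D = (1329/√A)·10^(−H/5) must give exactly 1329 km for H = 0, A = 1, and the sphere mass ρ·πD³/6 for a 1 m sphere at 3000 kg/m³ is 500π ≈ 1570.8 kg:

```python
import math

# H = 0, A = 1 makes both the albedo and magnitude factors equal 1
d_ref = (1329 / math.sqrt(1.0)) * 10 ** (-0.2 * 0.0)  # km

# mass of a 1 m diameter sphere at the assumed density of 3000 kg/m^3
m_ref = 3000 * (1 / 6) * math.pi * 1.0 ** 3  # kg

print(d_ref, m_ref)  # 1329.0 and ~1570.8
```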
moon_mass = u.def_unit(r'Lunar\ Masses', 7.34767309e22 * u.kg)
asteroid_mass.to(moon_mass)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: For any unit you can find all of the built-in units that are equivalent
Step3: The units package is much more useful when you combine it with scalars or arrays to create Quantities
Step4: You can access the number and unit part of the Quantity separately
Step5: At example problem
Step6: Notice that the units are a bit strange. We can simplify this using .decompose()
Step7: Unit conversion is really easy!
Step8: Units make math with Time easy!
Step9: You can define your own units
Step10: Dimentionless Units
Step11: Some math functions only make sense with dimentionless quanities
Step12: Or they expect the correct type of unit!
Step13: Using units can save you headaches.
Step14: Tables and units
Step15: Constants
Step16: An Example
Step17: Functions and Units - (Last week's homework)
|
7,041
|
<ASSISTANT_TASK:>
Python Code:
# Author: Jussi Nurminen (jnu@iki.fi)
#
# License: BSD (3-clause)
import mne
import os
from mne.datasets import multimodal
fname_raw = os.path.join(multimodal.data_path(), 'multimodal_raw.fif')
print(__doc__)
raw = mne.io.read_raw_fif(fname_raw)
print(raw.acqparser)
cond = raw.acqparser.get_condition(raw, 'Auditory right')
epochs = mne.Epochs(raw, **cond)
epochs.average().plot_topo(background_color='w')
evokeds = []
for cat in raw.acqparser.categories:
cond = raw.acqparser.get_condition(raw, cat)
# copy (supported) rejection parameters from DACQ settings
epochs = mne.Epochs(raw, reject=raw.acqparser.reject,
flat=raw.acqparser.flat, **cond)
evoked = epochs.average()
evoked.comment = cat['comment']
evokeds.append(evoked)
# save all averages to an evoked fiff file
# fname_out = 'multimodal-ave.fif'
# mne.write_evokeds(fname_out, evokeds)
newcat = dict()
newcat['comment'] = 'Visual lower left, longer epochs'
newcat['event'] = 3 # reference event
newcat['start'] = -.2 # epoch start rel. to ref. event (in seconds)
newcat['end'] = .7 # epoch end
newcat['reqevent'] = 0 # additional required event; 0 if none
newcat['reqwithin'] = .5 # ...required within .5 sec (before or after)
newcat['reqwhen'] = 2 # ...required before (1) or after (2) ref. event
newcat['index'] = 9 # can be set freely
cond = raw.acqparser.get_condition(raw, newcat)
epochs = mne.Epochs(raw, reject=raw.acqparser.reject,
flat=raw.acqparser.flat, **cond)
epochs.average().plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read raw file
Step2: Check DACQ defined averaging categories and other info
Step3: Extract epochs corresponding to a category
Step4: Get epochs from all conditions, average
Step5: Make a new averaging category
|
7,042
|
<ASSISTANT_TASK:>
Python Code:
X = np.array([[11, 12], [21, 22], [31, 32]])
X
X = np.array([[1,1,1,1], [1,2,4,8], [1,3,5,7], [1,4,16,32], [1,5,9,13]])
X
X[1::2, 1:]
X = np.array([[1,1,1,1], [1,2,4,8], [1,3,5,7],[1,4,16,32],[1,5,9,13]])
X
X[X%4==0]
X = np.ones((5,4))
Y = np.zeros((5,4))
np.hstack([X, Y])
np.arange(1,6)
np.arange(1,6).reshape(5,1)
x = np.arange(5)
x
x[:, np.newaxis] + 1
10 * (x[:, np.newaxis] + 1)
10 * (x[:, np.newaxis] + 1) + x
np.random.seed(0)
X = np.random.randint(0, 101, (5, 3))  # random_integers was removed from NumPy; randint's upper bound is exclusive
X
X.max(axis=1)
plt.scatter(*np.meshgrid(range(5), range(6)));
A = np.array([[1,2], [3,4]])
B = np.array([[5,6,], [7,8]])
print(np.linalg.det(3*A), 3*np.linalg.det(A)) #no.2
print(np.linalg.det(A+B), np.linalg.det(A) + np.linalg.det(B)) #no.6
print(np.trace(A), np.trace(np.linalg.inv(A))) #no.9
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 문제2.
Step2: 문제3.
Step3: 문제4.
Step4: 문제 5.
Step5: 문제6.
Step6: 문제7.
Step7: 문제10
Step8: 선형대수
|
7,043
|
<ASSISTANT_TASK:>
Python Code:
def lempel_ziv_complexity(sequence):
Lempel-Ziv complexity for a binary sequence, in simple Python code.
sub_strings = set()
n = len(sequence)
ind = 0
inc = 1
# this while loop runs at most n times
while True:
if ind + inc > len(sequence):
break
# this can take some time, takes O(inc)
sub_str = sequence[ind : ind + inc]
# and this also, takes a O(log |size set|) in worst case
# max value for inc = n / size set at the end
# so worst case is that the set contains sub strings of the same size
# and the worst loop takes a O(n / |S| * log(|S|))
# ==> so if n/|S| is constant, it gives O(n log(n)) at the end
# but if n/|S| = O(n) then it gives O(n^2)
if sub_str in sub_strings:
inc += 1
else:
sub_strings.add(sub_str)
ind += inc
inc = 1
return len(sub_strings)
s = '1001111011000010'
lempel_ziv_complexity(s) # 1 / 0 / 01 / 11 / 10 / 110 / 00 / 010
%timeit lempel_ziv_complexity(s)
lempel_ziv_complexity('1010101010101010') # 1, 0, 10, 101, 01, 010, 1010
lempel_ziv_complexity('1001111011000010000010') # 1, 0, 01, 11, 10, 110, 00, 010, 000
lempel_ziv_complexity('100111101100001000001010') # 1, 0, 01, 11, 10, 110, 00, 010, 000, 0101
%timeit lempel_ziv_complexity('100111101100001000001010')
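For a standalone sanity check, the parser can be restated as a compact loop (assumed equivalent to lempel_ziv_complexity above): the example string should give the hand-parsed count of 8, while a constant string grows only as O(√n):

```python
def lzc(seq):
    # Compact restatement of the Lempel-Ziv parsing loop above
    subs, ind, inc = set(), 0, 1
    while ind + inc <= len(seq):
        w = seq[ind:ind + inc]
        if w in subs:
            inc += 1        # extend the candidate phrase
        else:
            subs.add(w)     # record a new phrase, restart after it
            ind += inc
            inc = 1
    return len(subs)

print(lzc('1001111011000010'))  # 8, matching the hand parse above
print(lzc('0' * 100))           # 13: a constant string has very low complexity
```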
import random
def random_string(size, alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ"):
return "".join(random.choices(alphabet, k=size))
def random_binary_sequence(size):
return random_string(size, alphabet="01")
random_string(100)
random_binary_sequence(100)
for (r, name) in zip(
[random_string, random_binary_sequence],
["random strings in A..Z", "random binary sequences"]
):
print("\nFor {}...".format(name))
for n in [10, 100, 1000, 10000, 100000]:
print(" of sizes {}, Lempel-Ziv complexity runs in:".format(n))
%timeit lempel_ziv_complexity(r(n))
%load_ext cython
%%cython
import cython
ctypedef unsigned int DTYPE_t
@cython.boundscheck(False) # turn off bounds-checking for entire function, quicker but less safe
def lempel_ziv_complexity_cython(str sequence not None):
Lempel-Ziv complexity for a string, in simple Cython code (C extension).
cdef set sub_strings = set()
cdef str sub_str = ""
cdef DTYPE_t n = len(sequence)
cdef DTYPE_t ind = 0
cdef DTYPE_t inc = 1
while True:
if ind + inc > len(sequence):
break
sub_str = sequence[ind : ind + inc]
if sub_str in sub_strings:
inc += 1
else:
sub_strings.add(sub_str)
ind += inc
inc = 1
return len(sub_strings)
s = '1001111011000010'
lempel_ziv_complexity_cython(s) # 1 / 0 / 01 / 11 / 10 / 110 / 00 / 010
%timeit lempel_ziv_complexity(s)
%timeit lempel_ziv_complexity_cython(s)
lempel_ziv_complexity_cython('1010101010101010') # 1, 0, 10, 101, 01, 010, 1010
lempel_ziv_complexity_cython('1001111011000010000010') # 1, 0, 01, 11, 10, 110, 00, 010, 000
lempel_ziv_complexity_cython('100111101100001000001010') # 1, 0, 01, 11, 10, 110, 00, 010, 000, 0101
for (r, name) in zip(
[random_string, random_binary_sequence],
["random strings in A..Z", "random binary sequences"]
):
print("\nFor {}...".format(name))
for n in [10, 100, 1000, 10000, 100000]:
print(" of sizes {}, Lempel-Ziv complexity in Cython runs in:".format(n))
%timeit lempel_ziv_complexity_cython(r(n))
from numba import jit
@jit
def lempel_ziv_complexity_numba(sequence : str) -> int:
Lempel-Ziv complexity for a sequence, in Python code using numba.jit() for automatic speedup (hopefully).
sub_strings = set()
    n: int = len(sequence)
    ind: int = 0
    inc: int = 1
while True:
if ind + inc > len(sequence):
break
        sub_str: str = sequence[ind : ind + inc]
if sub_str in sub_strings:
inc += 1
else:
sub_strings.add(sub_str)
ind += inc
inc = 1
return len(sub_strings)
s = '1001111011000010'
lempel_ziv_complexity_numba(s)  # 1 / 0 / 01 / 11 / 10 / 110 / 00 / 010
%timeit lempel_ziv_complexity_numba(s)
lempel_ziv_complexity_numba('1010101010101010') # 1, 0, 10, 101, 01, 010, 1010
lempel_ziv_complexity_numba('1001111011000010000010') # 1, 0, 01, 11, 10, 110, 00, 010, 000
lempel_ziv_complexity_numba('100111101100001000001010') # 1, 0, 01, 11, 10, 110, 00, 010, 000, 0101
%timeit lempel_ziv_complexity_numba('100111101100001000001010')
from numpy.random import binomial
def bernoulli(p, size=1):
    """One or more samples from a Bernoulli of probability p."""
return binomial(1, p, size)
bernoulli(0.5, 20)
''.join(str(i) for i in bernoulli(0.5, 20))
def random_binary_sequence(n, p=0.5):
    """Uniform random binary sequence of size n, with rate of 0/1 being p."""
return ''.join(str(i) for i in bernoulli(p, n))
random_binary_sequence(50)
random_binary_sequence(50, p=0.1)
random_binary_sequence(50, p=0.25)
random_binary_sequence(50, p=0.5)
random_binary_sequence(50, p=0.75)
random_binary_sequence(50, p=0.9)
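`random_binary_sequence` above relies on numpy's binomial sampler; an equivalent stdlib-only generator would look like the sketch below (the function name and `seed` parameter are introduced here for illustration):

```python
import random

def random_binary_sequence_stdlib(n, p=0.5, seed=None):
    # Draw n independent Bernoulli(p) bits and join them into a '0'/'1' string.
    rng = random.Random(seed)
    return ''.join('1' if rng.random() < p else '0' for _ in range(n))
```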
def tests_3_functions(n, p=0.5, debug=True):
s = random_binary_sequence(n, p=p)
c1 = lempel_ziv_complexity(s)
if debug:
print("Sequence s = {} ==> complexity C = {}".format(s, c1))
c2 = lempel_ziv_complexity_cython(s)
c3 = lempel_ziv_complexity_numba(s)
assert c1 == c2 == c3, "Error: the sequence {} gave different values of the Lempel-Ziv complexity from 3 functions ({}, {}, {})...".format(s, c1, c2, c3)
return c1
tests_3_functions(5)
tests_3_functions(20)
tests_3_functions(50)
tests_3_functions(500)
tests_3_functions(5000)
%timeit lempel_ziv_complexity('100111101100001000001010')
%timeit lempel_ziv_complexity_cython('100111101100001000001010')
%timeit lempel_ziv_complexity_numba('100111101100001000001010')
%timeit lempel_ziv_complexity('10011110110000100000101000100100101010010111111011001111111110101001010110101010')
%timeit lempel_ziv_complexity_cython('10011110110000100000101000100100101010010111111011001111111110101001010110101010')
%timeit lempel_ziv_complexity_numba('10011110110000100000101000100100101010010111111011001111111110101001010110101010')
%timeit tests_3_functions(10, debug=False)
%timeit tests_3_functions(20, debug=False)
%timeit tests_3_functions(40, debug=False)
%timeit tests_3_functions(80, debug=False)
%timeit tests_3_functions(160, debug=False)
%timeit tests_3_functions(320, debug=False)
def test_cython(n):
s = random_binary_sequence(n)
c = lempel_ziv_complexity_cython(s)
return c
%timeit test_cython(10)
%timeit test_cython(20)
%timeit test_cython(40)
%timeit test_cython(80)
%timeit test_cython(160)
%timeit test_cython(320)
%timeit test_cython(640)
%timeit test_cython(1280)
%timeit test_cython(2560)
%timeit test_cython(5120)
%timeit test_cython(10240)
%timeit test_cython(20480)
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set(context="notebook", style="darkgrid", palette="hls", font="sans-serif", font_scale=1.4)
import numpy as np
import timeit
sizes = np.array(np.trunc(np.logspace(1, 6, 30)), dtype=int)
times = np.array([
timeit.timeit(
stmt="lempel_ziv_complexity_cython(random_string({}))".format(n),
globals=globals(),
number=10,
)
for n in sizes
])
plt.figure(figsize=(15, 10))
plt.plot(sizes, times, 'o-')
plt.xlabel("Length $n$ of the binary sequence $S$")
plt.ylabel(r"Time in $\mu\;\mathrm{s}$")
plt.title("Time complexity of Lempel-Ziv complexity")
plt.show()
plt.figure(figsize=(15, 10))
plt.loglog(sizes, times, 'o-')
plt.xlabel("Length $n$ of the binary sequence $S$")
plt.ylabel(r"Time in $\mu\;\mathrm{s}$")
plt.title("Time complexity of Lempel-Ziv complexity, loglog scale")
plt.show()
%%time
%%script julia
"""Lempel-Ziv complexity for a sequence, in simple Julia code."""
function lempel_ziv_complexity(sequence)
sub_strings = Set()
n = length(sequence)
ind = 1
inc = 1
while true
if ind + inc > n
break
end
sub_str = sequence[ind : ind + inc]
if sub_str in sub_strings
inc += 1
else
push!(sub_strings, sub_str)
ind += inc
inc = 1
end
end
return length(sub_strings)
end
s = "1001111011000010"
lempel_ziv_complexity(s)  # 1 / 0 / 01 / 11 / 10 / 110 / 00 / 010
M = 1000;
N = 10000;
for _ in 1:M
s = join(rand(0:1, N));
lempel_ziv_complexity(s);
end
lempel_ziv_complexity(s)  # 1 / 0 / 01 / 11 / 10 / 110 / 00 / 010
%%time
%%pypy
def lempel_ziv_complexity(sequence):
    """Lempel-Ziv complexity for a binary sequence, in simple Python code."""
sub_strings = set()
n = len(sequence)
ind = 0
inc = 1
while True:
if ind + inc > len(sequence):
break
sub_str = sequence[ind : ind + inc]
if sub_str in sub_strings:
inc += 1
else:
sub_strings.add(sub_str)
ind += inc
inc = 1
return len(sub_strings)
s = "1001111011000010"
lempel_ziv_complexity(s) # 1 / 0 / 01 / 11 / 10 / 110 / 00 / 010
from random import random
M = 1000
N = 10000
for _ in range(M):
s = ''.join(str(int(random() < 0.5)) for _ in range(N))
lempel_ziv_complexity(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Table of Contents
Step2: Tests (1/2)
Step4: We can start to see that the time complexity of this function seems to grow linearly as the size grows.
Step5: Let's try it!
Step6: Now for a speed test.
Step8: $\implies$ Yay! It seems faster indeed! But only about 2× faster...
Step9: Let's try it!
Step11: $\implies$ Well... It doesn't seem much faster than the naive Python code.
Step13: That's probably not optimal, but we can generate a string with
Step14: And so, this function can test to check that the three implementations (naive, Cython-powered, Numba-powered) always give the same result.
Step15: Benchmarks
Step16: Let check the time used by all the three functions, for longer and longer sequences
Step17: Complexity ?
Step19: It is linear in $\log\log$ scale, so the runtime follows a power law; the slope suggests close-to-linear time complexity.
Step21: And to compare it fairly, let us use Pypy for comparison.
|
7,044
|
<ASSISTANT_TASK:>
Python Code:
from kubernetes import client, config
from kubernetes.client.rest import ApiException
config.load_kube_config()
api_instance = client.CoreV1Api()
cmap = client.V1ConfigMap()
cmap.metadata = client.V1ObjectMeta(name="special-config")
cmap.data = {}
cmap.data["special.how"] = "very"
cmap.data["special.type"] = "charm"
api_instance.create_namespaced_config_map(namespace="default", body=cmap)
pod = client.V1Pod()
spec = client.V1PodSpec()
pod.metadata = client.V1ObjectMeta(name="dapi-test-pod")
container = client.V1Container()
container.name = "test-container"
container.image = "gcr.io/google_containers/busybox"
container.command = ["/bin/sh", "-c", "env"]
container.env = [client.V1EnvVar(name="SPECIAL_LEVEL_KEY"), client.V1EnvVar(name="SPECIAL_TYPE_KEY")]
container.env[0].value_from = client.V1EnvVarSource()
container.env[0].value_from.config_map_key_ref = client.V1ConfigMapKeySelector(name="special-config", key="special.how")
container.env[1].value_from = client.V1EnvVarSource()
container.env[1].value_from.config_map_key_ref = client.V1ConfigMapKeySelector(name="special-config", key="special.type")
spec.restart_policy = "Never"
spec.containers = [container]
pod.spec = spec
api_instance.create_namespaced_pod(namespace="default",body=pod)
log = ""
try:
log = api_instance.read_namespaced_pod_log(name="dapi-test-pod", namespace="default")
except ApiException as e:
if str(e).find("ContainerCreating") != -1:
print("Creating Pod container.\nRe-run current cell.")
else:
print("Exception when calling CoreV1Api->read_namespaced_pod_log: %s\n" % e)
for line in log.split("\n"):
if line.startswith("SPECIAL"):
print(line)
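The loop above just filters the pod log for the SPECIAL_* variables; that step can be isolated into a small helper testable without a cluster (the name is illustrative):

```python
def extract_env_lines(log, prefix="SPECIAL"):
    # Keep only the lines of a pod log that start with the given prefix.
    return [line for line in log.split("\n") if line.startswith(prefix)]
```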
api_instance.delete_namespaced_config_map(name="special-config", namespace="default", body=cmap)
api_instance.delete_namespaced_pod(name="dapi-test-pod", namespace="default", body=client.V1DeleteOptions())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load config from default location
Step2: Create API endpoint instance and API resource instances
Step3: Create key value pair data for the ConfigMap
Step4: Create ConfigMap
Step5: Create API endpoint instance and API resource instances for test Pod
Step6: Initialize test Pod container
Step7: Define Pod environment variables with data from ConfigMaps
Step8: Create Pod
Step9: View ConfigMap data from Pod log
Step10: Delete ConfigMap
Step11: Delete Pod
|
7,045
|
<ASSISTANT_TASK:>
Python Code:
from keras.applications import VGG16
from keras.applications.imagenet_utils import preprocess_input, decode_predictions
import os
# -- Jupyter/IPython way to see documentation
# please focus on parameters (e.g. include top)
VGG16??
vgg16 = VGG16(include_top=True, weights='imagenet')
IMAGENET_FOLDER = 'imgs/imagenet' #in the repo
!ls imgs/imagenet
from keras.preprocessing import image
import numpy as np
img_path = os.path.join(IMAGENET_FOLDER, 'strawberry_1157.jpeg')
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('Input image shape:', x.shape)
preds = vgg16.predict(x)
print('Predicted:', decode_predictions(preds))
img_path = os.path.join(IMAGENET_FOLDER, 'apricot_696.jpeg')
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('Input image shape:', x.shape)
preds = vgg16.predict(x)
print('Predicted:', decode_predictions(preds))
img_path = os.path.join(IMAGENET_FOLDER, 'apricot_565.jpeg')
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('Input image shape:', x.shape)
preds = vgg16.predict(x)
print('Predicted:', decode_predictions(preds))
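The three prediction cells above repeat the same load → array → batch → preprocess → predict → decode pipeline. A sketch of a helper that factors it out, with each stage passed in as a callable so it carries no hard Keras dependency (all names here are illustrative):

```python
def classify_image(img_path, load_img, to_array, add_batch_dim, preprocess, predict, decode):
    # Same sequence of steps as the cells above, with each stage injected.
    img = load_img(img_path)
    x = to_array(img)
    x = add_batch_dim(x)
    x = preprocess(x)
    preds = predict(x)
    return decode(preds)
```

With the objects defined above it could be called as `classify_image(img_path, lambda p: image.load_img(p, target_size=(224, 224)), image.img_to_array, lambda a: np.expand_dims(a, axis=0), preprocess_input, vgg16.predict, decode_predictions)`.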
# from keras.applications import VGG19
## from keras.applications import ...
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you're wondering where this HDF5 files with weights is stored, please take a look at ~/.keras/models/
Step2: <img src="imgs/imagenet/strawberry_1157.jpeg" >
Step3: <img src="imgs/imagenet/apricot_696.jpeg" >
Step4: <img src="imgs/imagenet/apricot_565.jpeg" >
Step5: Hands On
Step6: Residual Networks
|
7,046
|
<ASSISTANT_TASK:>
Python Code:
from fig_utils import *
import matplotlib.pyplot as plt
import time
%matplotlib inline
# Plot parameters
country = 'nigeria'
country_path = '../data/LSMS/nigeria/'
dimension = None
k = 5
k_inner = 5
points = 10
alpha_low = 1
alpha_high = 5
margin = 0.25
# Plot single panel
t0 = time.time()
X, y, y_hat, r_squareds_test = predict_consumption(country, country_path,
dimension, k, k_inner, points, alpha_low,
alpha_high, margin)
t1 = time.time()
print('Finished in {} seconds'.format(t1 - t0))
# Plot parameters
country = 'tanzania'
country_path = '../data/LSMS/tanzania/'
dimension = None
k = 5
k_inner = 5
points = 10
alpha_low = 1
alpha_high = 5
margin = 0.25
# Plot single panel
t0 = time.time()
X, y, y_hat, r_squareds_test = predict_consumption(country, country_path,
dimension, k, k_inner, points, alpha_low,
alpha_high, margin)
t1 = time.time()
print('Finished in {} seconds'.format(t1 - t0))
# Plot parameters
country = 'uganda'
country_path = '../data/LSMS/uganda/'
dimension = None
k = 5
k_inner = 5
points = 10
alpha_low = 1
alpha_high = 5
margin = 0.25
# Plot single panel
t0 = time.time()
X, y, y_hat, r_squareds_test = predict_consumption(country, country_path,
dimension, k, k_inner, points, alpha_low,
alpha_high, margin)
t1 = time.time()
print('Finished in {} seconds'.format(t1 - t0))
# Plot parameters
country = 'malawi'
country_path = '../data/LSMS/malawi/'
dimension = None
k = 5
k_inner = 5
points = 10
alpha_low = 1
alpha_high = 5
margin = 0.25
# Plot single panel
t0 = time.time()
X, y, y_hat, r_squareds_test = predict_consumption(country, country_path,
dimension, k, k_inner, points, alpha_low,
alpha_high, margin)
t1 = time.time()
print('Finished in {} seconds'.format(t1 - t0))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Predicting consumption expeditures
Step2: Panel B
Step3: Panel C
Step4: Panel D
|
7,047
|
<ASSISTANT_TASK:>
Python Code:
%run Regexp-2-NFA.ipynb
%run NFA-2-DFA.ipynb
def cartesian_product(A, B):
return { (x, y) for x in A
for y in B
}
cartesian_product({1, 2}, {'a', 'b'})
def fsm_complement(F1, F2):
States1, Σ, 𝛿1, q1, A1 = F1
States2, _, 𝛿2, q2, A2 = F2
States = cartesian_product(States1, States2)
𝛿 = {}
for p1, p2 in States:
for c in Σ:
𝛿[(p1, p2), c] = (𝛿1[p1, c], 𝛿2[p2, c])
return States, Σ, 𝛿, (q1, q2), cartesian_product(A1, States2 - A2)
def regexp2DFA(r, Σ):
converter = RegExp2NFA(Σ)
nfa = converter.toNFA(r)
dfa = nfa2dfa(nfa)
return dfa
def is_empty(F):
States, Σ, δ, q0, Accepting = F
Reachable = { q0 }
while True:
NewFound = { δ[q, c] for q in Reachable
for c in Σ
}
if NewFound <= Reachable:
break
Reachable |= NewFound
return Reachable & Accepting == set()
def regExpEquiv(r1, r2, Σ):
F1 = regexp2DFA(r1, Σ)
F2 = regexp2DFA(r2, Σ)
r1_minus_r2 = fsm_complement(F1, F2)
r2_minus_r1 = fsm_complement(F2, F1)
return is_empty(r1_minus_r2) and is_empty(r2_minus_r1)
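`regExpEquiv` depends on the converter notebooks loaded at the top, but the product construction itself can be exercised in isolation with two tiny hand-built DFAs. The sketch below duplicates the `cartesian_product`, `fsm_complement` and `is_empty` definitions above with ASCII names so it is self-contained:

```python
def cartesian_product(A, B):
    return {(x, y) for x in A for y in B}

def fsm_complement(F1, F2):
    # Product automaton accepting L(F1) \ L(F2), as in the cell above.
    States1, Sigma, d1, q1, A1 = F1
    States2, _, d2, q2, A2 = F2
    States = cartesian_product(States1, States2)
    d = {}
    for p1, p2 in States:
        for c in Sigma:
            d[(p1, p2), c] = (d1[p1, c], d2[p2, c])
    return States, Sigma, d, (q1, q2), cartesian_product(A1, States2 - A2)

def is_empty(F):
    # Fixed-point reachability: the language is empty iff no accepting state is reachable.
    States, Sigma, d, q0, Accepting = F
    Reachable = {q0}
    while True:
        NewFound = {d[q, c] for q in Reachable for c in Sigma}
        if NewFound <= Reachable:
            break
        Reachable |= NewFound
    return Reachable & Accepting == set()

Sigma = {'a', 'b'}
toggle = {(0, 'a'): 1, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 0}
even = ({0, 1}, Sigma, toggle, 0, {0})    # accepts strings of even length
odd = ({0, 1}, Sigma, toggle, 0, {1})     # accepts strings of odd length
even2 = ({'p', 'q'}, Sigma,
         {('p', 'a'): 'q', ('p', 'b'): 'q', ('q', 'a'): 'p', ('q', 'b'): 'p'},
         'p', {'p'})                      # same language as `even`, different state names
```

`even` and `even2` are equivalent, so both set differences are empty; `even` and `odd` differ, so the difference is non-empty.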
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: NFA-2-DFA.ipynb contains the function nfa2dfa that converts a non-deterministic
Step2: Given two sets A and B, the function cartesian_product(A, B) computes the
Step3: Given to deterministic FSMs F1 and F2, the expression fsm_complement(F1, F2) computes a deterministic
Step4: Given a regular expression $r$ and an alphabet $\Sigma$, the function $\texttt{regexp2DFA}(r, \Sigma)$
Step5: Given a deterministic FSM $F$ the function
Step6: The function regExpEquiv takes three arguments
|
7,048
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model_name = type(model).__name__
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
7,049
|
<ASSISTANT_TASK:>
Python Code:
# Load pickled data
import pickle
import cv2 # for grayscale and normalize
# TODO: Fill this in based on where you saved the training and testing data
training_file ='traffic-signs-data/train.p'
validation_file='traffic-signs-data/valid.p'
testing_file = 'traffic-signs-data/test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_trainLd, y_trainLd = train['features'], train['labels']
#X_trainLd, y_trainLd = X_trainLd[:20000], y_trainLd[:20000]
X_validLd, y_validLd = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
print("Xtrain shape : "+str(X_trainLd.shape))
print("ytrain shape : "+str(y_trainLd.shape))
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
import numpy as np
# TODO: Number of training examples
n_train = X_trainLd.shape[0]
# TODO: Number of validation examples
n_validation = X_validLd.shape[0]
# TODO: Number of testing examples.
n_test = X_test.shape[0]
# TODO: What's the shape of a traffic sign image?
image_shape = X_trainLd.shape[1:4]
# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(np.unique(y_trainLd))
print("Xtrain Shape = " ,X_trainLd.shape)
print("ytrain Shape = " ,y_trainLd.shape)
print("Number of training examples =", n_train)
print("Number of validation examples =", n_validation)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
import random
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline
#-------------------------------------------
#normalise
def normit(img):
image = img.astype(float)
temp = np.zeros(shape=img.shape)
i=0
for pix in image:
b=0
for pix2 in pix:
check = float((pix2-128)/128)
pix[b] = check
#print("stuff" +str(check) +" norm "+ str(pix[b])+" i "+str(b)+" "+ str(pix2) +" "+str(check))
b=b+1
temp[i] = pix
i=i+1
return temp
def normit3(img):
imagen = img.astype(float)
temp = np.zeros(shape=img.shape)
#print("imagen 0 " + str(imagen[0][0]))
i=0
for pix in imagen:
d=0
for pix2 in pix:
for b in range(0,3):
check = float((pix2[b]-128)/128)
pix2[b] = check
temp[i][d] = pix2
d=d+1
i=i+1
return temp
def normit4(img1,imagegray):
imagen = img1.astype(float)
imagen2 = imagegray.astype(float)
tempy = np.zeros((img1.shape[0],img1.shape[1],4))
#print("shapey : ",tempy.shape)
temp= tempy.reshape(32,32,4)
#print("imagen 0 " + str(imagen[0][0]))
i=0
for pix in imagen:
d=0
for pix2 in pix:
for b in range(0,3):
check = float((pix2[b]-128)/128)
pix2[b] = check
#pix3 = np.concatenate(pix2,((imagen2[i][d]-128)/128))
temp[i][d] =[pix2[0],pix2[1], pix2[2],((imagen2[i][d]-128)/128)]
#if i==20 and d==20:
#print("here : " ,temp[i][d])
d=d+1 #g
i=i+1
#print("here : " ,temp[20][20])
return temp
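The per-pixel arithmetic shared by `normit`, `normit3` and `normit4` is the map p ↦ (p − 128)/128, taking 0..255 intensities into roughly [−1, 1); a plain-Python sketch of just that step:

```python
def normalize_pixels(img):
    # Map 0..255 intensities to [-1, 127/128] via (p - 128) / 128, as in normit above.
    return [[(p - 128) / 128.0 for p in row] for row in img]
```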
#-------------------------------------------
index = random.randint(0, len(X_trainLd))
imagetest = X_trainLd[109] #squeeze : Remove single-dimensional entries from the shape of an array.
print(imagetest.shape)
#print(image[0])
np.set_printoptions(threshold=np.inf)
#print(image[0])
imagegray = cv2.cvtColor(imagetest,cv2.COLOR_RGB2GRAY)
imagegr = imagegray
#-------------------------------------------
temp2 = normit4(imagetest,imagegray)
print("here : " ,temp2[20][20])
#-------------------------------------------
#imagegnormnew = temp2.reshape(32,32,1)
#imagegnorm = temp.reshape(list(imagef.shape[:-1]) + [1]) #method list() takes sequence types and converts them to lists. This is used to convert a given tuple into list.
plt.figure(figsize=(1,1))
#plt.imshow(imageld.squeeze())
plt.imshow(temp2.squeeze(),cmap='gray')
print("gray ", imagegray[20][20], ((imagegray[20][20]-128)/128), "imagetest : ",imagetest[20][20], "temp2 ",temp2[20][20])
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.
import cv2
from sklearn.utils import shuffle
#print("X_Train 0" +str(X_train[0]))
#gre
## xtrain
grey_X_train = np.zeros(shape=[X_trainLd.shape[0],X_trainLd.shape[1],X_trainLd.shape[2]])
norm_X_train = np.zeros(shape=[X_trainLd.shape[0],X_trainLd.shape[1],X_trainLd.shape[2],4])
norm_X_train = norm_X_train.astype(float)
X_train_Pre, y_train_Pre = shuffle(X_trainLd, y_trainLd)
X_valid_Pre, y_valid_Pre = X_validLd, y_validLd
i=0
for img in X_train_Pre:
img2 = cv2.cvtColor(img,cv2.COLOR_RGB2GRAY)
grey_X_train[i] = img2
i=i+1
i=0
for p in grey_X_train:
#pnor= cv2.normalize(p, dst =image_shape, alpha=0, beta=25, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8UC1)
#p2 = pnor.reshape(list(pnor.shape[:-1]) + [1])
temp2 = normit4(X_train_Pre[i], p)
#temp = temp2.reshape(32,32,1)
#print(p.shape)
#print("test " + str(test.shape) )
norm_X_train[i] = temp2
i=i+1
if i%1000 ==0:
print(i)
##validate
grey_X_valid = np.zeros(shape=[X_validLd.shape[0],X_validLd.shape[1],X_validLd.shape[2]])
norm_X_valid = np.zeros(shape=[X_validLd.shape[0],X_validLd.shape[1],X_validLd.shape[2],4])
norm_X_valid=norm_X_valid.astype(float)
i=0
for imgv in X_valid_Pre:
imgv2 = cv2.cvtColor(imgv,cv2.COLOR_RGB2GRAY)
grey_X_valid[i] = imgv2
i=i+1
i=0
for v in grey_X_valid:
    tv = normit4(X_valid_Pre[i], v)  # pair each grayscale image with its own RGB validation image
#tempv = tv.reshape(32,32,1)
norm_X_valid[i] = tv
i=i+1
if i%1000 ==0:
print(i)
print("after normalise")
### Define your architecture here.
### Feel free to use as many code cells as needed.
import tensorflow as tf
EPOCHS = 15
BATCH_SIZE = 128
#X_train=norm_X_train,
#X_valid=norm_X_valid
#X_train,y_train=X_trainLd, y_trainLd
#X_valid,y_valid=X_validLd, y_validLd
X_train,y_train=norm_X_train, y_train_Pre
X_valid,y_valid=norm_X_valid, y_valid_Pre
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
    sigma = 0.05  # very good results with 0.05 for the standard LeNet without preprocessing
    # SOLUTION: Layer 1: Convolutional. Input = 32x32x4. Output = 28x28x6.
    conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 4, 6), mean = mu, stddev = sigma))  # input depth is 4 (RGB + grayscale)
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# SOLUTION: Activation.
conv1 = tf.nn.relu(conv1)
#conv1 = tf.nn.relu(conv1) #SMcM add an extra relu
# SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# SOLUTION: Activation.
conv2 = tf.nn.relu(conv2)
# SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# SOLUTION: Activation.
fc1 = tf.nn.relu(fc1)
# SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# SOLUTION: Activation.
fc2 = tf.nn.relu(fc2)
# SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 43.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(43))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
print("model")
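The layer shapes quoted in the comments (32 → 28 → 14 → 10 → 5) follow the VALID-padding formula out = ⌊(in − window)/stride⌋ + 1; a tiny helper (name is illustrative) to verify them:

```python
def valid_output_size(in_size, window, stride=1):
    # Output spatial size for VALID padding: floor((in - window) / stride) + 1.
    return (in_size - window) // stride + 1
```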
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
#Features and Labels
x = tf.placeholder(tf.float32, (None, 32, 32, 4))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 43)
#Training Pipeline
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
#Model Evaluation
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
#Train the Model
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_valid, y_valid)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenet')
print("Model saved")
#evaluate the model
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
# with size, normalization, ect if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
if activation_min != -1 & activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1
Step2: Include an exploratory visualization of the dataset
Step3: Step 2
Step4: Model Architecture
Step5: Train, Validate and Test the Model
Step6: Evaluate the Model
Step7: Step 3
Step8: Predict the Sign Type for Each Image
Step9: Analyze Performance
Step10: Output Top 5 Softmax Probabilities For Each Image Found on the Web
Step11: Project Writeup
|
7,050
|
<ASSISTANT_TASK:>
Python Code:
import sklearn.model_selection, numpy, astropy.io.ascii as asc
table = asc.read('/Users/alger/data/Crowdastro/one-table-to-rule-them-all.tbl')
# clean = numpy.array(asc.read('clean-atlas.tbl')['Clean']).astype(bool)
# clean.shape
primary_component_to_norris_swire = {}
primary_component_to_rgz_swire = {}
# Only includes primary components (though labels may come from any component).
primary_components_with_both_labels = set()
for row in table:# table[clean]
primary_component_id = row['Primary Component ID (RGZ)']
if not primary_component_id:
# If something does not have a primary component ID, it cannot have an RGZ label, so we don't care about it!
continue
norris_swire = row['Source SWIRE (Norris)']
rgz_swire = row['Source SWIRE Name (RGZ)']
if not norris_swire or not rgz_swire or not norris_swire.startswith('SWIRE'):
continue
primary_component_to_norris_swire[primary_component_id] = norris_swire
primary_component_to_rgz_swire[primary_component_id] = rgz_swire
primary_components_with_both_labels.add(primary_component_id)
print(len(primary_components_with_both_labels))
keys = numpy.array([r['Key'] for r in table
if r['Component Zooniverse ID (RGZ)'] and
r['Component ID (Franzen)'] in primary_components_with_both_labels], dtype=int)
print(', '.join(table[keys]['Component Zooniverse ID (RGZ)']))
kf = sklearn.model_selection.KFold(n_splits=5, shuffle=True)
# List of lists of one-table keys.
sets = [[keys[s] for s in f] for f in kf.split(keys)]
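sklearn's shuffled `KFold` above partitions the keys into five disjoint test folds; the same kind of partition can be sketched with the stdlib alone (an illustrative helper, not sklearn's API):

```python
import random

def kfold_indices(n, k, seed=0):
    # Shuffle 0..n-1, deal the indices round-robin into k disjoint test folds,
    # and pair each fold with the complementary training indices.
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    return [(sorted(set(idx) - set(fold)), sorted(fold)) for fold in folds]
```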
import h5py
crowdastro_f = h5py.File('/Users/alger/data/Crowdastro/crowdastro-swire.h5', 'r')
import scipy.spatial
swire_coords = crowdastro_f['/swire/cdfs/numeric'][:, :2]
swire_tree = scipy.spatial.KDTree(swire_coords)
swire_names = crowdastro_f['/swire/cdfs/string'].value
name_to_crowdastro = {j.decode('ascii'):i for i, j in enumerate(swire_names)}
swire_sets = []
norris_label_sets = []
rgz_label_sets = []
for split in sets:
swire_split = []
for ts in split:
# ts is a list of keys.
locs = numpy.array([(i[0], i[1]) for i in table['Component RA (Franzen)', 'Component DEC (Franzen)'][ts]])
nearby_swire_indices = sorted(set(numpy.concatenate(swire_tree.query_ball_point(locs, 1 / 60)))) # 1 arcmin
swire_split.append(nearby_swire_indices)
swire_sets.append(swire_split)
training_f = h5py.File('/Users/alger/data/Crowdastro/training-swire.h5', 'r')
swire_distances = training_f['raw_features'][:, 8]
swire_images = training_f['raw_features'][:, 9:]
assert swire_images.shape[1] == 32 * 32
# AstroPy can't deal with a large file like the SWIRE file, so we have to do this line-by-line...
SPITZER_SENSITIVITIES = {
36: 7.3,
45: 9.7,
58: 27.5,
80: 32.5,
24: 450,
}
headers = []
swire_features = numpy.zeros((len(swire_coords),
6 + # Magnitude differences
1 + # S_3.6
2 + # Stellarities
1 + # Distances
32 * 32 # Image
))
for row_num, line in enumerate(open('/Users/alger/data/SWIRE/SWIRE3_CDFS_cat_IRAC24_21Dec05.tbl')):
if line.startswith('\\'):
continue
if line.startswith('|') and not headers:
headers.extend(map(str.strip, line.split('|')[1:-1]))
lengths = list(map(len, headers))
continue
if line.startswith('|'):
continue
line = dict(zip(headers, line.split()))
name = line['object']
if name not in name_to_crowdastro:
continue # Skip non-crowdastro SWIRE.
crowdastro_index = name_to_crowdastro[name]
fluxes = []
for s in [36, 45, 58, 80]:
aps = []
for ap in range(1, 6):
v = line['flux_ap{}_{}'.format(ap, s)]
try:
v = float(v)
if v != -99.0:
aps.append(v)
except:
pass
if aps:
fluxes.append(numpy.mean(aps))
else:
fluxes.append(SPITZER_SENSITIVITIES[s]) # 5 sigma is an upper-bound for flux in each band.
mags = [numpy.log10(s) for s in fluxes]
mag_diffs = [mags[0] - mags[1], mags[0] - mags[2], mags[0] - mags[3],
mags[1] - mags[2], mags[1] - mags[3],
mags[2] - mags[3]]
# Guaranteed a stellarity in the first two bands; not so much in the others.
stellarities_ = [line['stell_{}'.format(s)] for s in [36, 45]]
stellarities = []
for s in stellarities_:
if s != 'null' and s != '-9.00':
stellarities.append(float(s))
else:
stellarities.append(float('nan'))
# We will have nan stellarities - but we will replace those with the mean later.
features = numpy.concatenate([
mag_diffs,
mags[:1],
stellarities,
[swire_distances[crowdastro_index]],
swire_images[crowdastro_index],
])
swire_features[crowdastro_index] = features
# Set nans to the mean.
for feature in range(swire_features.shape[1]):
nan = numpy.isnan(swire_features[:, feature])
swire_features[:, feature][nan] = swire_features[:, feature][~nan].mean()
import seaborn, matplotlib.pyplot as plt
plt.figure(figsize=(15, 8))
feature_names = ['$[3.6] - [4.5]$', '$[3.6] - [5.8]$', '$[3.6] - [8.0]$',
'$[4.5] - [5.8]$', '$[4.5] - [8.0]$',
                 '$[5.8] - [8.0]$', r'$\log_{10} S_{3.6}$',
'Stellarity (3.6)', 'Stellarity (4.5)', 'Distance']
for feature in range(10):
plt.subplot(2, 5, feature + 1)
plt.title(feature_names[feature])
seaborn.distplot(swire_features[:, feature])
plt.subplots_adjust(hspace=0.4)
plt.show()
# Normalise and centre the features.
swire_features -= swire_features.mean(axis=0)
swire_features /= swire_features.std(axis=0)
plt.figure(figsize=(15, 8))
feature_names = ['$[3.6] - [4.5]$', '$[3.6] - [5.8]$', '$[3.6] - [8.0]$',
'$[4.5] - [5.8]$', '$[4.5] - [8.0]$',
                 '$[5.8] - [8.0]$', r'$\log_{10} S_{3.6}$',
'Stellarity (3.6)', 'Stellarity (4.5)', 'Distance']
for feature in range(10):
plt.subplot(2, 5, feature + 1)
plt.title(feature_names[feature])
seaborn.distplot(swire_features[:, feature])
plt.subplots_adjust(hspace=0.4)
plt.show()
swire_norris_labels = numpy.zeros((len(swire_coords),), dtype=bool)
swire_rgz_labels = numpy.zeros((len(swire_coords),), dtype=bool)
import astropy.coordinates, re
for row in table:
n = row['Source SWIRE (Norris)']
if n and n.startswith('SWIRE'):
if n in name_to_crowdastro:
index = name_to_crowdastro[n]
swire_norris_labels[index] = True
else:
m = re.match(r'SWIRE3_J(\d\d)(\d\d)(\d\d\.\d\d)(-\d\d)(\d\d)(\d\d\.\d)', n)
ra, dec = ' '.join(m.groups()[:3]), ' '.join(m.groups()[3:])
sc = astropy.coordinates.SkyCoord(ra=ra, dec=dec, unit=('hourangle', 'deg'))
coord = (sc.ra.deg, sc.dec.deg)
dist, index = swire_tree.query(coord)
if dist < 5 / 60 / 60:
swire_norris_labels[index] = True
n = row['Source SWIRE Name (RGZ)']
if n:
index = name_to_crowdastro[n]
swire_rgz_labels[index] = True
swire_norris_labels.sum(), swire_rgz_labels.sum()
import sklearn.linear_model, crowdastro.crowd.util, itertools
bas = {'RGZ': {'RGZ': [], 'Norris': []}, 'Norris': {'RGZ': [], 'Norris': []}}
for (train_name, train_labels), (test_name, test_labels) in itertools.product(
[('RGZ', swire_rgz_labels), ('Norris', swire_norris_labels)], repeat=2):
for train, test in swire_sets:
X_train = swire_features[train, :]
X_test = swire_features[test, :]
T_train = train_labels[train]
T_test = test_labels[test]
lr = sklearn.linear_model.LogisticRegression(class_weight='balanced', penalty='l1')
lr.fit(X_train, T_train)
preds = lr.predict(X_test)
ba = crowdastro.crowd.util.balanced_accuracy(T_test, preds)
bas[train_name][test_name].append(ba)
for tr in ['RGZ', 'Norris']:
for te in ['RGZ', 'Norris']:
print('LR({:^6}) vs {:^6}: ({:.02f} +- {:.02f})%'.format(
tr, te,
numpy.mean(bas[tr][te]) * 100,
numpy.std(bas[tr][te]) * 100))
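The scoring above calls `crowdastro.crowd.util.balanced_accuracy`; as a hedge, here is a minimal sketch of the standard definition (the mean of per-class recall), assuming the crowdastro helper follows it:

```python
# Minimal sketch of balanced accuracy: the mean of sensitivity and
# specificity. This assumes crowdastro's helper follows the standard
# definition -- check crowdastro.crowd.util if exact agreement matters.
def balanced_accuracy_sketch(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    pos = sum(1 for t in y_true if t)
    neg = len(y_true) - pos
    return 0.5 * (tp / pos + tn / neg)

print(balanced_accuracy_sketch([1, 1, 0, 0], [1, 0, 0, 0]))  # 0.75
```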
import astropy.table
def plot_bas(bas):
dx = []
dy = []
dh = []
for train in bas:
for test in bas[train]:
for i in bas[train][test]:
dx.append(train)
dh.append(test)
dy.append(i)
data = astropy.table.Table(data=[dx, dy, dh], names=('train', 'BA', 'test')).to_pandas()
ax = plt.figure(figsize=(15, 7))
vp = seaborn.violinplot(
scale='width',
orient='v',
x='train',
y='BA',
hue='test',
data=data,
split=True)
plt.show()
plot_bas(bas)
import sklearn.ensemble
bas_rf = {'RGZ': {'RGZ': [], 'Norris': []}, 'Norris': {'RGZ': [], 'Norris': []}}
for (train_name, train_labels), (test_name, test_labels) in itertools.product(
[('RGZ', swire_rgz_labels), ('Norris', swire_norris_labels)], repeat=2):
for train, test in swire_sets:
X_train = swire_features[train, :]
X_test = swire_features[test, :]
T_train = train_labels[train]
T_test = test_labels[test]
rf = sklearn.ensemble.RandomForestClassifier(class_weight='balanced', criterion='entropy',
min_samples_leaf=40)
rf.fit(X_train, T_train)
preds = rf.predict(X_test)
ba = crowdastro.crowd.util.balanced_accuracy(T_test, preds)
bas_rf[train_name][test_name].append(ba)
for tr in ['RGZ', 'Norris']:
for te in ['RGZ', 'Norris']:
print('RF({:^6}) vs {:^6}: ({:.02f} +- {:.02f})%'.format(
tr, te,
numpy.mean(bas_rf[tr][te]) * 100,
numpy.std(bas_rf[tr][te]) * 100))
plot_bas(bas_rf)
def powerset(iterable):
    "powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)"
s = list(iterable)
return itertools.chain.from_iterable(itertools.combinations(s, r) for r in range(len(s)+1))
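The recipe enumerates subsets in order of increasing size; a self-contained check (the function is restated so this runs standalone):

```python
import itertools

# Same recipe as above, restated so it runs on its own.
def powerset(iterable):
    s = list(iterable)
    return itertools.chain.from_iterable(
        itertools.combinations(s, r) for r in range(len(s) + 1))

print(list(powerset([1, 2, 3])))
# [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]
```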
featuresets = {
'colour': swire_features[:, :2],
'stellarity': swire_features[:, 2:4],
'distance': swire_features[:, 4:5],
'image': swire_features[:, 5:],
}
import collections
bas_ablation = collections.defaultdict(list) # Maps features -> balanced accuracies.
for i in powerset(['distance', 'colour', 'stellarity', 'image']):
if not i:
continue
print('Testing features:', ', '.join(i))
for train, test in swire_sets:
this_featureset = numpy.concatenate([featuresets[j] for j in i], axis=1)
X_train = this_featureset[train, :]
X_test = this_featureset[test, :]
T_train = swire_norris_labels[train]
T_test = swire_norris_labels[test]
lr = sklearn.linear_model.LogisticRegression(class_weight='balanced', penalty='l1')
lr.fit(X_train, T_train)
preds = lr.predict(X_test)
ba = crowdastro.crowd.util.balanced_accuracy(T_test, preds)
bas_ablation[i].append(ba)
# for i in powerset(['distance', 'colour', 'stellarity', 'image']):
# if not i:
# continue
# print('{:<40}: ({:.02f} +- {:.02f})%'.format(
# ', '.join(i),
# numpy.mean(bas_ablation[i]) * 100,
# numpy.std(bas_ablation[i]) * 100))
ax = plt.figure(figsize=(15, 7))
vp = seaborn.violinplot(
scale='width',
orient='v',
data=[bas_ablation[i] for i in sorted(bas_ablation.keys())])
vp.set_xticklabels(sorted([', '.join(k) for k in bas_ablation.keys()]), rotation='vertical')
plt.show()
import h5py
# Convert SWIRE/ATLAS sets into boolean arrays.
n_swire = len(swire_coords)
n_atlas = crowdastro_f['/atlas/cdfs/numeric'].shape[0]
swire_sets_train_bool = numpy.zeros((5, n_swire), dtype=bool)
swire_sets_test_bool = numpy.zeros((5, n_swire), dtype=bool)
for k in range(5):
swire_sets_train_bool[k, swire_sets[k][0]] = True
swire_sets_test_bool[k, swire_sets[k][1]] = True
with h5py.File('/Users/alger/data/Crowdastro/swire_11_05_17.h5', 'w') as f:
f.create_dataset('features', data=swire_features)
f.create_dataset('rgz_labels', data=swire_rgz_labels)
f.create_dataset('norris_labels', data=swire_norris_labels)
f.create_dataset('clean_swire_sets_train', data=swire_sets_train_bool)
f.create_dataset('clean_swire_sets_test', data=swire_sets_test_bool)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate training/testing subsets
Step2: Associate SWIRE objects with each set
Step3: Generate Features for Each SWIRE Object
Step4: Generate labels for each SWIRE object
Step5: Experiment
Step6: Experiment
Step7: Experiment
Step8: Export
|
7,051
|
<ASSISTANT_TASK:>
Python Code:
# Initial imports and notebook setup, click arrow to show
from copy import copy
import matplotlib.pyplot as plt
import numpy as np
from HARK.ConsumptionSaving.ConsIndShockModel import PerfForesightConsumerType
from HARK.utilities import plot_funcs
mystr = lambda number: "{:.4f}".format(number)
PerfForesightDict = {
# Parameters actually used in the solution method
"CRRA": 2.0, # Coefficient of relative risk aversion
"Rfree": 1.03, # Interest factor on assets
"DiscFac": 0.96, # Default intertemporal discount factor
"LivPrb": [0.98], # Survival probability
"PermGroFac": [1.01], # Permanent income growth factor
"BoroCnstArt": None, # Artificial borrowing constraint
"aXtraCount": 200, # Maximum number of gridpoints in consumption function
# Parameters that characterize the nature of time
"T_cycle": 1, # Number of periods in the cycle for this agent type
"cycles": 0, # Number of times the cycle occurs (0 --> infinitely repeated)
}
PFexample = PerfForesightConsumerType(**PerfForesightDict)
PFexample.cycles = 0
PFexample.solve()
print(PFexample.solution)
print(vars(PFexample.solution[0]))
print("Linear perfect foresight consumption function:")
mMin = PFexample.solution[0].mNrmMin
plot_funcs(PFexample.solution[0].cFunc, mMin, mMin + 10.0)
print("Perfect foresight value function:")
plot_funcs(PFexample.solution[0].vFunc, mMin + 0.1, mMin + 10.1)
LiqConstrDict = copy(PerfForesightDict)
LiqConstrDict["BoroCnstArt"] = 0.0 # Set the artificial borrowing constraint to zero
LiqConstrExample = PerfForesightConsumerType(**LiqConstrDict)
LiqConstrExample.cycles = 0 # Make this type be infinite horizon
LiqConstrExample.solve()
print("Liquidity constrained perfect foresight consumption function:")
plot_funcs(LiqConstrExample.solution[0].cFunc, 0.0, 10.0)
# At this time, the value function for a perfect foresight consumer with an artificial borrowing constraint is not computed nor included as part of its $\texttt{solution}$.
SimulationParams = {
"AgentCount": 10000, # Number of agents of this type
"T_sim": 120, # Number of periods to simulate
"aNrmInitMean": -6.0, # Mean of log initial assets
"aNrmInitStd": 1.0, # Standard deviation of log initial assets
"pLvlInitMean": 0.0, # Mean of log initial permanent income
"pLvlInitStd": 0.0, # Standard deviation of log initial permanent income
"PermGroFacAgg": 1.0, # Aggregate permanent income growth factor
"T_age": None, # Age after which simulated agents are automatically killed
}
PFexample.assign_parameters(**SimulationParams)
PFexample.track_vars = ['mNrm']
PFexample.initialize_sim()
PFexample.simulate()
# Each simulation variable $\texttt{X}$ named in $\texttt{track_vars}$ will have the *history* of that variable for each agent stored in the attribute $\texttt{X_hist}$ as an array of shape $(\texttt{T_sim},\texttt{AgentCount})$. To see that the simulation worked as intended, we can plot the mean of $m_t$ in each simulated period:
plt.plot(np.mean(PFexample.history['mNrm'], axis=1))
plt.xlabel("Time")
plt.ylabel("Mean normalized market resources")
plt.show()
N = PFexample.AgentCount
F = np.linspace(0.0, 1.0, N)
plt.plot(np.sort(PFexample.t_age), F)
plt.xlabel("Current age of consumers")
plt.ylabel("Cumulative distribution")
plt.show()
PFexample.initialize_sim()
PFexample.simulate(80)
PFexample.state_prev['aNrm'] += -5.0 # Adjust all simulated consumers' assets downward by 5
PFexample.simulate(40)
plt.plot(np.mean(PFexample.history['mNrm'], axis=1))
plt.xlabel("Time")
plt.ylabel("Mean normalized market resources")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The module HARK.ConsumptionSaving.ConsIndShockModel concerns consumption-saving models with idiosyncratic shocks to (non-capital) income. All of the models assume CRRA utility with geometric discounting, no bequest motive, and income shocks are fully transitory or fully permanent.
Step2: Solving and examining the solution of the perfect foresight model
Step3: The $\texttt{solve}$ method fills in the instance's attribute solution as a time-varying list of solutions to each period of the consumer's problem. In this case, solution will be a list with exactly one instance of the class ConsumerSolution, representing the solution to the infinite horizon model we specified.
Step4: Each element of solution has a few attributes. To see all of them, we can use the \texttt{vars} built-in function
Step5: The two most important attributes of a single period solution of this model are the (normalized) consumption function $\texttt{cFunc}$ and the (normalized) value function $\texttt{vFunc}$. Let's plot those functions near the lower bound of the permissible state space (the attribute $\texttt{mNrmMin}$ tells us the lower bound of $m_t$ where the consumption function is defined).
Step6: An element of solution also includes the (normalized) marginal value function $\texttt{vPfunc}$, and the lower and upper bounds of the marginal propensity to consume (MPC) $\texttt{MPCmin}$ and $\texttt{MPCmax}$. Note that with a linear consumption function, the MPC is constant, so its lower and upper bound are identical.
Step7: Simulating the perfect foresight consumer model
Step8: To generate simulated data, we need to specify which variables we want to track the "history" of for this instance. To do so, we set the track_vars attribute of our PerfForesightConsumerType instance to be a list of strings with the simulation variables we want to track.
Step9: A perfect foresight consumer can borrow against the PDV of his future income-- his human wealth-- and thus as time goes on, our simulated agents approach the (very negative) steady state level of $m_t$ while being steadily replaced with consumers with roughly $m_t=1$.
Step10: The distribution is (discretely) exponential, with a point mass at 120 with consumers who have survived since the beginning of the simulation.
|
7,052
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy as sp
import scipy.signal as signal
import matplotlib
import matplotlib.pyplot as pl
%matplotlib inline
import seaborn as sn
sn.set(style="ticks")
# extra dependencies of this notebook, for data loading and fitting of kernels
import pandas as pd
from lmfit import minimize, Parameters, Parameter, report_fit
import sys
import os
workingDir = os.getcwd()[:-5]
sys.path.append(workingDir)
from fir import FIRDeconvolution
sample_rate = 1000.0
eye_dict = pd.read_csv('data/eye_dict.csv')
blink_dict = pd.read_csv('data/blink_dict.csv')
sac_dict = pd.read_csv('data/sac_dict.csv')
# variables to work with:
start_time = eye_dict.timepoints[0]
timepoints = eye_dict.timepoints - start_time
pupil = eye_dict.pupil
blink_starts = np.array(blink_dict.start_timestamp - start_time, dtype=int)
blink_ends = np.array(blink_dict.end_timestamp - start_time, dtype=int)
sac_starts = np.array(sac_dict.start_timestamp - start_time, dtype=int)
sac_ends = np.array(sac_dict.end_timestamp - start_time, dtype=int)
x = np.arange(timepoints.shape[0]) / sample_rate
f = pl.figure(figsize = (10,3.5))
pl.plot(x, pupil)
pl.xlabel('Time (s)')
pl.ylabel('Pupil size')
sn.despine(offset=10)
margin = 100 # ms
margin = int((margin*sample_rate)/1000)
pupil_interpolated = np.array(pupil.copy())
for b in np.arange(blink_starts.shape[0]):
blink_start = np.where(timepoints==blink_starts[b])[0][0]-margin+1
blink_end = np.where(timepoints==blink_ends[b])[0][0]+margin+1
interpolated_signal = np.linspace(pupil_interpolated[blink_start],
pupil_interpolated[blink_end],
blink_end-blink_start,
endpoint=False)
pupil_interpolated[blink_start:blink_end] = interpolated_signal
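The interpolation above is plain linear bridging between the pre- and post-blink samples (what `numpy.linspace(start, end, n, endpoint=False)` computes); the same idea on a toy list in pure Python:

```python
# Linearly bridge a gap between two known samples, mirroring the
# numpy.linspace call used for blink interpolation above.
def bridge_gap(signal, start, end):
    out = list(signal)
    n = end - start
    step = (out[end] - out[start]) / n
    for k in range(1, n):
        out[start + k] = out[start] + k * step
    return out

print(bridge_gap([2.0, 0.0, 0.0, 0.0, 6.0], 0, 4))  # [2.0, 3.0, 4.0, 5.0, 6.0]
```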
f = pl.figure(figsize = (10,3.5))
pl.plot(x, pupil_interpolated)
pl.xlabel('Time (s)')
pl.ylabel('Pupil size')
sn.despine(offset=10)
f = pl.figure(figsize = (10,3.5))
pl.axvspan((-margin + blink_starts[7]) / sample_rate, (margin + blink_ends[7]) / sample_rate, alpha=0.15, color='k')
pl.axvline( (-margin + blink_starts[7]) / sample_rate, color = 'k', alpha = 0.5, lw = 1.5)
pl.axvline( (margin + blink_ends[7]) / sample_rate, color = 'k', alpha = 0.5, lw = 1.5)
pl.plot(x, pupil, label='raw pupil')
pl.plot(x, pupil_interpolated, label='interpolated pupil')
pl.xlim((-margin + blink_starts[7] - 1000) / sample_rate,
(margin + blink_ends[7] + 1000) / sample_rate)
pl.xlabel('Time (s)')
pl.ylabel('Pupil size')
pl.legend(loc=3)
sn.despine(offset=10)
def _butter_lowpass(data, highcut, fs, order=5):
nyq = 0.5 * fs
high = highcut / nyq
b, a = signal.butter(order, high, btype='lowpass')
y = signal.filtfilt(b, a, data)
return y
def _butter_highpass(data, lowcut, fs, order=5):
nyq = 0.5 * fs
low = lowcut / nyq
b, a = signal.butter(order, low, btype='highpass')
y = signal.filtfilt(b, a, data)
return y
def _butter_bandpass(data, lowcut, highcut, fs, order=5):
nyq = 0.5 * fs
low = lowcut / nyq
high = highcut / nyq
data_hp = _butter_highpass(data, lowcut, fs, order)
b, a = signal.butter(order, high, btype='lowpass')
y = signal.filtfilt(b, a, data_hp)
return y
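The Butterworth filters above need scipy; purely as a conceptual stand-in, a moving average is the simplest low-pass filter. This is not what the code above computes, only an illustration of smoothing:

```python
# A moving average as the simplest low-pass filter. This is NOT the
# zero-phase Butterworth filter used above -- just the smoothing idea.
def moving_average(data, window):
    out = []
    for i in range(len(data) - window + 1):
        out.append(sum(data[i:i + window]) / window)
    return out

print(moving_average([0, 10, 0, 10, 0], 2))  # [5.0, 5.0, 5.0, 5.0]
```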
# High pass:
pupil_interpolated_hp = _butter_highpass(data=pupil_interpolated, lowcut=0.01, fs=1000, order=3)
# Low pass:
pupil_interpolated_lp = _butter_lowpass(data=pupil_interpolated, highcut=10, fs=1000, order=3)
# Band pass:
pupil_interpolated_bp = _butter_bandpass(data=pupil_interpolated, lowcut=0.01, highcut=10, fs=1000, order=3)
f = pl.figure(figsize = (16,3.5))
pl.plot(pupil_interpolated_lp, label='low pass')
pl.plot(pupil_interpolated_hp, label='high pass')
pl.plot(pupil_interpolated_bp, label='band pass')
pl.xlabel('Time (s)')
pl.ylabel('Pupil size')
pl.legend()
sn.despine(offset=10)
downsample_rate = 100
new_sample_rate = sample_rate / downsample_rate
interval = 6
# events:
events = [(blink_ends / sample_rate),
(sac_ends / sample_rate)]
# compute blink and sac kernels with deconvolution (on downsampled timeseries):
a = FIRDeconvolution(signal=sp.signal.decimate(pupil_interpolated_bp, downsample_rate, 1),
events=events, event_names=['blinks', 'sacs'], sample_frequency=new_sample_rate,
deconvolution_frequency=new_sample_rate, deconvolution_interval=[0,interval],)
a.create_design_matrix()
a.regress()
a.betas_for_events()
blink_response = np.array(a.betas_per_event_type[0]).ravel()
sac_response = np.array(a.betas_per_event_type[1]).ravel()
# baseline the kernels:
blink_response = blink_response - blink_response[0].mean()
sac_response = sac_response - sac_response[0].mean()
# plot:
x = np.linspace(0, interval, len(blink_response))
f = pl.figure(figsize = (10,3.5))
pl.plot(x, blink_response, label='blink response')
pl.plot(x, sac_response, label='sac response')
pl.xlabel('Time from event (s)')
pl.ylabel('Pupil size')
pl.axhline(0,color = 'k', lw = 0.5, alpha = 0.5)
pl.legend(loc=2)
sn.despine(offset=10)
def single_pupil_IRF(params, x):
s1 = params['s1']
n1 = params['n1']
tmax1 = params['tmax1']
return s1 * ((x**n1) * (np.e**((-n1*x)/tmax1)))
def single_pupil_IRF_ls(params, x, data):
s1 = params['s1'].value
n1 = params['n1'].value
tmax1 = params['tmax1'].value
model = s1 * ((x**n1) * (np.e**((-n1*x)/tmax1)))
return model - data
def double_pupil_IRF(params, x):
s1 = params['s1']
s2 = params['s2']
n1 = params['n1']
n2 = params['n2']
tmax1 = params['tmax1']
tmax2 = params['tmax2']
return s1 * ((x**n1) * (np.e**((-n1*x)/tmax1))) + s2 * ((x**n2) * (np.e**((-n2*x)/tmax2)))
def double_pupil_IRF_ls(params, x, data):
s1 = params['s1'].value
s2 = params['s2'].value
n1 = params['n1'].value
n2 = params['n2'].value
tmax1 = params['tmax1'].value
tmax2 = params['tmax2'].value
model = s1 * ((x**n1) * (np.e**((-n1*x)/tmax1))) + s2 * ((x**n2) * (np.e**((-n2*x)/tmax2)))
return model - data
# create a set of Parameters
params = Parameters()
params.add('s1', value=-1, min=-np.inf, max=-1e-25)
params.add('s2', value=1, min=1e-25, max=np.inf)
params.add('n1', value=10, min=9, max=11)
params.add('n2', value=10, min=8, max=12)
params.add('tmax1', value=0.9, min=0.5, max=1.5)
params.add('tmax2', value=2.5, min=1.5, max=4)
# do fit, here with powell method:
blink_result = minimize(double_pupil_IRF_ls, params, method='powell', args=(x, blink_response))
blink_kernel = double_pupil_IRF(blink_result.params, x)
sac_result = minimize(single_pupil_IRF_ls, params, method='powell', args=(x, sac_response))
sac_kernel = single_pupil_IRF(sac_result.params, x)
# plot:
f = pl.figure(figsize = (10,3.5))
pl.plot(x, blink_response, label='blink response')
pl.plot(x, blink_kernel, label='blink fit')
pl.plot(x, sac_response, label='sac response')
pl.plot(x, sac_kernel, label='sac fit')
pl.xlabel('Time from event (s)')
pl.ylabel('Pupil size')
pl.axhline(0,color = 'k', lw = 0.5, alpha = 0.5)
pl.legend(loc=4)
sn.despine(offset=10)
# upsample:
x = np.linspace(0, interval, interval*sample_rate)
blink_kernel = double_pupil_IRF(blink_result.params, x)
sac_kernel = single_pupil_IRF(sac_result.params, x)
# regressors:
blink_reg = np.zeros(len(pupil))
blink_reg[blink_ends] = 1
blink_reg_conv = sp.signal.fftconvolve(blink_reg, blink_kernel, 'full')[:-(len(blink_kernel)-1)]
sac_reg = np.zeros(len(pupil))
sac_reg[sac_ends] = 1
sac_reg_conv = sp.signal.fftconvolve(sac_reg, sac_kernel, 'full')[:-(len(sac_kernel)-1)]
regs = [blink_reg_conv, sac_reg_conv]
# GLM:
design_matrix = np.matrix(np.vstack([reg for reg in regs])).T
betas = np.array(((design_matrix.T * design_matrix).I * design_matrix.T) * np.matrix(pupil_interpolated_bp).T).ravel()
explained = np.sum(np.vstack([betas[i]*regs[i] for i in range(len(betas))]), axis=0)
# clean pupil:
pupil_clean_bp = pupil_interpolated_bp - explained
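The `betas` line above is the normal-equations solution (XᵀX)⁻¹Xᵀy. For a single regressor without intercept this collapses to a one-liner; a pure-Python sketch of that 1-D special case, not a replacement for the matrix code above:

```python
# Normal equations for one regressor, no intercept:
# beta = sum(x*y) / sum(x*x), the 1-D case of (X'X)^-1 X'y.
def ols_beta(x, y):
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

# y is exactly 2*x, so the recovered coefficient is 2.0.
print(ols_beta([1, 2, 3], [2, 4, 6]))  # 2.0
```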
# plot:
f = pl.figure(figsize = (10,3.5))
x = np.arange(timepoints.shape[0]) / sample_rate
pl.plot(x, pupil_interpolated_bp, 'r', label='band-passed')
pl.plot(x, pupil_clean_bp, 'k', label='blinks/sacs regressed out')
pl.xlabel('Time (s)')
pl.ylabel('Pupil size')
pl.axhline(0,color = 'k', lw = 0.5, alpha = 0.5)
pl.legend()
sn.despine(offset=10)
pupil_clean_lp = pupil_clean_bp + (pupil_interpolated_lp-pupil_interpolated_bp)
f = pl.figure(figsize = (10,3.5))
x = np.arange(timepoints.shape[0]) / sample_rate
pl.plot(x, pupil, 'k', label='rawest version')
pl.plot(x, pupil_interpolated, label='band-passed')
pl.plot(x, pupil_clean_lp, label='blinks/sacs regressed out')
pl.xlabel('Time (s)')
pl.ylabel('Pupil size')
pl.axhline(0,color = 'k', lw = 0.5, alpha = 0.5)
pl.legend()
sn.despine(offset=10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data
Step2: Let's plot the raw pupil timeseries
Step3: The periods where the timeseries drop to 0 correspond to blinks. Let's linearly interpolate these blinks.
Step4: To see what happened, let's zoom in on one interpolated blink
Step5: Let's filter the blink-interpolated pupil timeseries now. We'll construct a low-pass (<10Hz) and a band-pass (0.01-10Hz) signal. And again, let's plot the results.
Step6: We can now use the band-pass filtered signal to estimate pupil responses to blinks and saccades. You can think of these as simple event-related averages. However, to account for temporally adjacent events, and hence overlapping responses (due to the slow pupil IRF), here we will rely on deconvolution.
Step7: Let's fit kernels to these
Step8: Now, with a GLM let's regress these responses to blinks and saccades from our pupil timeseries.
Step9: Finally, let's add back the slow drift, which is meaningful part of the signal!
|
7,053
|
<ASSISTANT_TASK:>
Python Code:
import sys
#sys.path.append('/Users/esumitra/workspaces/mc/mcpipy')
# Start typing below
# once you are done typing, press (Ctrl+Enter) to run the code
import mcpi.minecraft as minecraft
import time
mc = minecraft.Minecraft.create()
mc.postToChat("Hello kids")
time.sleep(5)
# Program for Adventure 1 - Task 1
# Program for Adventure 1 - Task 2
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Send a chat message
Step2: If everything went well, you saw a chat message in Minecraft. You have now written your first program for Minecraft. Jump up and down three times yelling
Step3: Task 2
|
7,054
|
<ASSISTANT_TASK:>
Python Code:
#I don't think this is the code golf winner. Try to beat me.
for i in range(100):
    print('FizzBuzz'*(not (i+1)%3)*(not (i+1)%5) or 'Fizz'*(not (i+1)%3) or 'Buzz'*(not (i+1)%5) or str(i+1))
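A shorter variant of the same exercise, using string concatenation instead of chained or-multiplications:

```python
# Build both words; the empty string falls through to the number.
lines = ['Fizz' * (i % 3 < 1) + 'Buzz' * (i % 5 < 1) or str(i)
         for i in range(1, 101)]
print('\n'.join(lines[:15]))
```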
def sum_digits(number):
'''
Function that takes a number as an input and sums its digits
Parameters
number: int
an integer number with several digits
Returns
total: int
the sum of the digits in number
'''
string_num = str(number)
total = 0
for letter in string_num:
total += int(letter)
return total
def list_multiple(number, limit = None):
'''
Function that lists the sum_digits of every multiple of the input
until the limit or the square of number is reached
Parameters
number: int
an integer
Optional
limit: int
limit on the multiple of number for which to print its sum_digits()
Returns
None
'''
if limit == None:
limit = number**2
mul = 1
result = number+0
while (result+number) < limit:
result = mul*number
mul += 1
print(sum_digits(result))
return
print(sum_digits(128))
list_multiple(4, 21)
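The digit sum can also be written as a one-liner over the decimal string; this is equivalent to the loop in sum_digits above:

```python
# Equivalent one-liner: map each character to int and sum.
def sum_digits_short(number):
    return sum(map(int, str(number)))

print(sum_digits_short(128))  # 11
```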
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2) Write a function called sum_digits that returns the sum of the digits of an integer argument; that is, sum_digits(123) should return 6. Use this function in an other function that prints out the sum of the digits of every integer multiple of the first argument, up to either a second optional argument (if included) or the first argument's square. That is
|
7,055
|
<ASSISTANT_TASK:>
Python Code:
data_df = pd.read_excel("RESSALES-mf.xlsx", sheet_name='data')
data_df.head()
categories_df = pd.read_excel("RESSALES-mf.xlsx", sheetname='categories')
data_types_df = pd.read_excel("RESSALES-mf.xlsx", sheetname='data_types')
error_types_df = pd.read_excel("RESSALES-mf.xlsx", sheetname='error_types')
geo_levels_df = pd.read_excel("RESSALES-mf.xlsx", sheetname='geo_levels')
periods_df = pd.read_excel("RESSALES-mf.xlsx", sheetname='periods')
categories_df.head(2)
# it auto-merges cat_idx in our original dataframe with cat_idx in categories_df
# it auto-merges dt_idx in our original dataframe with dt_idx in data_types_df
# it auto-merges geo_idx in our original dataframe with geo_idx in geo_levels_df
# it auto-merges per_idx in our original dataframe with per_idx in periods_df
df = data_df.merge(categories_df).merge(data_types_df).merge(geo_levels_df).merge(periods_df)
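The chained merges above are inner joins on the shared `*_idx` columns. The core idea, in dictionary form — a sketch of the concept, not how pandas implements it, and the toy rows are hypothetical:

```python
# Inner join of two record lists on a shared key -- the same idea
# pandas merge applies to cat_idx, dt_idx, geo_idx, per_idx above.
def inner_join(left, right, key):
    lookup = {r[key]: r for r in right}
    out = []
    for row in left:
        if row[key] in lookup:
            merged = dict(row)
            merged.update(lookup[row[key]])
            out.append(merged)
    return out

data = [{'cat_idx': 0, 'val': 42}, {'cat_idx': 9, 'val': 7}]
cats = [{'cat_idx': 0, 'cat_code': 'SOLD'}]
print(inner_join(data, cats, 'cat_idx'))
# [{'cat_idx': 0, 'val': 42, 'cat_code': 'SOLD'}]
```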
# We only want to look at the total number of homes sold across entire the united states
df = df[(df['cat_code'] == 'SOLD') & (df['geo_code'] == 'US') & (df['dt_code'] == 'TOTAL')]
# We don't merge error_types_df because all of the errors are the same
df['et_idx'].value_counts()
df.head(2)
# Now let's remove the join columns to keep things clean
df = df.drop(['per_idx', 'cat_idx', 'dt_idx', 'et_idx', 'geo_idx'], axis=1)
df.head()
# At least we can see 'per_name' (period name) is already a datetime!
df.info()
df.head(3)
# First we move it over into the index column
df.index = df['per_name']
df.head(2)
# Then we delete the per_name column because we don't need it any more...
del df['per_name']
df.head(2)
# Everything in March, 1963
df['1963-3']
# Everything in 2010
df['2010']
# Make our list of fruits
ranked_fruits = ('banana', 'orange', 'apple', 'blueberries', 'strawberries')
# Start from the beginning, get the first two
ranked_fruits[:2]
# Start from two, get up until the fourth element
ranked_fruits[2:4]
# Starting from the third element, get all the rest
ranked_fruits[3:]
# Everything after 2001
df["2001":]
# Everything between June 1990 and March 1995
df["1990-06":"1995-03"]
df.plot(y='val')
# http://stackoverflow.com/a/17001474 gives us a list of what we can pass to 'resample'
df.resample('A').median()
# 'A' resamples every year, so '5A' resamples every 5 years
df.resample('5A').median()
# We can graph these!
df.plot(y='val', label="Monthly")
df.resample('A').median().plot(y='val', label="Annual")
df.resample('10A').median().plot(y='val', label="Decade")
# We can graph these ALL ON THE SAME PLOT!
# we store the 'ax' from the first .plot and pass it to the others
ax = df.plot(y='val', label="Monthly")
df.resample('A').median().plot(y='val', ax=ax, label="Annual")
df.resample('10A').median().plot(y='val', ax=ax, label="Decade")
# Which year had the worst month?
df.resample('A').median()
# Group by the month, check the median
df.groupby(by=df.index.month).median()
# Group by the month, check the median, plot the results
df.groupby(by=df.index.month).median().plot(y='val')
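The month grouping above can be sketched without pandas: bucket each value by its month, then take the median of every bucket. The toy records here are hypothetical:

```python
import statistics
from collections import defaultdict

# Bucket (month, value) pairs and take each bucket's median --
# the same grouping df.groupby(df.index.month).median() performs.
def median_by_month(records):
    buckets = defaultdict(list)
    for month, value in records:
        buckets[month].append(value)
    return {m: statistics.median(v) for m, v in sorted(buckets.items())}

toy = [(1, 40), (1, 60), (2, 55), (2, 65), (2, 75)]
print(median_by_month(toy))  # {1: 50.0, 2: 65}
```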
# Group by the month, check the median, plot the results
ax = df.groupby(by=df.index.month).median().plot(y='val', legend=False)
ax.set_xticks([1,2,3,4,5,6,7,8,9,10,11, 12])
ax.set_xticklabels(['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
ax.set_ylabel("Houses sold (in thousands)")
ax.set_title("House sales by month, 1963-2016")
df.resample('A')['val'].max().plot()
# The fewest?
df.resample('A')['val'].min().plot()
# We now know we can look at the range
ax = df.resample('A')['val'].median().plot()
df.resample('A')['val'].max().plot(ax=ax)
df.resample('A')['val'].min().plot(ax=ax)
# We now know we can look at the range IN AN EVEN COOLER WAY
ax = df.resample('A')['val'].median().plot()
x_values = df.resample('A')['val'].median().index
min_values = df.resample('A')['val'].min()
max_values = df.resample('A')['val'].max()
ax.fill_between(x_values, min_values, max_values, alpha=0.5)
ax.set_ylim([0,130])
ax.set_ylabel("Houses sold (in thousands)")
ax.set_xlabel("Year")
ax.set_title("The Housing Bubble")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Changing the index to the datetime
Step2: It's the column on the far left - 0, 1, 2, 3, 4... boring and useless! If we replace the index with the datetime, though, we can start to have some fun
Step3: Selecting specific(-ish) dates via the index
Step4: List slices with datetimes
Step5: Instead of using boring ol' numbers, we can use dates instead.
Step6: Info on our time series
Step7: Hmmm, looks like something might have happened at some point. Maybe we want to see some numbers instead of a graph? To do aggregate statistics on time series in pandas we use a method called .resample(), and we're going to tell it to group the data by year.
Step8: That still looks like too much data! What about every decade?
Step9: Cyclical data
Step10: More details
|
7,056
|
<ASSISTANT_TASK:>
Python Code:
# iterating over a list by object
x = ['bob', 'sue', 'mary']
for name in x:
print(name.upper() + ' WAS HERE')
# alternatively, you could iterate over position
for i in range(len(x)):
print(x[i].upper() + ' WAS HERE')
dir(x) # ignore the __ methods for now
y = (x*x for x in [1, 2, 3])
type(y)
dir(y)
y.send??
y[5]
next(y)
y.send(1)
next(y) # run this cell twice - what happens?
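The behaviour behind "run this cell twice" can be pinned down explicitly: once consumed, a generator yields nothing more, and next() raises StopIteration:

```python
# A generator can only be walked once; afterwards it is empty.
g = (x * x for x in [1, 2, 3])
print(next(g))   # 1
print(list(g))   # [4, 9] -- the remainder
print(list(g))   # []     -- exhausted
try:
    next(g)
except StopIteration:
    print('StopIteration: the generator is spent')
```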
z = range(10, 5, -1)
dir(range)
# let's filter that list a little
[x for x in dir(range) if not x.startswith('_')]
z.start
len(z) # __ function - overloaded operator
for i in z:
print(i)
GPA = zip(['bob', 'sue', 'mary'], [2.3, 4.0, 3.7])
type(GPA)
dir(GPA)
next(GPA)
next(GPA)[1]
dict?
GPA_2 = dict(bob=2.0, sue=3.4, mary=4.0)
names = ['bob', 'mary', 'sue', 'lisa']
gpas = [3.2, 4.0, 3.1, 2.8]
GPA_3 = dict(zip(names, gpas))
GPA_3
# explicitly named arguments are also positional
# Anything after * in a function is a positional argument - tuple
# Anything after ** is a named argument
# the latter are unpacked as dicts
def arg_explainer(x, y, *args, **kwargs):
print('-'*30)
print('x is %d, even though you didn\'t specify it, because of its position.' % x)
print('same with y, which is %d.' %y)
if args:
print('-'*30)
print('type(*args) = %s' % type(args))
print('these are the *args arguments: ')
for arg in args:
print(arg)
else:
print('-'*30)
print('no *args today!')
if kwargs:
print('-'*30)
print('type(**kwargs) == %s' % type(kwargs))
for key in kwargs:
print(key, kwargs[key])
else:
print('-'*30)
print('no **kwargs today!')
print('-'*30)
arg_explainer(2, 4, 3, 7, 8, 9, 10, plot=True, sharey=True, rotate=False)
my_kwargs = {'plot': False, 'sharey': True}
arg_explainer(1, 2, **my_kwargs)
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
?plt.plot
x = np.linspace(-5, 5, 100)
y1 = np.sin(x)
y2 = np.cos(x)
plt.plot(x, y1) # all of these arguments are *args
plt.plot(x, y2, color='red', label='just on the cosine, for no reason at all') # starting w/ color, **kwargs
plt.legend(loc='center');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: generators return their contents 'lazily'. This leaves a minimal memory footprint, at the cost of making the generator nonreusable.
Step2: 'range' is something like a generator, but with special properties because of its intended use case (in 'for' loops or similar structures).
Step3: From the docs (https
Step4: zip produces an iterator over pairs
Step5: More on Dicts
Step6: from assignment
Step7: from iterator
Step8: In function definitions
Step9: In function calls
Step10: This allows, for instance, matplotlibs plot function to accept a huge range of different plotting options, or few to none at all.
|
7,057
|
<ASSISTANT_TASK:>
Python Code:
! pip uninstall -y kfp
! pip install --no-cache-dir kfp torch captum
import kfp
import json
import os
from kfp.onprem import use_k8s_secret
from kfp import components
from kfp.components import load_component_from_file, load_component_from_url, InputPath
from kfp import dsl
from kfp import compiler
kfp.__version__
INGRESS_GATEWAY='http://istio-ingressgateway.istio-system.svc.cluster.local'
AUTH="<enter your token here>"
NAMESPACE="kubeflow-user-example-com"
COOKIE="authservice_session="+AUTH
EXPERIMENT="Default"
dist_volume = 'dist-vol'
volume_mount_path ="/model"
dataset_path = volume_mount_path+"/dataset"
checkpoint_dir = volume_mount_path+"/checkpoint"
tensorboard_root = volume_mount_path+"/tensorboard"
MINIO_ENDPOINT="http://minio-service.kubeflow:9000"
LOG_BUCKET="mlpipeline"
TENSORBOARD_IMAGE="public.ecr.aws/pytorch-samples/tboard:latest"
client = kfp.Client(host=INGRESS_GATEWAY+"/pipeline", cookies=COOKIE)
client.create_experiment(EXPERIMENT)
experiments = client.list_experiments(namespace=NAMESPACE)
my_experiment = experiments.experiments[0]
my_experiment
DEPLOY_NAME="bert-dist"
MODEL_NAME="bert"
! python utils/generate_templates.py bert/template_mapping.json
prepare_tensorboard_op = load_component_from_file(
"yaml/tensorboard_component.yaml"
)
prep_op = components.load_component_from_file(
"yaml/preprocess_component.yaml"
)
# Use GPU image in train component
train_op = components.load_component_from_file(
"yaml/train_component.yaml"
)
deploy_op = load_component_from_file(
"yaml/deploy_component.yaml"
)
minio_op = components.load_component_from_file(
"yaml/minio_component.yaml"
)
pytorch_job_op = load_component_from_file("../../../components/kubeflow/pytorch-launcher/component.yaml")
kubernetes_create_pvc_op = load_component_from_file(
"../../../components/kubernetes/Create_PersistentVolumeClaim/component.yaml"
)
cp_op = load_component_from_file(
"yaml/copy_component.yaml"
)
from kubernetes.client.models import V1Volume, V1PersistentVolumeClaimVolumeSource
def create_dist_pipeline():
kubernetes_create_pvc_op(name=dist_volume, storage_size= "20Gi")
create_volume_run = client.create_run_from_pipeline_func(create_dist_pipeline, arguments={})
create_volume_run.wait_for_run_completion()
@dsl.pipeline(name="Training pipeline", description="Sample training job test")
def pytorch_bert(
minio_endpoint=MINIO_ENDPOINT,
log_bucket=LOG_BUCKET,
log_dir=f"tensorboard/logs/{dsl.RUN_ID_PLACEHOLDER}",
confusion_matrix_log_dir=f"confusion_matrix/{dsl.RUN_ID_PLACEHOLDER}/",
mar_path=f"mar/{dsl.RUN_ID_PLACEHOLDER}/model-store/",
config_prop_path=f"mar/{dsl.RUN_ID_PLACEHOLDER}/config/",
model_uri=f"pvc://{dist_volume}/mar/{dsl.RUN_ID_PLACEHOLDER}",
tf_image=TENSORBOARD_IMAGE,
deploy=DEPLOY_NAME,
namespace=NAMESPACE,
num_samples=1000,
max_epochs=1,
gpus=2,
num_nodes=2
):
prepare_tb_task = prepare_tensorboard_op(
log_dir_uri=f"s3://{log_bucket}/{log_dir}",
image=tf_image,
pod_template_spec=json.dumps({
"spec": {
"containers": [{
"env": [
{
"name": "AWS_ACCESS_KEY_ID",
"valueFrom": {
"secretKeyRef": {
"name": "mlpipeline-minio-artifact",
"key": "accesskey",
}
},
},
{
"name": "AWS_SECRET_ACCESS_KEY",
"valueFrom": {
"secretKeyRef": {
"name": "mlpipeline-minio-artifact",
"key": "secretkey",
}
},
},
{
"name": "AWS_REGION",
"value": "minio"
},
{
"name": "S3_ENDPOINT",
"value": f"{minio_endpoint}",
},
{
"name": "S3_USE_HTTPS",
"value": "0"
},
{
"name": "S3_VERIFY_SSL",
"value": "0"
},
]
}]
}
}),
).set_display_name("Visualization")
prep_task = prep_op().after(prepare_tb_task).set_display_name("Preprocess & Transform")
copy_task = cp_op("true", prep_task.outputs['output_data'], dataset_path,"").add_pvolumes({volume_mount_path: dsl.PipelineVolume(pvc=dist_volume)}).after(prep_task).set_display_name("Copy Dataset")
confusion_matrix_url = f"minio://{log_bucket}/{confusion_matrix_log_dir}"
train_task = pytorch_job_op(
name="pytorch-bert",
namespace=namespace,
master_spec=
{
"replicas": 1,
"imagePullPolicy": "Always",
"restartPolicy": "OnFailure",
"template": {
"metadata": {
"annotations": {
"sidecar.istio.io/inject": "false"
}
},
"spec": {
"containers": [
{
"name": "pytorch",
"image": "public.ecr.aws/pytorch-samples/kfp_samples:latest-gpu",
"command": ["python3", "bert/agnews_classification_pytorch.py"],
"args": [
"--dataset_path", dataset_path,
"--checkpoint_dir", checkpoint_dir,
"--script_args", f"model_name=bert.pth,num_samples={num_samples}",
"--tensorboard_root", tensorboard_root,
"--ptl_args", f"max_epochs={max_epochs},profiler=pytorch,gpus={gpus},accelerator=ddp,num_nodes={num_nodes},confusion_matrix_url={confusion_matrix_url}"
],
"ports": [
{
"containerPort": 24456,
"name": "pytorchjob-port"
}
],
"resources": {
"limits": {
"nvidia.com/gpu": 2
}
},
"volumeMounts": [
{
"mountPath": volume_mount_path,
"name": "model-volume"
}
]
}
],
"volumes": [
{
"name": "model-volume",
"persistentVolumeClaim": {
"claimName": dist_volume
}
}
]
}
}
},
worker_spec=
{
"replicas": 1,
"imagePullPolicy": "Always",
"restartPolicy": "OnFailure",
"template": {
"metadata": {
"annotations": {
"sidecar.istio.io/inject": "false"
}
},
"spec": {
"containers": [
{
"name": "pytorch",
"image": "public.ecr.aws/pytorch-samples/kfp_samples:latest-gpu",
"command": ["python3", "bert/agnews_classification_pytorch.py"],
"args": [
"--dataset_path", dataset_path,
"--checkpoint_dir", checkpoint_dir,
"--script_args", f"model_name=bert.pth,num_samples={num_samples}",
"--tensorboard_root", tensorboard_root,
"--ptl_args", f"max_epochs={max_epochs},profiler=pytorch,gpus={gpus},accelerator=ddp,num_nodes={num_nodes},confusion_matrix_url={confusion_matrix_url}"
],
"ports": [
{
"containerPort": 24456,
"name": "pytorchjob-port"
}
],
"resources": {
"limits": {
"nvidia.com/gpu": 2
}
},
"volumeMounts": [
{
"mountPath": volume_mount_path,
"name": "model-volume"
}
]
}
],
"volumes": [
{
"name": "model-volume",
"persistentVolumeClaim": {
"claimName": dist_volume
}
}
]
}
}
},
delete_after_done=False
).after(copy_task)
mar_folder_restructure_task = dsl.ContainerOp(
name='mar restructure',
image='library/bash:4.4.23',
command=['sh', '-c'],
arguments=[f'mkdir -p {volume_mount_path}/{mar_path}; mkdir -p {volume_mount_path}/{config_prop_path}; cp {checkpoint_dir}/*.mar {volume_mount_path}/{mar_path}; cp {checkpoint_dir}/config.properties {volume_mount_path}/{config_prop_path}']).add_pvolumes({volume_mount_path: dsl.PipelineVolume(pvc=dist_volume)}).after(train_task).set_display_name("Restructure MAR and config.properties path")
mar_folder_restructure_task.execution_options.caching_strategy.max_cache_staleness = "P0D"
copy_tensorboard = cp_op("false", "", "", tensorboard_root).add_pvolumes({volume_mount_path: dsl.PipelineVolume(pvc=dist_volume)}).after(mar_folder_restructure_task).set_display_name("Copy Tensorboard Logs")
copy_tensorboard.execution_options.caching_strategy.max_cache_staleness = "P0D"
minio_tb_upload = (
minio_op(
bucket_name=log_bucket,
folder_name=log_dir,
input_path=copy_tensorboard.outputs["destination_path"],
filename="",
).after(copy_tensorboard)
.set_display_name("Tensorboard Events Pusher")
)
# Deploy inferenceservice in gpu
gpu_count = "1"
isvc_gpu_yaml = '''
apiVersion: "serving.kubeflow.org/v1beta1"
kind: "InferenceService"
metadata:
  name: {}
  namespace: {}
spec:
  predictor:
    serviceAccountName: sa
    pytorch:
      storageUri: {}
      resources:
        requests:
          cpu: 4
          memory: 8Gi
        limits:
          cpu: 4
          memory: 8Gi
          nvidia.com/gpu: {}
'''.format(
    deploy, namespace, model_uri, gpu_count
)
deploy_task = (
deploy_op(action="apply", inferenceservice_yaml=isvc_gpu_yaml)
.after(minio_tb_upload)
.set_display_name("Deployer")
)
deploy_task.execution_options.caching_strategy.max_cache_staleness = "P0D"
dsl.get_pipeline_conf().add_op_transformer(
use_k8s_secret(
secret_name="mlpipeline-minio-artifact",
k8s_secret_key_to_env={
"secretkey": "MINIO_SECRET_KEY",
"accesskey": "MINIO_ACCESS_KEY",
},
)
)
# Compile pipeline
compiler.Compiler().compile(pytorch_bert, 'pytorch.tar.gz', type_check=True)
# Execute pipeline
run = client.run_pipeline(my_experiment.id, 'pytorch-bert', 'pytorch.tar.gz')
!kubectl get isvc $DEPLOY_NAME -n $NAMESPACE
INFERENCE_SERVICE_LIST = ! kubectl get isvc {DEPLOY_NAME} -n {NAMESPACE} -o json | python3 -c "import sys, json; print(json.load(sys.stdin)['status']['url'])"| tr -d '"' | cut -d "/" -f 3
INFERENCE_SERVICE_NAME = INFERENCE_SERVICE_LIST[0]
INFERENCE_SERVICE_NAME
!curl -v -H "Host: $INFERENCE_SERVICE_NAME" -H "Cookie: $COOKIE" "$INGRESS_GATEWAY/v1/models/$MODEL_NAME:predict" -d @./bert/sample.txt > bert_prediction_output.json
! cat bert_prediction_output.json
!curl -v -H "Host: $INFERENCE_SERVICE_NAME" -H "Cookie: $COOKIE" "$INGRESS_GATEWAY/v1/models/$MODEL_NAME:explain" -d @./bert/sample.txt > bert_explaination_output.json
! cat bert_explaination_output.json
explanations_json = json.loads(open("./bert_explaination_output.json", "r").read())
explanations_json
prediction_json = json.loads(open("./bert_prediction_output.json", "r").read())
import torch
attributions = explanations_json["explanations"][0]['importances']
tokens = explanations_json["explanations"][0]['words']
delta = explanations_json["explanations"][0]['delta']
attributions = torch.tensor(attributions)
pred_prob = 0.75
pred_class = prediction_json["predictions"][0]
true_class = "Business"
attr_class ="world"
from captum.attr import visualization
vis_data_records =[]
vis_data_records.append(visualization.VisualizationDataRecord(
attributions,
pred_prob,
pred_class,
true_class,
attr_class,
attributions.sum(),
tokens,
delta))
vis = visualization.visualize_text(vis_data_records)
! kubectl delete --all isvc -n $NAMESPACE
! kubectl delete pod --field-selector=status.phase==Succeeded -n $NAMESPACE
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Enter your gateway and the cookie
Step2: Set Log bucket and Tensorboard Image
Step4: Define pipeline
Step5: Wait for inference service below to go to READY True state.
Step6: Get Inferenceservice name
Step7: Prediction Request
Step8: Explanation Request
Step9: Visualization of Predictions
Step10: the visualization appears as below
|
7,058
|
<ASSISTANT_TASK:>
Python Code:
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
# Install the exact matplotlib package version
!pip install -q -U matplotlib==3.2.1
import matplotlib as mat
mat.__version__
import sqlite3
import random
import datetime
import matplotlib.pyplot as plt
%matplotlib notebook
# Create a connection
conn = sqlite3.connect('dsa.db')
# Create a cursor
c = conn.cursor()
# Function to create a table
def create_table():
c.execute('CREATE TABLE IF NOT EXISTS produtos(id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, date TEXT, '\
'prod_name TEXT, valor REAL)')
# Function to insert a row
def data_insert():
    # the original "VALUES(now, ...)" is invalid SQL; use SQLite's datetime('now')
    # and name the columns so AUTOINCREMENT fills the id.
    # (the connection stays open because later cells reuse it)
    c.execute("INSERT INTO produtos (date, prod_name, valor) VALUES (datetime('now'), 'Teclado', 130)")
    conn.commit()
# Using variables to insert data
def data_insert_var():
    new_date = datetime.datetime.now()
    new_prod_name = 'monitor'
    new_valor = random.randrange(50, 100)
    # three columns, so three placeholders (the original had four)
    c.execute("INSERT INTO produtos (date, prod_name, valor) VALUES (?, ?, ?)",
              (new_date, new_prod_name, new_valor))
    conn.commit()
# Read all data
def leitura_todos_dados():
    c.execute("SELECT * FROM produtos")
    for linha in c.fetchall():
        print(linha)
# Read specific records
def leitura_registros():
    c.execute("SELECT * FROM produtos WHERE valor > 60.0")
    for linha in c.fetchall():
        print(linha)
# Read specific columns
def leitura_colunas():
    c.execute("SELECT * FROM produtos")
    for linha in c.fetchall():
        print(linha[3])
# Update
def atualiza_dados():
c.execute("UPDATE produtos SET valor = 70.00 WHERE valor > 80.0")
conn.commit()
# Delete
def remove_dados():
c.execute("DELETE FROM produtos WHERE valor = 62.0")
conn.commit()
# Generate a chart from the data in the database
def dados_grafico():
c.execute("SELECT id, valor FROM produtos")
ids = []
valores = []
dados = c.fetchall()
for linha in dados:
ids.append(linha[0])
valores.append(linha[1])
plt.bar(ids, valores)
plt.show()
# Generating charts
dados_grafico()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Charts
|
7,059
|
<ASSISTANT_TASK:>
Python Code:
exam_scores = [67,78,94,45,55,66]
print("scores: " ,exam_scores)
exam_scores = [67,78,94,45,55]
print("score 2: " ,exam_scores[1])
print("score 3: " ,exam_scores[2])
print("score 2 & 3: " ,exam_scores[1:3])
exam_scores = [67,78,94,45,55]
exam_scores[2] = 90
print("score: " ,exam_scores[2])
exam_scores = [67,78,94,45,55]
print ("Here are the scores:",exam_scores)
exam_scores = [67,78,94,45,55]
exam_scores.append(32) #add a variable to this list
print ("Here are the scores:",exam_scores)
# extend(list) - appends another list
exam_scores = [67,78,94,45,55]
exam_scores2 = [54,73,65]
exam_scores.extend(exam_scores2)
print (exam_scores)
# insert(index,item)-insert an item at a given index
exam_scores = [67,78,94,45,55]
exam_scores.insert(4,90)
print (exam_scores)
exam_scores = [67,78,94,45,55]
exam_scores.pop(2)
print (exam_scores)
exam_scores = [67,78,94,45,55]
popped_value = exam_scores.pop(2)
print (exam_scores)
print("Popped value=", popped_value)
exam_scores = [67,78,94,45,55]
exam_scores.reverse()
print (exam_scores)
names = ["James", "John", "Andy", "Ben", "Chris", "Thomas"]
sorted_names = sorted(names)
print("Sorted names:" ,sorted_names)
names = ["James", "John", "Andy", "Ben", "Chris", "Thomas"]
names.sort()
print("Sorted names:",names)
values = [5, 7, 4, 6, 1, 2]
sorted_values = sorted(values)
print("Sorted values:", sorted_values)
values = [5, 7, 4, 6, 1, 2, 45, 12]
sorted_values = sorted(values,reverse=True)
print("Sorted values:", sorted_values)
values = [5, 7, 4, 6, 1, 2, 45, 12]
values.sort(reverse=True)
print("Sorted values:",values)
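As an illustrative aside (the list below reuses the example names, but this cell is not part of the original handout), `sorted()` also accepts a `key` function for custom orderings, and Python's sort is stable, so ties keep their original order:

```python
# Sort by name length instead of alphabetically.
names = ["James", "John", "Andy", "Ben", "Chris", "Thomas"]
by_length = sorted(names, key=len)  # shortest name first; ties keep input order
```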
exam_scores = [67,78,94,45,55]
length = len(exam_scores)
print("number of scores:",length)
x = 5
y = 5
if x == y :
print("x and y are the same")
a = 5
b = 4
if a < b:
print("a is less than b") #conditional block must be indented four spaces or a tab
else:
print("a is not less than b")
name1 = "Alistair"
name2 = "Alastair"
if name1 == name2:
print("names are the same")
else:
print("names are not the same")
biol733_mark = 75
biol734_mark = 72
if biol733_mark >= 70 and biol734_mark >= 70:
print("You're getting a distinction :-)")
else:
print("You're not getting a distinction :-(")
biol733_mark = 34
biol734_mark = 55
if biol733_mark <50 or biol734_mark < 50:
print("You've failed a module :-( ")
else:
print("You've passed your modules :-) ")
day = "Monday"
if day != "Monday" or day != "Friday":  # deliberately wrong: "or" makes this always True
print("Alistair is researching")
else:
print("Alistair is not researching")
day = "Monday"
if day != "Monday" and day != "Friday":
print("Alistair is researching\n")
else:
print("Alistair is not researching\n")
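The two cells above contrast `or` and `and` when negating conditions. As a hedged aside (not from the handout), De Morgan's law explains why: negating "Monday or Friday" requires `and`, which a truth-table check confirms:

```python
# De Morgan's law: not (A or B) == (not A) and (not B), for every combination.
for a in (True, False):
    for b in (True, False):
        assert (not (a or b)) == ((not a) and (not b))
```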
a = 4
b = 4
c = 6
if a == b:
print("a equals b")
if b >= c:
print("and b is greater or equal to c")
else:
print("and b is less than c")
else:
print("a doesn't equal b")
module_code = "BIOL734"
if module_code == "BIOL007":
module_name = "Statistics for Life Science"
elif module_code == "BIOL733":
module_name = "Perl Programming"
elif module_code == "BIOL734":
module_name = "Post-genomic technologies"
else:
module_name = "Unknown module code"
print("The module is " + module_name + "\n")
a = 4
b = 5
if a == b:
print("a equals b\n")
else:
exit("Error - a doesn't equal b\n")
print("Program continues...\n")
for x in range(1, 6):
print("Row number " + str(x))
exam_scores = [67,78,94,45,55]
counter = 1
for x in exam_scores:
print("score " + str(counter) + ": " + str(x))
counter +=1 # “+=“ is the same as counter = counter + 1 incrementing
a = 1
while a < 5:
print("The value of a is ",a)
a+=1
# the version below is commented out: the condition "4 < 5" never changes,
# so this loop would run forever
# a = 1
# while 4<5:
#     print("The value of a is ",a)
#     a+=1
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lists can be accessed by numerical position (aka index)
Step2: Slicing can also be used with strings in the same way. Just imagine that each character in the string is the same as a list element
Step3: Print all values in a list
Step4: Note
Step5: Other useful list methods
Step6: Sometimes we may want to capture the popped value.
Step7: Sorting list
Step8: sort() – is a method and will replace the original list object
Step9: Sorting lists numerically
Step10: Sorting big to small
Step11: Finding the length of a list
Step12: CONDITIONALS
Step13: Examples of logical operators in action
Step14: Boolean logic
Step15: Using or
Step16: Converting English to Boolean
Step17: This is wrong!
Step18: i.e. with negatives both conditions have to be true
Step19: Note
Step20: This tests the if statement first, then the first elsif statement and so on.
Step21: exit is very useful for error checking in your program e.g. if you want to check that something must happen to avoid a runtime error (remember example of dividing by zero last week...)
Step22: Note
Step23: while loops
Step24: while loops are used a lot for reading from files (lecture 5) – the conditional statement is that you keep reading until there are no lines left in the file
|
7,060
|
<ASSISTANT_TASK:>
Python Code:
import tohu
from tohu import *
from utils import print_generated_sequence
print(f"Tohu version: {tohu.__version__}")
class FoobarGenerator(CustomGenerator):
a = Integer(low=1000, high=3000)
b = Sequential(prefix="Foo_", digits=2)
c = Float(low=1.0, high=4.0)
g1 = FoobarGenerator()
g2 = FoobarGenerator()
g1.reset(seed=12345)
g2.reset(seed=9999)
print_generated_sequence(g1, num=3, sep='\n')
print_generated_sequence(g2, num=3, sep='\n')
print("-------------------------------------------------")
g1.reset(seed=12345)
g2.reset(seed=9999)
g1.reset(seed=12345)
print_generated_sequence(g1, num=3, sep='\n')
print_generated_sequence(g2, num=3, sep='\n')
class FoobarGenerator(CustomGenerator):
a1 = HashDigest(length=8)
a2 = HashDigest(length=8)
a3 = HashDigest(length=8)
b1 = Integer(low=0, high=1000)
b2 = Integer(low=0, high=1000)
b3 = Integer(low=0, high=1000)
g = FoobarGenerator()
g.reset(seed=99999)
print_generated_sequence(g, num=10, sep='\n')
g = NumpyRandomGenerator(method="normal", loc=3.0, scale=5.0)
g.reset(seed=12345); print_generated_sequence(g, num=4)
g.reset(seed=99999); print_generated_sequence(g, num=4)
g1 = NumpyRandomGenerator(method="normal", loc=3.0, scale=5.0)
g2 = NumpyRandomGenerator(method="normal", loc=3.0, scale=5.0)
g1.reset(seed=12345)
g2.reset(seed=99999)
print_generated_sequence(g1, num=4)
print_generated_sequence(g2, num=4)
g1 = Integer(0, 100)
g2 = Integer(0, 100)
g1.reset(12345)
g2.reset(12345)
print_generated_sequence(g1, num=10)
print_generated_sequence(g2, num=10)
z = Zip(g1, g2)
h1, h2 = Split(z)
z.reset(seed=12345)
print_generated_sequence(h1, num=10)
print_generated_sequence(h2, num=10)
g1 = FakerGenerator(method="name")
g2 = FakerGenerator(method="name")
g1.reset(seed=12345)
g2.reset(seed=12345)
print_generated_sequence(g1, num=5)
print_generated_sequence(g2, num=5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Resetting one generator should not reset others of the same type
Step2: The random generators which produce the attributes of items generated by g1 and g2 must be independent of each other. This regression test checks this by making sure that g1.reset() does not interfere with g2.reset(). In particular, the two outputs below should be identical (because the final call g1.reset(12345) should not interfere with the previous call g2.reset(9999)).
Step3: Sub-generators of custom generator should be initialised with separate seeds
Step4: NumpyRandomGenerator
Step5: Zip should use different seeds internally to reset its input generators
Step6: If .reset() is called on a Zip generator, it should use different seeds internally to reset each of its constituent input generators. In particular, the output of h1 and h2 below should be different.
Step7: Multiple FakerGenerator instances should be independent
|
7,061
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import time
%matplotlib inline
### function for shuffling the data and labels
def shuffle_in_unison(features, labels):
rng_state = np.random.get_state()
np.random.shuffle(features)
np.random.set_state(rng_state)
np.random.shuffle(labels)
### calculate classification errors
# return a percentage: (number misclassified)/(total number of datapoints)
def calc_classification_error(predictions, class_labels):
n = predictions.size
num_of_errors = 0.
for idx in xrange(n):
if (predictions[idx] >= 0.5 and class_labels[idx]==0) or (predictions[idx] < 0.5 and class_labels[idx]==1):
num_of_errors += 1
return num_of_errors/n
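The helper above counts misclassifications with an explicit loop. As an illustrative aside (this is not the notebook's original helper), the same error rate can be computed in one vectorized numpy expression:

```python
import numpy as np

# Threshold predictions at 0.5 and compare against the 0/1 labels;
# the mean of the mismatch mask is the misclassification rate.
def calc_classification_error_vec(predictions, class_labels):
    predicted = (np.asarray(predictions) >= 0.5).astype(int)
    return float(np.mean(predicted != np.asarray(class_labels)))
```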
from sklearn.decomposition import PCA
# load the 403 x 360,000 matrix
br_paintings = np.load(open('../data/bob_ross/bob_ross_paintings.npy','rb'))
# perform PCA again
pca = PCA(n_components=400)
start_time = time.time()
pca_paintings = pca.fit_transform(br_paintings)
end_time = time.time()
# remove the br_paintings from memory
br_paintings = None
print "Training took a total of %.2f seconds." %(end_time-start_time)
print "Preserved percentage of original variance: %.2f%%" %(pca.explained_variance_ratio_.sum() * 100)
print "Dataset is now of size: %d x %d"%(pca_paintings.shape)
br_labels_data = pd.read_csv('../data/bob_ross/elements-by-episode.csv')
br_labels_data.head()
labels = (br_labels_data['HILLS'] + br_labels_data['MOUNTAIN'] + br_labels_data['MOUNTAINS'] \
+ br_labels_data['SNOWY_MOUNTAIN'] > 0).astype('int8').as_matrix()
print "Contains mountain?: "+str(bool(labels[5]))
recon_img = pca.inverse_transform(pca_paintings[5,:])
plt.imshow(np.reshape(recon_img, (300, 400, 3)))
plt.show()
# set the random number generator for reproducability
np.random.seed(182)
# shuffle data
N = pca_paintings.shape[0]
shuffle_in_unison(pca_paintings, labels)
# split into train and test sets
train_features = pca_paintings[:int(.8*N), :]
test_features = pca_paintings[int(.8*N):, :]
train_labels = labels[:int(.8*N)]
test_labels = labels[int(.8*N):]
np.random.seed(182)
from sklearn.linear_model import LogisticRegression
# initialize and train a logistic regression model
lr_model = LogisticRegression()
lr_model.fit(train_features, train_labels)
# compute error on test data
lr_predictions = lr_model.predict(test_features)
one_model_test_error_rate = calc_classification_error(lr_predictions, test_labels)
print "Classification error on test set: %.2f%%" %(one_model_test_error_rate*100)
# compute the baseline error since the classes are imbalanced
print "Baseline Error: %.2f%%" %((sum(test_labels)*100.)/len(test_labels))
### function for bootstrap resampling
def bootstrap_resample(features, labels, n_resamples):
N = features.shape[0]
idxs = np.arange(N)
# numpy's choice() handles the sampling w/ replacement
resampled_idxs = np.random.choice(idxs, size=(N,n_resamples), replace=True)
boot_samps_x = []
boot_samps_y = []
for i in xrange(n_resamples):
boot_samps_x.append(features[resampled_idxs[:,i],:])
boot_samps_y.append(labels[resampled_idxs[:,i]])
return boot_samps_x, boot_samps_y
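The resampling step above relies on `np.random.choice(..., replace=True)`. A tiny illustration (toy indices, not the notebook's data) of what one bootstrap draw looks like: some indices repeat and others are left out entirely.

```python
import numpy as np

# Draw 5 indices from 0..4 with replacement; duplicates are expected.
np.random.seed(0)
idxs = np.arange(5)
sample = np.random.choice(idxs, size=5, replace=True)
```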
def fit_and_predict_on_bootstrap_samples(model, bootstrapped_features, bootstrapped_labels, \
test_features, n_bootstrap_samples):
n_test = test_features.shape[0]
ensemb_probs = np.zeros((n_test,))
ensemb_preds = np.zeros((n_test,))
for idx in xrange(n_bootstrap_samples):
print "training model #%d" %(idx+1)
model.fit(bootstrapped_features[idx], bootstrapped_labels[idx])
ensemb_probs += model.predict_proba(test_features)[:,1]
ensemb_preds += model.predict(test_features)
ensemb_probs /= n_bootstrap_samples
ensemb_preds /= n_bootstrap_samples
ensemb_probs = np.around(ensemb_probs)
ensemb_preds = np.around(ensemb_preds)
return ensemb_probs, ensemb_preds
np.random.seed(182)
n_bootstrap_samples = 7
bootstrapped_features, bootstrapped_labels = bootstrap_resample(train_features, train_labels, n_bootstrap_samples)
ensembled_probs, ensembled_preds = \
fit_and_predict_on_bootstrap_samples(lr_model, bootstrapped_features, bootstrapped_labels, \
test_features, n_bootstrap_samples)
print
print "Averaging probabilities: classification error on test set is %.2f%%" \
%(calc_classification_error(ensembled_probs, test_labels)*100)
print "Averaging predictions: classification error on test set is %.2f%%" \
%(calc_classification_error(ensembled_preds, test_labels)*100)
print
print "One logistic regression model error: %.2f%%"%(one_model_test_error_rate*100)
# compute the baseline error since the classes are imbalanced
print "Baseline error: %.2f%%" %((sum(test_labels)*100.)/len(test_labels))
np.random.seed(182)
# import the three new classifiers
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
# initialize models
d_tree_model = DecisionTreeClassifier()
knn_model = KNeighborsClassifier(n_neighbors=5)
nb_model = GaussianNB()
# fit models
d_tree_model.fit(train_features, train_labels)
knn_model.fit(train_features, train_labels)
nb_model.fit(train_features, train_labels)
# predict on test data
tree_predictions = d_tree_model.predict(test_features)
knn_predictions = knn_model.predict(test_features)
nb_predictions = nb_model.predict(test_features)
# average predictions
# add in the logistic regression predictions calcuated previously
avg_predictions = np.around((tree_predictions + knn_predictions + nb_predictions + lr_predictions)/4.)
print "Averaging predictions: classification error on test set is %.2f%%" \
%(calc_classification_error(avg_predictions, test_labels)*100)
print
print "One logistic regression model error: %.2f%%"%(one_model_test_error_rate*100)
# compute the baseline error since the classes are imbalanced
print "Baseline error: %.2f%%" %((sum(test_labels)*100.)/len(test_labels))
np.random.seed(182)
### TRAINING
# calculate probabilities on the training data
tree_probs = d_tree_model.predict_proba(train_features)[:,1][np.newaxis].T
knn_probs = knn_model.predict_proba(train_features)[:,1][np.newaxis].T
nb_probs = nb_model.predict_proba(train_features)[:,1][np.newaxis].T
logReg_probs = lr_model.predict_proba(train_features)[:,1][np.newaxis].T
# combine into a new 'feature' matrix
train_probs_matrix = np.hstack([tree_probs, knn_probs, nb_probs, logReg_probs])
# train logistic regression
meta_classifier = LogisticRegression()
meta_classifier.fit(train_probs_matrix, train_labels)
# plot the weights learned for each classifier
f,ax = plt.subplots()
ticks = np.arange(4)
ax.bar(ticks, meta_classifier.coef_[0])
ax.set_xticks(ticks+.4)
ax.set_xticklabels(['Decision Tree', 'kNN', 'Naive Bayes', 'Log. Regression'])
ax.set_title('Weights Learned for Each Classifier')
plt.show()
### TESTING
# calculate probabilities on the test data
tree_probs = d_tree_model.predict_proba(test_features)[:,1][np.newaxis].T
knn_probs = knn_model.predict_proba(test_features)[:,1][np.newaxis].T
nb_probs = nb_model.predict_proba(test_features)[:,1][np.newaxis].T
logReg_probs = lr_model.predict_proba(test_features)[:,1][np.newaxis].T
# combine into a new 'feature' matrix
test_probs_matrix = np.hstack([tree_probs, knn_probs, nb_probs, logReg_probs])
stacked_predictions = meta_classifier.predict(test_probs_matrix)
print "Averaging predictions: classification error on test set is %.2f%%" %(calc_classification_error(stacked_predictions, test_labels)*100)
print
print "One logistic regression model error: %.2f%%"%(one_model_test_error_rate*100)
# compute the baseline error since the classes are imbalanced
print "Baseline error: %.2f%%" %((sum(test_labels)*100.)/len(test_labels))
train_stacked_predictions = meta_classifier.predict(train_probs_matrix)
print "Stacking train error: %.2f%%" %(calc_classification_error(train_stacked_predictions, train_labels)*100)
from sklearn.datasets import fetch_lfw_pairs
lfw_train = fetch_lfw_pairs(subset='train')
lfw_test = fetch_lfw_pairs(subset='test')
lfw_train_pairs = lfw_train['pairs']
lfw_train_targets = lfw_train['target']
lfw_test_pairs = lfw_test['pairs']
lfw_test_targets = lfw_test['target']
print "The training data is of size: %d instances x %d faces x %d pixels x %d pixels" %(lfw_train_pairs.shape)
print "The test data is of size: %d instances x %d faces x %d pixels x %d pixels" %(lfw_test_pairs.shape)
face_idx=0
# subplot containing first image
ax1 = plt.subplot(1,2,1)
ax1.imshow(lfw_train_pairs[face_idx,0,:,:],cmap='Greys_r')
# subplot containing second image
ax2 = plt.subplot(1,2,2)
ax2.imshow(lfw_train_pairs[face_idx,1,:,:],cmap='Greys_r')
plt.show()
train_x = np.reshape(lfw_train_pairs, (2200, 5828))
train_y = lfw_train_targets
test_x = np.reshape(lfw_test_pairs, (1000, 5828))
test_y = lfw_test_targets
# print the shapes just to check its what we expect
print train_x.shape
print train_y.shape
print test_x.shape
print test_y.shape
np.random.seed(182)
# initialize and train a logistic regression model
lr_model = LogisticRegression()
lr_model.fit(train_x, train_y)
# compute error on test data
lr_predictions = lr_model.predict(test_x)
one_model_test_error_rate = calc_classification_error(lr_predictions, test_y)
print "Classification error on test set: %.2f%%" %(one_model_test_error_rate*100)
# compute the baseline error since the classes are imbalanced
print "Baseline Error: %.2f%%" %((sum(test_y)*100.)/len(test_y))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Again we need functions for shuffling the data and calculating classification errors.
Step2: 0.1 Load the dataset of paintings
Step3: We want to formulate a binary classification problem. In the data folder there's a file that has labels denoting what is in each painting (tree, mountain, etc.). Let's load it...
Step4: Let's make two classes
Step5: Make training and test split...
Step6: 0.2 Run a baseline classifier
Step7: 1. The Bias-Variance Tradeoff
Step8: 2.2 Training on Bootstrap Samples
Step9: Now run the function...
Step10: 2. Voting
Step11: 3. Stacking Models
Step12: Ah! The training error is zero percent--a dead giveaway that the added power of stacking caused us to overfit. One way of prevent this overfitting is to split the training data and fit the base learners and meta-classifier on different subsets.
Step13: Let's visualize the images...
Step14: As a last step of pre-processing, let's flatten the data tensor...
Step15: And just to get you started, here's code to train a logistic regression classifier...
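Step12 suggests fighting the stacking overfit by fitting the base learners and the meta-classifier on disjoint subsets of the training data; a dependency-free sketch of that split (the function name and sizes are illustrative, not from the notebook):

```python
import numpy as np

def stacking_split(n, frac=0.5, seed=0):
    # Disjoint index sets: one half trains the base learners, the other
    # half trains the meta-classifier on their predictions, so the
    # meta-classifier never scores rows the base learners memorized.
    rng = np.random.RandomState(seed)
    idx = rng.permutation(n)
    cut = int(n * frac)
    return idx[:cut], idx[cut:]

base_idx, meta_idx = stacking_split(10)
print(len(base_idx), len(meta_idx))  # 5 5
```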
|
7,062
|
<ASSISTANT_TASK:>
Python Code:
!python3 -c "import kfp; print('KFP SDK version: {}'.format(kfp.__version__))"
import os
import json
from functools import partial
import kfp
import pprint
import yaml
from jinja2 import Template
from kfp.v2 import dsl
from kfp.v2.compiler import compiler
from kfp.v2.dsl import Dataset
from kfp.v2.google.client import AIPlatformClient
project_id='woven-rush-197905'
project_number='297370817971'
af_registry_location='asia-southeast1'
af_registry_name='mlops-vertex-kit'
components_dir='../components/'
def _load_custom_component(project_id: str,
af_registry_location: str,
af_registry_name: str,
components_dir: str,
component_name: str):
component_path = os.path.join(components_dir,
component_name,
'component.yaml.jinja')
with open(component_path, 'r') as f:
component_text = Template(f.read()).render(
project_id=project_id,
af_registry_location=af_registry_location,
af_registry_name=af_registry_name)
return kfp.components.load_component_from_text(component_text)
load_custom_component = partial(_load_custom_component,
project_id=project_id,
af_registry_location=af_registry_location,
af_registry_name=af_registry_name,
components_dir=components_dir)
preprocess_op = load_custom_component(component_name='data_preprocess')
train_op = load_custom_component(component_name='train_model')
check_metrics_op = load_custom_component(component_name='check_model_metrics')
create_endpoint_op = load_custom_component(component_name='create_endpoint')
test_endpoint_op = load_custom_component(component_name='test_endpoint')
deploy_model_op = load_custom_component(component_name='deploy_model')
monitor_model_op = load_custom_component(component_name='monitor_model')
pipeline_region='asia-southeast1'
pipeline_root='gs://vertex_pipeline_demo_root/pipeline_root'
data_region='asia-southeast1'
input_dataset_uri='bq://woven-rush-197905.vertex_pipeline_demo.banknote_authentication'
gcs_data_output_folder='gs://vertex_pipeline_demo_root/datasets/training'
training_data_schema='VWT:float;SWT:float;KWT:float;Entropy:float;Class:int'
data_pipeline_root='gs://vertex_pipeline_demo_root/compute_root'
training_container_image_uri=f'{af_registry_location}-docker.pkg.dev/{project_id}/{af_registry_name}/training:latest'
serving_container_image_uri=f'{af_registry_location}-docker.pkg.dev/{project_id}/{af_registry_name}/serving:latest'
custom_job_service_account=f'{project_number}-compute@developer.gserviceaccount.com'
training_container_image_uri,serving_container_image_uri,custom_job_service_account
train_additional_args = json.dumps({
'num_leaves_hp_param_min': 6,
'num_leaves_hp_param_max': 11,
'max_depth_hp_param_min': -1,
'max_depth_hp_param_max': 4,
'num_boost_round': 300,
'min_data_in_leaf': 5
})
train_additional_args
@dsl.pipeline(name='training-pipeline-template')
def pipeline(project_id: str,
data_region: str,
gcs_data_output_folder: str,
input_dataset_uri: str,
training_data_schema: str,
data_pipeline_root: str,
training_container_image_uri: str,
train_additional_args: str,
serving_container_image_uri: str,
custom_job_service_account: str,
hptune_region: str,
hp_config_suggestions_per_request: int,
hp_config_max_trials: int,
metrics_name: str,
metrics_threshold: float,
endpoint_machine_type: str,
endpoint_min_replica_count: int,
endpoint_max_replica_count: int,
endpoint_test_instances: str,
monitoring_user_emails: str,
monitoring_log_sample_rate: float,
monitor_interval: int,
monitoring_default_threshold: float,
monitoring_custom_skew_thresholds: str,
monitoring_custom_drift_thresholds: str,
machine_type: str = "n1-standard-8",
accelerator_count: int = 0,
accelerator_type: str = 'ACCELERATOR_TYPE_UNSPECIFIED',
vpc_network: str = "",
enable_model_monitoring: str = 'False'):
dataset_importer = kfp.v2.dsl.importer(
artifact_uri=input_dataset_uri,
artifact_class=Dataset,
reimport=False)
preprocess_task = preprocess_op(
project_id=project_id,
data_region=data_region,
gcs_output_folder=gcs_data_output_folder,
gcs_output_format="CSV",
input_dataset=dataset_importer.output)
train_task = train_op(
project_id=project_id,
data_region=data_region,
data_pipeline_root=data_pipeline_root,
input_data_schema=training_data_schema,
training_container_image_uri=training_container_image_uri,
train_additional_args=train_additional_args,
serving_container_image_uri=serving_container_image_uri,
custom_job_service_account=custom_job_service_account,
input_dataset=preprocess_task.outputs['output_dataset'],
machine_type=machine_type,
accelerator_count=accelerator_count,
accelerator_type=accelerator_type,
hptune_region=hptune_region,
hp_config_max_trials=hp_config_max_trials,
hp_config_suggestions_per_request=hp_config_suggestions_per_request,
vpc_network=vpc_network)
check_metrics_task = check_metrics_op(
metrics_name=metrics_name,
metrics_threshold=metrics_threshold,
basic_metrics=train_task.outputs['basic_metrics'])
create_endpoint_task = create_endpoint_op(
project_id=project_id,
data_region=data_region,
data_pipeline_root=data_pipeline_root,
display_name='endpoint-classification-template',
create_if_not_exists=True)
deploy_model_task = deploy_model_op(
project_id=project_id,
data_region=data_region,
data_pipeline_root=data_pipeline_root,
machine_type=endpoint_machine_type,
min_replica_count=endpoint_min_replica_count,
max_replica_count=endpoint_max_replica_count,
model=train_task.outputs['output_model'],
endpoint=create_endpoint_task.outputs['endpoint'])
test_endpoint_task = test_endpoint_op(
project_id=project_id,
data_region=data_region,
data_pipeline_root=data_pipeline_root,
endpoint=create_endpoint_task.outputs['endpoint'],
test_instances=endpoint_test_instances,
).after(deploy_model_task)
with dsl.Condition(enable_model_monitoring == 'True', name='Monitoring'):
monitor_model_task = monitor_model_op(
project_id=project_id,
data_region=data_region,
user_emails=monitoring_user_emails,
log_sample_rate=monitoring_log_sample_rate,
monitor_interval=monitor_interval,
default_threshold=monitoring_default_threshold,
custom_skew_thresholds=monitoring_custom_skew_thresholds,
custom_drift_thresholds=monitoring_custom_drift_thresholds,
endpoint=create_endpoint_task.outputs['endpoint'],
instance_schema=train_task.outputs['instance_schema'],
dataset=preprocess_task.outputs['output_dataset'])
monitor_model_task.after(deploy_model_task)
compiler.Compiler().compile(
pipeline_func=pipeline,
package_path="training_pipeline_job.json"
)
api_client = AIPlatformClient(
project_id=project_id,
region=pipeline_region)
test_instances = json.dumps([
{"VWT":3.6216,"SWT":8.6661,"KWT":-2.8073,"Entropy":-0.44699,"Class":"0"},
{"VWT":4.5459,"SWT":8.1674,"KWT":-2.4586,"Entropy":-1.4621,"Class":"0"},
{"VWT":3.866,"SWT":-2.6383,"KWT":1.9242,"Entropy":0.10645,"Class":"0"},
{"VWT":-3.7503,"SWT":-13.4586,"KWT":17.5932,"Entropy":-2.7771,"Class":"1"},
{"VWT":-3.5637,"SWT":-8.3827,"KWT":12.393,"Entropy":-1.2823,"Class":"1"},
{"VWT":-2.5419,"SWT":-0.65804,"KWT":2.6842,"Entropy":1.1952,"Class":"1"}
])
test_instances
pipeline_params = {
'project_id': project_id,
'data_region': data_region,
'gcs_data_output_folder': gcs_data_output_folder,
'input_dataset_uri': input_dataset_uri,
'training_data_schema': training_data_schema,
'data_pipeline_root': data_pipeline_root,
'training_container_image_uri': training_container_image_uri,
'train_additional_args': train_additional_args,
'serving_container_image_uri': serving_container_image_uri,
'custom_job_service_account': custom_job_service_account,
'hptune_region':"asia-east1",
'hp_config_suggestions_per_request': 5,
'hp_config_max_trials': 30,
'metrics_name': 'au_prc',
'metrics_threshold': 0.4,
'endpoint_machine_type': 'n1-standard-4',
'endpoint_min_replica_count': 1,
'endpoint_max_replica_count': 1,
'endpoint_test_instances': test_instances,
'monitoring_user_emails': 'luoshixin@google.com',
'monitoring_log_sample_rate': 0.8,
'monitor_interval': 3600,
'monitoring_default_threshold': 0.3,
'monitoring_custom_skew_thresholds': 'VWT:.5,SWT:.2,KWT:.7,Entropy:.4',
'monitoring_custom_drift_thresholds': 'VWT:.5,SWT:.2,KWT:.7,Entropy:.4',
'enable_model_monitoring': 'True'
}
response = api_client.create_run_from_job_spec(
job_spec_path="training_pipeline_job.json",
pipeline_root=pipeline_root,
parameter_values=pipeline_params,
enable_caching=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then define the pipeline using the following function
Step2: Compile and run the end-to-end ML pipeline
Step3: Next, instantiate an API client
Step4: Next, kick off a pipeline run
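The component-loading code in this record renders Jinja templates into concrete component YAML before handing them to KFP; a minimal sketch of that render step (the template text below is hypothetical, not one of the repository's real component files):

```python
from jinja2 import Template

# Hypothetical fragment of a component.yaml.jinja file.
template_text = (
    "implementation:\n"
    "  container:\n"
    "    image: {{af_registry_location}}-docker.pkg.dev/"
    "{{project_id}}/{{af_registry_name}}/training:latest\n"
)

# render() substitutes every {{placeholder}} with the supplied value.
rendered = Template(template_text).render(
    project_id="demo-project",
    af_registry_location="asia-southeast1",
    af_registry_name="mlops-vertex-kit",
)
print(rendered)
```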
|
7,063
|
<ASSISTANT_TASK:>
Python Code:
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')
from utils import set_pyplot_params
set_pyplot_params()
from scipy.stats import weibull_min
def weibull_dist(lam, k):
return weibull_min(k, scale=lam)
lam = 3
k = 0.8
actual_dist = weibull_dist(lam, k)
import numpy as np
from empiricaldist import Cdf
from utils import decorate
qs = np.linspace(0, 12, 101)
ps = actual_dist.cdf(qs)
cdf = Cdf(ps, qs)
cdf.plot()
decorate(xlabel='Duration in time',
ylabel='CDF',
title='CDF of a Weibull distribution')
np.random.seed(17)
data = actual_dist.rvs(10)
data
from utils import make_uniform
lams = np.linspace(0.1, 10.1, num=101)
prior_lam = make_uniform(lams, name='lambda')
ks = np.linspace(0.1, 5.1, num=101)
prior_k = make_uniform(ks, name='k')
from utils import make_joint
prior = make_joint(prior_lam, prior_k)
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
densities = weibull_dist(lam_mesh, k_mesh).pdf(data_mesh)
densities.shape
likelihood = densities.prod(axis=2)
likelihood.sum()
from utils import normalize
posterior = prior * likelihood
normalize(posterior)
def update_weibull(prior, data):
"""Update the prior based on data."""
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
densities = weibull_dist(lam_mesh, k_mesh).pdf(data_mesh)
likelihood = densities.prod(axis=2)
posterior = prior * likelihood
normalize(posterior)
return posterior
posterior = update_weibull(prior, data)
from utils import plot_contour
plot_contour(posterior)
decorate(title='Posterior joint distribution of Weibull parameters')
from utils import marginal
posterior_lam = marginal(posterior, 0)
posterior_k = marginal(posterior, 1)
import matplotlib.pyplot as plt
plt.axvline(3, color='C5')
posterior_lam.plot(color='C4', label='lambda')
decorate(xlabel='lam',
ylabel='PDF',
title='Posterior marginal distribution of lam')
plt.axvline(0.8, color='C5')
posterior_k.plot(color='C12', label='k')
decorate(xlabel='k',
ylabel='PDF',
title='Posterior marginal distribution of k')
print(lam, posterior_lam.credible_interval(0.9))
print(k, posterior_k.credible_interval(0.9))
np.random.seed(19)
start = np.random.uniform(0, 8, size=10)
start
np.random.seed(17)
duration = actual_dist.rvs(10)
duration
import pandas as pd
d = dict(start=start, end=start+duration)
obs = pd.DataFrame(d)
obs = obs.sort_values(by='start', ignore_index=True)
obs
censored = obs['end'] > 8
obs.loc[censored, 'end'] = 8
obs['status'] = 1                # 1 means the stay completed within the window
obs.loc[censored, 'status'] = 0  # 0 means the observation was censored at week 8
def plot_lifelines(obs):
"""Plot a line for each observation.

obs: DataFrame
"""
for y, row in obs.iterrows():
start = row['start']
end = row['end']
status = row['status']
if status == 0:
# ongoing
plt.hlines(y, start, end, color='C0')
else:
# complete
plt.hlines(y, start, end, color='C1')
plt.plot(end, y, marker='o', color='C1')
decorate(xlabel='Time (weeks)',
ylabel='Dog index',
title='Lifelines showing censored and uncensored observations')
plt.gca().invert_yaxis()
plot_lifelines(obs)
obs['T'] = obs['end'] - obs['start']
data1 = obs.loc[~censored, 'T']
data2 = obs.loc[censored, 'T']
data1
data2
posterior1 = update_weibull(prior, data1)
def update_weibull_incomplete(prior, data):
"""Update the prior using incomplete data."""
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
# evaluate the survival function
probs = weibull_dist(lam_mesh, k_mesh).sf(data_mesh)
likelihood = probs.prod(axis=2)
posterior = prior * likelihood
normalize(posterior)
return posterior
posterior2 = update_weibull_incomplete(posterior1, data2)
plot_contour(posterior2)
decorate(title='Posterior joint distribution, incomplete data')
posterior_lam2 = marginal(posterior2, 0)
posterior_k2 = marginal(posterior2, 1)
posterior_lam.plot(color='C5', label='All complete',
linestyle='dashed')
posterior_lam2.plot(color='C2', label='Some censored')
decorate(xlabel='lambda',
ylabel='PDF',
title='Marginal posterior distribution of lambda')
posterior_k.plot(color='C5', label='All complete',
linestyle='dashed')
posterior_k2.plot(color='C12', label='Some censored')
decorate(xlabel='k',
ylabel='PDF',
title='Posterior marginal distribution of k')
download('https://gist.github.com/epogrebnyak/7933e16c0ad215742c4c104be4fbdeb1/raw/c932bc5b6aa6317770c4cbf43eb591511fec08f9/lamps.csv')
df = pd.read_csv('lamps.csv', index_col=0)
df.head()
from empiricaldist import Pmf
pmf_bulb = Pmf(df['f'].to_numpy(), df['h'])
pmf_bulb.normalize()
pmf_bulb.mean()
lams = np.linspace(1000, 2000, num=51)
prior_lam = make_uniform(lams, name='lambda')
ks = np.linspace(1, 10, num=51)
prior_k = make_uniform(ks, name='k')
prior_bulb = make_joint(prior_lam, prior_k)
data_bulb = np.repeat(df['h'], df['f'])
len(data_bulb)
posterior_bulb = update_weibull(prior_bulb, data_bulb)
plot_contour(posterior_bulb)
decorate(title='Joint posterior distribution, light bulbs')
lam_mesh, k_mesh = np.meshgrid(
prior_bulb.columns, prior_bulb.index)
means = weibull_dist(lam_mesh, k_mesh).mean()
means.shape
prod = means * posterior_bulb
prod.to_numpy().sum()
def joint_weibull_mean(joint):
"""Compute the mean of a joint distribution of Weibulls."""
lam_mesh, k_mesh = np.meshgrid(
joint.columns, joint.index)
means = weibull_dist(lam_mesh, k_mesh).mean()
prod = means * joint
return prod.to_numpy().sum()
def update_weibull_between(prior, data, dt=12):
"""Update the prior based on data."""
lam_mesh, k_mesh, data_mesh = np.meshgrid(
prior.columns, prior.index, data)
dist = weibull_dist(lam_mesh, k_mesh)
cdf1 = dist.cdf(data_mesh)
cdf2 = dist.cdf(data_mesh - dt)  # use the dt parameter, not a hard-coded 12
likelihood = (cdf1 - cdf2).prod(axis=2)
posterior = prior * likelihood
normalize(posterior)
return posterior
posterior_bulb2 = update_weibull_between(prior_bulb, data_bulb)
plot_contour(posterior_bulb2)
decorate(title='Joint posterior distribution, light bulbs')
joint_weibull_mean(posterior_bulb)
joint_weibull_mean(posterior_bulb2)
lam = 1550
k = 4.25
t = 1000
prob_dead = weibull_dist(lam, k).cdf(t)
prob_dead
from utils import make_binomial
n = 100
p = prob_dead
dist_num_dead = make_binomial(n, p)
dist_num_dead.plot(label='known parameters')
decorate(xlabel='Number of dead bulbs',
ylabel='PMF',
title='Predictive distribution with known parameters')
posterior_series = posterior_bulb.stack()
posterior_series.head()
pmf_seq = []
for (k, lam) in posterior_series.index:
prob_dead = weibull_dist(lam, k).cdf(t)
pmf = make_binomial(n, prob_dead)
pmf_seq.append(pmf)
from utils import make_mixture
post_pred = make_mixture(posterior_series, pmf_seq)
dist_num_dead.plot(label='known parameters')
post_pred.plot(label='unknown parameters')
decorate(xlabel='Number of dead bulbs',
ylabel='PMF',
title='Posterior predictive distribution')
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
import scipy.stats
def gamma_dist(k, theta):
"""Makes a gamma object.

k: shape parameter
theta: scale parameter

returns: gamma object
"""
return scipy.stats.gamma(k, scale=theta)
# Load the data file
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/data/2203951.csv')
weather = pd.read_csv('2203951.csv')
weather.head()
rained = weather['PRCP'] > 0
rained.sum()
prcp = weather.loc[rained, 'PRCP']
prcp.describe()
cdf_data = Cdf.from_seq(prcp)
cdf_data.plot()
decorate(xlabel='Total rainfall (in)',
ylabel='CDF',
title='Distribution of rainfall on days it rained')
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This chapter introduces "survival analysis", which is a set of statistical methods used to answer questions about the time until an event.
Step2: As an example, here's a Weibull distribution with parameters $\lambda=3$ and $k=0.8$.
Step3: The result is an object that represents the distribution.
Step4: actual_dist provides rvs, which we can use to generate a random sample from this distribution.
Step5: So, given the parameters of the distribution, we can generate a sample.
Step6: And a uniform prior for $k$
Step7: I'll use make_joint to make a joint prior distribution for the two parameters.
Step8: The result is a DataFrame that represents the joint prior, with possible values of $\lambda$ across the columns and values of $k$ down the rows.
Step9: Now we can use weibull_dist to compute the PDF of the Weibull distribution for each pair of parameters and each data point.
Step10: The likelihood of the data is the product of the probability densities along axis=2.
Step11: Now we can compute the posterior distribution in the usual way.
Step13: The following function encapsulates these steps.
Step14: Here's how we use it.
Step15: And here's a contour plot of the joint posterior distribution.
Step16: It looks like the range of likely values for $\lambda$ is about 1 to 4, which contains the actual value we used to generate the data, 3.
Step17: And compute the posterior means and 90% credible intervals.
Step18: The vertical gray line shows the actual value of $\lambda$.
Step19: The posterior distributions are wide, which means that with only 10 data points we can't estimate the parameters precisely.
Step20: Incomplete Data
Step21: Now let's suppose that the residence times follow the Weibull distribution we used in the previous example.
Step22: I'll use these values to construct a DataFrame that contains the arrival and departure times for each dog, called start and end.
Step23: For display purposes, I'll sort the rows of the DataFrame by arrival time.
Step24: Notice that several of the lifelines extend past the observation window of 8 weeks.
Step25: censored is a Boolean Series that is True for lifelines that extend past Week 8.
Step27: Now we can plot a "lifeline" for each dog, showing the arrival and departure times on a time line.
Step28: And I'll add one more column to the table, which contains the duration of the observed parts of the lifelines.
Step29: What we have simulated is the data that would be available at the beginning of Week 8.
Step30: For the complete data, we can use update_weibull, which uses the PDF of the Weibull distribution to compute the likelihood of the data.
Step32: For the incomplete data, we have to think a little harder.
Step33: Here's the update with the incomplete data.
Step34: And here's what the joint posterior distribution looks like after both updates.
Step35: Compared to the previous contour plot, it looks like the range of likely values for $\lambda$ is substantially wider.
Step36: Here's the posterior marginal distribution for $\lambda$ compared to the distribution we got using all complete data.
Step37: The distribution with some incomplete data is substantially wider.
Step38: In this example, the marginal distribution is shifted to the left when we have incomplete data, but it is not substantially wider.
Step39: We can load the data into a DataFrame like this
Step40: Column h contains the times when bulbs failed in hours; Column f contains the number of bulbs that failed at each time.
Step41: Because of the design of this experiment, we can consider the data to be a representative sample from the distribution of lifetimes, at least for light bulbs that are lit continuously.
Step42: Assuming that these data are well modeled by a Weibull distribution, let's estimate the parameters that fit the data.
Step43: For this example, there are 51 values in the prior distribution, rather than the usual 101. That's because we are going to use the posterior distributions to do some computationally-intensive calculations.
Step44: Although we have data for 50 light bulbs, there are only 32 unique lifetimes in the dataset. For the update, it is convenient to express the data in the form of 50 lifetimes, with each lifetime repeated the given number of times.
Step45: Now we can use update_weibull to do the update.
Step46: Here's what the posterior joint distribution looks like
Step47: To summarize this joint posterior distribution, we'll compute the posterior mean lifetime.
Step48: Now for each pair of parameters we'll use weibull_dist to compute the mean.
Step49: The result is an array with the same dimensions as the joint distribution.
Step50: Finally we compute the sum of the weighted means.
Step52: Based on the posterior distribution, we think the mean lifetime is about 1413 hours.
Step54: Incomplete Information
Step55: The probability that a value falls in an interval is the difference between the CDF at the beginning and end of the interval.
Step56: And here are the results.
Step57: Visually this result is almost identical to what we got using the PDF.
Step58: When we take into account the 12-hour interval between observations, the posterior mean is about 6 hours less.
Step59: If there are 100 bulbs and each has this probability of dying, the number of dead bulbs follows a binomial distribution.
Step60: And here's what it looks like.
Step61: But that's based on the assumption that we know $\lambda$ and $k$, and we don't.
Step62: The result is a Series with a MultiIndex that contains two "levels"
Step63: Now we can use make_mixture, passing as parameters the posterior probabilities in posterior_series and the sequence of binomial distributions in pmf_seq.
Step64: Here's what the posterior predictive distribution looks like, compared to the binomial distribution we computed with known parameters.
Step65: The posterior predictive distribution is wider because it represents our uncertainty about the parameters as well as our uncertainty about the number of dead bulbs.
Step67: Exercise
Step68: Now we need some data.
Step69: Now we can load it into a DataFrame
Step70: I'll make a Boolean Series to indicate which days it rained.
Step71: And select the total rainfall on the days it rained.
Step72: Here's what the CDF of the data looks like.
Step73: The maximum is 1.14 inches of rain in one day.
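Step55 above computes the likelihood of interval-censored lifetimes as a difference of CDFs; a minimal sketch using the chapter's Weibull parameterization and the chapter's example values for the light-bulb model:

```python
from scipy.stats import weibull_min

def weibull_dist(lam, k):
    # Same parameterization as the chapter: scale is lambda, shape is k.
    return weibull_min(k, scale=lam)

# Probability a bulb fails in the 12-hour window ending at t = 1000 hours:
# CDF(t) - CDF(t - dt), the likelihood used for data observed on a schedule.
dist = weibull_dist(lam=1550, k=4.25)
t, dt = 1000, 12
p_interval = dist.cdf(t) - dist.cdf(t - dt)
print(p_interval)
```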
|
7,064
|
<ASSISTANT_TASK:>
Python Code:
# Create a function that
def function(names):
# For each name in a list of names
for name in names:
# Returns the name
return name
# Create a variable of that function
students = function(['Abe', 'Bob', 'Christina', 'Derek', 'Eleanor'])
# Run the function
students
# Create a generator that
def generator(names):
# For each name in a list of names
for name in names:
# Yields a generator object
yield name
# Same as above, create a variable for the generator
students = generator(['Abe', 'Bob', 'Christina', 'Derek', 'Eleanor'])
# Run the generator
students
# Return the next student
next(students)
# Return the next student
next(students)
# Return the next student
next(students)
# List all remaining students in the generator
list(students)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we have a problem: we only got back the name of the first student. Why? Because the function only ran the for name in names loop once before returning!
Step2: Everything has been the same so far, but now things get interesting. Above, when students was a function call, it returned one name. However, now that students refers to a generator, it yields a generator object of names!
Step3: What can we do with a generator object? A lot! As a generator, students will yield each student in the list of students
Step4: It is interesting to note that if we use list(students) we can see all the students still remaining in the generator object's iteration
|
7,065
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import bruges as bg
w, top, base, ref = bg.models.wedge()
plt.imshow(w, interpolation='none')
plt.axvline(ref, color='k', ls='--')
plt.plot(top, 'r-', lw=4)
plt.plot(base, 'r-', lw=4)
plt.show()
import numpy as np
vps = np.array([2320, 2350, 2350])
rhos = np.array([2650, 2600, 2620])
impedances = vps * rhos
w, top, base, ref = bg.models.wedge(strat=impedances)
plt.imshow(w, interpolation='none')
plt.axvline(ref, color='k', ls='--')
plt.plot(top, 'r-', lw=4)
plt.plot(base, 'r-', lw=4)
plt.colorbar()
plt.show()
w, top, base, ref = bg.models.wedge(depth=(100, 600, 100),
width=(200, 1600, 200),
strat=(0, (1, 2, 1, 2, 1), 3),
mode='linear'
)
plt.imshow(w, interpolation='none')
plt.axvline(ref, color='k', ls='--')
plt.plot(top, 'r-', lw=4)
plt.plot(base, 'r-', lw=4)
plt.show()
w, top, base, ref = bg.models.wedge(depth=(10., 80, 10),
width=(10, 80, 10),
strat=(0, (1, 2, 2, 1, 2, 1, 0, 1, 1,), 3), # Integer labels in the wedge
thickness=(1, 1),
mode='linear',
)
plt.imshow(w, interpolation='none')
plt.axvline(ref, color='k', ls='--')
plt.plot(top, 'r-', lw=4)
plt.plot(base, 'r-', lw=4)
plt.show()
w, top, base, ref = bg.models.wedge(depth=(10., 80, 10),
width=(10, 80, 10),
strat=(1.48, (2.10, 2.25, 2.35), 2.40), # Floats in the wedge.
thickness=(1, 0.5),
mode='linear',
)
plt.imshow(w, interpolation='none')
plt.axvline(ref, color='k', ls='--')
plt.plot(top, 'r-', lw=4)
plt.plot(base, 'r-', lw=4)
cb = plt.colorbar()
cb.ax.invert_yaxis()
plt.show()
confs = ['both', 'top', 'bottom']
fig, axs = plt.subplots(ncols=len(confs), figsize=(12, 4))
for ax, conf in zip(axs, confs):
w, top, base, ref = bg.models.wedge(strat=((0, 1, 0), (2, 3, 2, 3, 2), (4, 5, 4)),
conformance=conf)
ax.imshow(w, interpolation='none')
ax.axvline(ref, color='k', ls='--')
ax.plot(top, 'r-', lw=4)
ax.plot(base, 'r-', lw=4)
ax.set_title(f"{conf} conformant")
plt.show()
modes = ['linear', 'root', 'power', 'sigmoid']
fig, axs = plt.subplots(ncols=len(modes), figsize=(15, 5))
for ax, mode in zip(axs, modes):
w, top, base, ref = bg.models.wedge(mode=mode)
ax.imshow(w, interpolation='none')
ax.axvline(ref, color='k', ls='--')
ax.plot(top, 'r-', lw=4)
ax.plot(base, 'r-', lw=4)
ax.set_title(mode)
plt.show()
def wavy(start, stop, num):
"""Custom wedge shape."""
x = np.linspace(0, 10*np.pi, num)
y = np.sin(x) + x
# Normalize to 0-1.
y_ = (y - np.min(y)) / (np.max(y)-np.min(y))
# Scale to required output.
return min(start, stop) + abs(stop-start) * y_
# The wedge function will pass 'left' and 'right' thicknesses.
# You only need to worry about the case where left < right.
left, right = 1, 2
y = wavy(left, right, 100)
plt.plot(y)
plt.ylim(right, 0)
plt.show()
w, top, base, ref = bg.models.wedge(mode=wavy, thickness=(1, 0))
plt.imshow(w, interpolation='none')
plt.axvline(ref, color='k', ls='--')
plt.plot(top, 'r-', lw=4)
plt.plot(base, 'r-', lw=4)
plt.show()
w, top, base, ref = bg.models.wedge(strat=(0, (1, 2, 1, 1, 2, 1), 3), # Binary wedge.
breadth=100)
w.shape, top.shape, base.shape, ref
slices = [0, 50, 99]
fig, axs = plt.subplots(ncols=len(slices), figsize=(16, 4))
for ax, slic in zip(axs, slices):
ax.imshow(w[..., slic], interpolation='none')
ax.plot(top[:, slic], 'r-', lw=4)
ax.plot(base[:, slic], 'r-', lw=4)
ax.set_title(f"Wedge slice: {slic}")
plt.show()
slices = [30, 50, 90]
fig, axs = plt.subplots(ncols=len(slices), figsize=(16, 4))
for ax, slic in zip(axs, slices):
ax.imshow(w[:, slic], interpolation='none')
ax.plot(top[slic], 'r-', lw=4)
ax.plot(base[slic], 'r-', lw=4)
ax.set_title(f"Net:gross slice: {slic}")
plt.show()
vps = np.array([2320, 2350, 2350, 2370])
rhos = np.array([2650, 2600, 2620, 2610])
vp = vps[w]
rho = rhos[w]
rc = bg.reflection.acoustic_reflectivity(vp, rho)
plt.imshow(rc[:, :, 50])
ricker, t = bg.filters.ricker(duration=0.064, dt=0.001, f=40)
syn = bg.filters.convolve(rc, ricker, axis=0) # Time is on the first axis.
syn.shape
plt.imshow(syn[:, :, 50])
ma = np.percentile(syn, 99.9)
vols, cmaps = [w, syn], ['viridis', 'gray']
fig, axs = plt.subplots(ncols=3, nrows=2, figsize=(14, 8))
for row, vol, cm in zip(axs, vols, cmaps):
row[0].imshow(vol[:, :, 24], aspect='auto', interpolation='none', cmap=cm, vmin=-ma if vol is syn else None, vmax=ma if vol is syn else None)
row[0].axhline(40, c='w', lw=0.67)
row[0].axvline(50, c='w', lw=0.67)
row[0].set_title(f"Wedge axis")
row[1].imshow(vol[:, 50, :], aspect='auto', interpolation='none', cmap=cm, vmin=-ma if vol is syn else None, vmax=ma if vol is syn else None)
row[1].axhline(40, c='w', lw=0.67)
row[1].axvline(24, c='w', lw=0.67)
row[1].set_title(f"Net:gross axis")
row[2].imshow(vol[40, :, :], aspect='auto', interpolation='none', cmap=cm, vmin=0 if vol is w else -ma, vmax=ma if vol is syn else None)
row[2].axhline(50, c='w', lw=0.67)
row[2].axvline(24, c='w', lw=0.67)
row[2].set_title(f"Timeslice axis")
plt.show()
log_data = np.loadtxt('../data/P-129.csv', delimiter=',', skiprows=1)
depths, dt, rhob = log_data[:, 0], log_data[:, 2], log_data[:, 4]
impedance = (1e6 / dt) * rhob
datum, top, base, floor = 500, 701, 805, 1005
log_upper = impedance[(depths >= datum) & (depths < top)]
log_wedge = impedance[(depths >= top) & (depths < base)]
log_lower = impedance[(depths >= base) & (depths < floor)]
upper_depths = depths[(depths >= datum) & (depths < top)]
wedge_depths = depths[(depths >= top) & (depths < base)]
lower_depths = depths[(depths >= base) & (depths < floor)]
plt.figure(figsize=(2, 8))
plt.plot(log_upper, upper_depths, lw=2)
plt.plot(log_wedge, wedge_depths, lw=1)
plt.plot(log_lower, lower_depths, lw=1)
plt.ylim(floor, datum)
w, top, base, ref = bg.models.wedge(depth=(400, 208, 400),
width=(20, 260, 20),
strat=(log_upper, log_wedge, log_lower),
mode='sigmoid', conformance='bottom',
thickness=(0, 2)
)
log = w[:, ref]
# Make a scaled version of the log for the section plot.
log_ = (log - np.mean(log)) / np.std(log)
depth = np.arange(len(log))
sf = 2.5
log_bias = np.ptp(log_)
fig, axs = plt.subplots(figsize=(16, 8), ncols=2, gridspec_kw={'width_ratios': (1, 5)})
axs[0].plot(log, depth)
axs[0].set_ylim(depth[-1], depth[0])
axs[0].set_title('Impedance')
im=axs[1].imshow(w, aspect='auto', cmap='viridis')
axs[1].plot(top, 'r-', lw=4)
axs[1].plot(base, 'r-', lw=4)
axs[1].fill_betweenx(depth, ref + log_bias + (sf*log_), ref, color='k', alpha=0.3)
# axs[1].plot(ref + log_bias + (sf*log), depth, c='k', lw=1)
axs[1].axvline(ref, color='k', ls='--', lw=1)
axs[1].set_title('Well A')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The function returns the following
Step2: And look at the result
Step3: Now the wedge contains rock properties, not integer labels.
Step4: Notice that the wedge function returns 4 things
Step5: Here's another example. We can reverse the wedge direction by passing in (1, 0.5) for the thickness. As a result, the left-hand-side is the maximum thickness of 1, the right is half the thickness.
Step6: Top and bottom conformance
Step7: Different 'wedge' shapes
Step9: If you're feeling creative, you can also give wedge() your own function (since version 0.4.3). Your function should have an API like np.linspace() (the function that produces the standard wedge shape). Here's an example
Step10: Let's use that function to make a model
Step11: Varying net
Step12: Let's look at 3 slices
Step13: Slices in/out of the page, look like this
Step14: Let's simulate the seismic. We'll provide 4 'rocks', one for each of the integers in the array
Step15: We can check the reflectivities on a vertical slice half-way through the 'breadth'
Step16: Now we can convolve with a wavelet to get a synthetic. The synthetic will, like the RC series, be three-dimensional
Step17: Let's look at the three orthogonal profiles through this synthetic
Step18: Models from well logs
Step19: And calculate an acoustic impedance log as input into our model
Step20: And now we can segment this array into upper, middle, and lower segments.
Step21: We'll create arrays of depths for the corresponding zones as well
Step22: A quick plot of the three zones shows us the stratigraphic inputs to the wedge function.
Step23: The middle piece (orange) will be fitted to the middle layer of the wedge (resulting in the number of pixels given in the depth argument). The upper (blue) and lower (green) pieces will then be cropped to fit their layers, so you must provide enough data for this to happen. The safest thing to do is to provide the entire log above and the same below.
Step24: Now we can plot everything together
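Step 9 above says a custom wedge-shape function only needs an np.linspace-like (start, stop, num) signature. A minimal standalone sketch of such a function (the name and the square-root shape are my own, not from bruges):

```python
import numpy as np

def root(start, stop, num):
    """Square-root-shaped ramp from start to stop, matching
    np.linspace's (start, stop, num) call signature."""
    t = np.sqrt(np.linspace(0, 1, num))
    return start + t * (stop - start)
```

Assuming the documented wedge() API, a function like this could then be passed as mode=root to curve the wedge instead of tapering it linearly.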
|
7,066
|
<ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -v -d -u -p pandas,scikit-learn,numpy,matplotlib
feature_dict = {i:label for i,label in zip(
range(4),
('sepal length in cm',
'sepal width in cm',
'petal length in cm',
'petal width in cm', ))}
import pandas as pd
df = pd.io.parsers.read_csv(
filepath_or_buffer='https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data',
header=None,
sep=',',
)
df.columns = [l for i,l in sorted(feature_dict.items())] + ['class label']
df.dropna(how="all", inplace=True) # to drop the empty line at file-end
df.tail()
from sklearn.preprocessing import LabelEncoder
X = df[[0,1,2,3]].values
y = df['class label'].values
enc = LabelEncoder()
label_encoder = enc.fit(y)
y = label_encoder.transform(y) + 1
label_dict = {1: 'Setosa', 2: 'Versicolor', 3:'Virginica'}
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import math
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(12,6))
for ax,cnt in zip(axes.ravel(), range(4)):
# set bin sizes
min_b = math.floor(np.min(X[:,cnt]))
max_b = math.ceil(np.max(X[:,cnt]))
bins = np.linspace(min_b, max_b, 25)
# plotting the histograms
for lab,col in zip(range(1,4), ('blue', 'red', 'green')):
ax.hist(X[y==lab, cnt],
color=col,
label='class %s' %label_dict[lab],
bins=bins,
alpha=0.5,)
ylims = ax.get_ylim()
# plot annotation
leg = ax.legend(loc='upper right', fancybox=True, fontsize=8)
leg.get_frame().set_alpha(0.5)
ax.set_ylim([0, max(ylims)+2])
ax.set_xlabel(feature_dict[cnt])
ax.set_title('Iris histogram #%s' %str(cnt+1))
# hide axis ticks
ax.tick_params(axis="both", which="both", bottom="off", top="off",
labelbottom="on", left="off", right="off", labelleft="on")
# remove axis spines
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["left"].set_visible(False)
axes[0][0].set_ylabel('count')
axes[1][0].set_ylabel('count')
fig.tight_layout()
plt.show()
from sklearn import preprocessing
preprocessing.scale(X, axis=0, with_mean=True, with_std=True, copy=False)
print()
np.set_printoptions(precision=4)
mean_vectors = []
for cl in range(1,4):
mean_vectors.append(np.mean(X[y==cl], axis=0))
print('Mean Vector class %s: %s\n' %(cl, mean_vectors[cl-1]))
S_W = np.zeros((4,4))
for cl,mv in zip(range(1,4), mean_vectors):
class_sc_mat = np.zeros((4,4)) # scatter matrix for every class
for row in X[y == cl]:
row, mv = row.reshape(4,1), mv.reshape(4,1) # make column vectors
class_sc_mat += (row-mv).dot((row-mv).T)
S_W += class_sc_mat # sum class scatter matrices
print('within-class Scatter Matrix:\n', S_W)
overall_mean = np.mean(X, axis=0)
S_B = np.zeros((4,4))
for i,mean_vec in enumerate(mean_vectors):
n = X[y==i+1,:].shape[0]
mean_vec = mean_vec.reshape(4,1) # make column vector
overall_mean = overall_mean.reshape(4,1) # make column vector
S_B += n * (mean_vec - overall_mean).dot((mean_vec - overall_mean).T)
print('between-class Scatter Matrix:\n', S_B)
eig_vals, eig_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
for i in range(len(eig_vals)):
eigvec_sc = eig_vecs[:,i].reshape(4,1)
print('\nEigenvector {}: \n{}'.format(i+1, eigvec_sc.real))
print('Eigenvalue {:}: {:.2e}'.format(i+1, eig_vals[i].real))
for i in range(len(eig_vals)):
eigv = eig_vecs[:,i].reshape(4,1)
np.testing.assert_array_almost_equal(np.linalg.inv(S_W).dot(S_B).dot(eigv),
eig_vals[i] * eigv,
decimal=6, err_msg='', verbose=True)
print('ok')
# Make a list of (eigenvalue, eigenvector) tuples
eig_pairs = [(np.abs(eig_vals[i]), eig_vecs[:,i]) for i in range(len(eig_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eig_pairs = sorted(eig_pairs, key=lambda k: k[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in decreasing order:\n')
for i in eig_pairs:
print(i[0])
print('Variance explained:\n')
eigv_sum = sum(eig_vals)
for i,j in enumerate(eig_pairs):
print('eigenvalue {0:}: {1:.2%}'.format(i+1, (j[0]/eigv_sum).real))
W = np.hstack((eig_pairs[0][1].reshape(4,1), eig_pairs[1][1].reshape(4,1)))
print('Matrix W:\n', W.real)
X_lda = X.dot(W)
assert X_lda.shape == (150,2), "The matrix is not 2x150 dimensional."
from matplotlib import pyplot as plt
def plot_step_lda():
ax = plt.subplot(111)
for label,marker,color in zip(
range(1,4),('^', 's', 'o'),('blue', 'red', 'green')):
plt.scatter(x=X_lda[:,0].real[y == label],
y=X_lda[:,1].real[y == label],
marker=marker,
color=color,
alpha=0.5,
label=label_dict[label]
)
plt.xlabel('LD1')
plt.ylabel('LD2')
leg = plt.legend(loc='upper right', fancybox=True)
leg.get_frame().set_alpha(0.5)
plt.title('LDA: Iris projection onto the first 2 linear discriminants')
# hide axis ticks
plt.tick_params(axis="both", which="both", bottom="off", top="off",
labelbottom="on", left="off", right="off", labelleft="on")
# remove axis spines
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["left"].set_visible(False)
plt.grid()
plt.tight_layout()
plt.show()
plot_step_lda()
from sklearn.decomposition import PCA as sklearnPCA
sklearn_pca = sklearnPCA(n_components=2)
X_pca = sklearn_pca.fit_transform(X)
def plot_pca():
ax = plt.subplot(111)
for label,marker,color in zip(
range(1,4),('^', 's', 'o'),('blue', 'red', 'green')):
plt.scatter(x=X_pca[:,0][y == label],
y=X_pca[:,1][y == label],
marker=marker,
color=color,
alpha=0.5,
label=label_dict[label]
)
plt.xlabel('PC1')
plt.ylabel('PC2')
leg = plt.legend(loc='upper right', fancybox=True)
leg.get_frame().set_alpha(0.5)
plt.title('PCA: Iris projection onto the first 2 principal components')
# hide axis ticks
plt.tick_params(axis="both", which="both", bottom="off", top="off",
labelbottom="on", left="off", right="off", labelleft="on")
# remove axis spines
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["left"].set_visible(False)
plt.tight_layout()
plt.grid()
plt.show()
plot_pca()
plot_step_lda()
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA  # sklearn.lda was removed in newer scikit-learn releases
# LDA
sklearn_lda = LDA(n_components=2)
X_lda_sklearn = sklearn_lda.fit_transform(X, y)
def plot_scikit_lda(X, title, mirror=1):
ax = plt.subplot(111)
for label,marker,color in zip(
range(1,4),('^', 's', 'o'),('blue', 'red', 'green')):
plt.scatter(x=X[:,0][y == label]*mirror,
y=X[:,1][y == label],
marker=marker,
color=color,
alpha=0.5,
label=label_dict[label]
)
plt.xlabel('LD1')
plt.ylabel('LD2')
leg = plt.legend(loc='upper right', fancybox=True)
leg.get_frame().set_alpha(0.5)
plt.title(title)
# hide axis ticks
plt.tick_params(axis="both", which="both", bottom="off", top="off",
labelbottom="on", left="off", right="off", labelleft="on")
# remove axis spines
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.spines["left"].set_visible(False)
plt.grid()
plt.tight_layout()
plt.show()
plot_step_lda()
plot_scikit_lda(X_lda_sklearn, title='Default LDA via scikit-learn', mirror=(-1))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <font size="1.5em">More information about the watermark magic command extension.</font>
Step 1
Step2: <a name="sample_data"></a>
Step3: $\pmb X = \begin{bmatrix} x_{1_{\text{sepal length}}} & x_{1_{\text{sepal width}}} & x_{1_{\text{petal length}}} & x_{1_{\text{petal width}}}\
Step4: $\pmb y = \begin{bmatrix}{\text{setosa}}\
Step5: From just looking at these simple graphical representations of the features, we can already tell that the petal lengths and widths are likely better suited as potential features to separate the three flower classes. In practice, instead of reducing the dimensionality via a projection (here
Step6: <a name="sample_data"></a>
Step7: <a name="sample_data"></a>
Step 2
Step8: <br>
Step9: <br>
Step 3
Step10: <br>
Step11: <br>
Step 4
Step12: <br>
Step13: <br>
Step14: <br>
Step 5
Step15: The scatter plot above represents our new feature subspace that we constructed via LDA. We can see that the first linear discriminant "LD1" separates the classes quite nicely. However, the second discriminant, "LD2", does not add much valuable information, which we've already concluded when we looked at the ranked eigenvalues in step 4.
Step16: <br>
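For reference, the LDA estimator now lives in sklearn.discriminant_analysis in current scikit-learn releases; a minimal standalone sketch with toy data (not the Iris pipeline above):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

X_toy = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0]])
y_toy = np.array([0, 0, 1, 1])

# With 2 classes, at most n_classes - 1 = 1 discriminant exists
lda = LDA(n_components=1)
X_proj = lda.fit_transform(X_toy, y_toy)
print(X_proj.shape)  # (4, 1)
```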
|
7,067
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
# Temperatures at stations
T1 = 150
T4 = 10 # celcius
# define values for thermal conductivity
k = [0.07, 0.7, 0.07]
# Length of layers
L = [0.03, 0.1, 0.03]
AR = [] # initialize empty array
for i in range(0,len(k)):
AR.append(L[i]/k[i])
print(AR, "m^2 K/W")
q = float(T1 - T4)/np.sum(AR)
print('q = ', q, 'W/m^2') # W/m^2
T2 = -q*AR[0] + T1
T3 = q*AR[2] + T4
T = [T1, T2, T3, T4] # vectorize temps
x = [0, L[0], L[0]+L[1], L[0]+L[1]+L[2]]
# Plot Temperature distribution
plt.title('Temperature Distribution Across Brick Wall')
plt.xlabel('X-location')
plt.ylabel('Temperature (C)')
plt.grid()
plt.plot(x,T)
plt.show()
# Print Temperatures
print('T1 = ', T1, 'C')
print('T2 = ', T2, 'C')
print('T3 = ', T3, 'C')
print('T4 = ', T4, 'C')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Conduction
Step2: These values can be used to calculate $\dot{Q}$ the rate of heat transfer. This can be related to a more physically relevant value, $\dot{q}$, the heat flux. Where,
Step3: The temperature in the intermediary steps can be found using the fact that $\dot{Q}$ is constant throughout the slab.
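Because $\dot{Q}$ is constant through the slab, the series-resistance calculation generalizes to any number of layers. A minimal sketch (the helper name is mine, not the notebook's):

```python
def heat_flux(T_hot, T_cold, k, L):
    """Steady-state 1-D heat flux (W/m^2) through plane layers in
    series: q = dT / sum(L_i / k_i)."""
    resistance = sum(Li / ki for Li, ki in zip(L, k))  # m^2 K / W
    return (T_hot - T_cold) / resistance

# The notebook's brick wall: 150 C inside, 10 C outside
q = heat_flux(150, 10, k=[0.07, 0.7, 0.07], L=[0.03, 0.1, 0.03])
print(q)  # about 140 W/m^2, matching q above
```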
|
7,068
|
<ASSISTANT_TASK:>
Python Code:
weightDict = {
'C':12,
'H':1,
'O':16,
'Cl':35
#add more if needed.
}
ethanol = {'C':2, 'H':6, 'O':1}
water = {'H':2, 'O':1}
HCl = {'H':1, 'Cl':1}
#Finish...
ethanol2 = [('C',1), ('H',3), ('C',1), ('H',2), ('O',1), ('H',1)]
acetic2 = [('C',1), ('H',3), ('C',1), ('O',1), ('O',1), ('H',1)]
#Finish
#Try to do it
molecule = input("Write a molecule: ")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Parsing the molecular formula is not a trivial task that we will do later. We start by assuming that the formula has been parsed.
Step2: From that, calculate the total weight
Step3: Now imagine we also accept formulas in an extended way, for example ethanol as $\mathrm{CH_3CH_2OH}$. In that case it makes sense that our parsing procedure returns a list of tuples such as
Step4: From that, we could also create a dictionary such as the previous one, but we can also calculate the weight directly
Step5: Parsing
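A minimal sketch of the pieces the steps above leave as exercises: the total weight from a parsed dictionary, and a simple regex parser for compact formulas like 'C2H6O' (the helper names are my own):

```python
import re

weightDict = {'C': 12, 'H': 1, 'O': 16, 'Cl': 35}

def molecular_weight(parsed):
    """Total weight of a parsed {element: count} dictionary."""
    return sum(weightDict[el] * n for el, n in parsed.items())

def parse_formula(formula):
    """Parse e.g. 'C2H6O' into {'C': 2, 'H': 6, 'O': 1}."""
    parsed = {}
    for el, count in re.findall(r'([A-Z][a-z]?)(\d*)', formula):
        parsed[el] = parsed.get(el, 0) + (int(count) if count else 1)
    return parsed

print(molecular_weight(parse_formula('C2H6O')))  # 46 (ethanol)
```

Because counts accumulate per element, the same parser also handles extended forms such as 'CH3CH2OH' from Step 3.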
|
7,069
|
<ASSISTANT_TASK:>
Python Code:
totalDomainsOccurrences = 0
for num in domains[1]:
totalDomainsOccurrences += num
length = 10
width = 0.8
fig = plt.figure()
plt.barh(range(length), np.asarray(domains[1][0:length] * 100 / totalDomainsOccurrences), width, align='center', color='b')
plt.grid(which='both')
plt.xlabel(r'$\%$ on the Total Number of Matches')
plt.ylim(-width)
plt.yticks(range(length), domains[0][0:length])
plt.tight_layout()
plt.savefig("./Figures/TopDomains.pdf", format="pdf")
plt.show()
# plotly_fig = tls.mpl_to_plotly( fig )
# print(plotly_fig)
# plot_url = py.plot_mpl(plotly_fig, filename='mpl-axes-labels')
# plot_url = py.plot_mpl(plotly_fig, filename='mpl-annotation-with-custom-font-size')
totalDataOccurrences = 0
for num in dataDomains['refHost']:
totalDataOccurrences += num
length = 10
width = 0.8
ax = plt.barh(range(length), np.asarray(dataDomains['refHost'][0:length] * 100 / totalDataOccurrences), width, align='center', color='#b60628', edgecolor='white', hatch="//")
plt.grid(which='both')
plt.xlabel(r'$\%$ on the Total Number of References')
plt.ylim(-width)
plt.yticks(range(length), dataDomains['index'][0:length])
plt.tight_layout()
plt.savefig("./Figures/TopDomainsData.pdf", format="pdf")
plt.show()
totalPediaOccurrences = 0
for num in pediaDomains['refHost']:
totalPediaOccurrences += num
length = 10
width = 0.8
ax = plt.barh(np.arange(length), np.asarray(pediaDomains['refHost'][0:length] * 100 / totalPediaOccurrences), width, align='center', color='#06b694', edgecolor='white', hatch='x')
plt.grid(which='both')
plt.xlabel(r'$\%$ on the Total Number of Citations')
plt.ylim(-width)
plt.yticks(np.arange(length), pediaDomains['index'][0:length])
plt.tight_layout()
plt.savefig("./Figures/TopDomainsPedia.pdf", format="pdf")
plt.show()
totalPediaOccurrences = 0
for num in topPediaDomains['citeTld']:
totalPediaOccurrences += num
length = 10
width = 0.8
ax = plt.barh(range(length), np.asarray(topPediaDomains['citeTld'][0:length] * 100 / totalPediaOccurrences), width, align='center', color='#06b694')
plt.grid(which='both')
plt.xlabel(r'$\%$ on the Total Number of Matches')
plt.ylim(-width)
plt.yticks(range(length), topPediaDomains['index'][0:length])
plt.tight_layout()
plt.savefig("./Figures/TopLevelDomainsPedia.pdf", format="pdf")
plt.show()
totalDataOccurrences = 0
for num in topDataDomains['refTld']:
totalDataOccurrences += num
length = 10
width = 0.8
ax1 = plt.barh(range(length), np.asarray(topDataDomains['refTld'][0:length] * 100 / totalDataOccurrences), width, align='center', color='#b60628')
plt.grid(which='both')
plt.xlabel(r'$\%$ on the Total Number of Matches')
plt.ylim(-width)
plt.yticks(np.arange(length), topDataDomains['index'][0:length])
plt.tight_layout()
plt.savefig("./Figures/TopLevelDomainsData.pdf", format="pdf")
plt.show()
tempPediaDomains = np.zeros(length)
for i in range(0, len(topDataDomains['index'][0:length])):
for j in range(0, len(topPediaDomains['citeTld'])):
if topDataDomains['index'][i] == topPediaDomains['index'][j]:
tempPediaDomains[i] = topPediaDomains['citeTld'][j]
length = 10
width = 0.4
ax1 = plt.barh(np.arange(length), np.asarray(topDataDomains['refTld'][0:length] * 100 / totalDataOccurrences), width, label = 'Wikidata', color='#b60628')
ax2 = plt.barh(np.arange(length) + width, tempPediaDomains * 100 / totalPediaOccurrences, width, label = 'Wikipedia', color='#06b694')
plt.legend()
plt.grid(which='both')
plt.ylim(-width)
plt.yticks(np.arange(length) + width, topDataDomains['index'][0:length])
plt.tight_layout()
plt.savefig("./Figures/TopLevelDomainsComparison.pdf", format="pdf")
plt.show()
# plt.plot(topLevelDomainsScatter['citeTld_pc'], topLevelDomainsScatter['refTld_pc'], "o")
# plt.plot(np.log(topLevelDomainsScatter['citeTld']), np.log(topLevelDomainsScatter['refTld']), "o")
plt.plot(topLevelDomainsScatter['citeTld'], topLevelDomainsScatter['refTld'], "o", color='b')
## Find the selected n-max points; the ones that are close to the far upper-right corner in the logarithmic scale.
selected = topLevelDomainsScatter.sort_values('refTld', ascending=False)
n = 10
for index in selected[:n].index:
# plt.text(selected['citeTld'][index], selected['refTld'][index], selected['index'][index])
# We put this IF here in order to avoid clutter with overlapping labels in the graph.
if selected['index'][index] in ['gov', 'pl']:
adjust_x = 0.5 * (10 ** np.log10(selected['citeTld'][index]))
adjust_y = 0.1 * (10 ** np.log10(selected['refTld'][index]))
plt.annotate(selected['index'][index], (selected['citeTld'][index] - adjust_x, selected['refTld'][index] + adjust_y))
else:
adjust_x = 0.1 * (10 ** np.log10(selected['citeTld'][index]))
adjust_y = 0.1 * (10 ** np.log10(selected['refTld'][index]))
plt.annotate(selected['index'][index], (selected['citeTld'][index] + adjust_x, selected['refTld'][index] + adjust_y))
plt.grid(which='both', linewidth=0.2)
plt.xscale('log')
plt.yscale('log')
plt.xlabel('Wikipedia Citations')
plt.ylabel('Wikidata References')
plt.savefig("./Figures/ScatterPlot.pdf", format="pdf")
plt.show()
dataTypes = pandas.read_csv('./Data/item_type_all_wd.csv')
domainTypes = pandas.read_csv('./Data/item_types_matchdom.csv')
itemTypes = pandas.read_csv('./Data/item_types.csv')
dataTypes.sort_values(by='shareType', ascending=False, inplace=True)
dataTypes.reset_index(drop=True, inplace=True)
tempDomainTypes = np.zeros(len(dataTypes))
tempItemTypes = np.zeros(len(dataTypes))
for i in range(0, len(dataTypes)):
for j in range(0, len(domainTypes)):
if dataTypes['type'][i] == domainTypes['type'][j]:
tempDomainTypes[i] = domainTypes['shareType'][j]
for j in range(0, len(itemTypes)):
if dataTypes['type'][i] == itemTypes['type'][j]:
tempItemTypes[i] = itemTypes['shareType'][j]
width = 0.28
plt.figure(figsize=(10, 6))
ax1 = plt.barh(np.arange(0, len(dataTypes) - 1) - width, np.asarray(dataTypes['shareType'][1:len(dataTypes)]), width, label = 'Wikidata', color='k', edgecolor='white', hatch='//')
ax3 = plt.barh(np.arange(0, len(dataTypes) - 1), tempItemTypes[1:len(dataTypes)], width, label = 'Items In our Dataset', color='#06b694', edgecolor='white', hatch='|')
ax3 = plt.barh(np.arange(0, len(dataTypes) - 1) + width, tempDomainTypes[1:len(dataTypes)], width, label = 'Items with Matching Domains', color='r', edgecolor='white', hatch='x')
plt.legend()
plt.grid(which='both')
plt.xlabel(r'$\%$ on the Total Number of Matches')
plt.ylim(-2 * width)
plt.yticks(np.arange(0, len(dataTypes)) + 0.45 * width, dataTypes['type'][1:len(dataTypes)])
plt.tight_layout()
plt.savefig("./Figures/MatchedTypes.pdf", format="pdf")
plt.show()
referencesLanguagesSingle = pandas.read_csv('./Data/matching_refs_unique_lang_single_ref_count.csv', header=None)
referencesLanguages = pandas.read_csv('./Data/matching_refs_unique_lang_count.csv', header=None)
referencesLanguagesSingle.sort_values(by=1, ascending=False, inplace=True)
referencesLanguagesSingle.reset_index(drop=True, inplace=True)
totalReferencesSingleOccurrences = 0
for num in referencesLanguagesSingle[1]:
totalReferencesSingleOccurrences += num
totalReferencesOccurrences = 0
for num in referencesLanguages[1]:
totalReferencesOccurrences += num
length = 10
tempReferencesLanguages = np.zeros(length)
for i in range(0, len(referencesLanguagesSingle[1][0:length])):
for j in range(0, len(referencesLanguages[1])):
if referencesLanguages[0][j] == referencesLanguagesSingle[0][i]:
tempReferencesLanguages[i] = referencesLanguages[1][j]
width = 0.4
ax1 = plt.barh(np.arange(length), np.asarray(referencesLanguagesSingle[1][0:length] * 100 / totalReferencesSingleOccurrences), width, label = 'Total Page Matches', color='#b60628', edgecolor='white', hatch='//')
ax2 = plt.barh(np.arange(length) + width, tempReferencesLanguages * 100 / totalReferencesOccurrences, width, label = 'Unique Page Matches', color='#06b694', edgecolor='white', hatch='x')
plt.legend()
plt.grid(which='both')
plt.xlabel(r'$\%$ on the Total Number of Matches')
plt.ylim(-width)
plt.yticks(np.arange(length) + width, referencesLanguagesSingle[0][0:length])
plt.tight_layout()
plt.savefig("./Figures/ReferencesLanguages.pdf", format="pdf")
plt.show()
domainsLanguagesSingle = pandas.read_csv('./Data/matching_domain_lang_single_ref_count.csv', header=None)
domainsLanguages = pandas.read_csv('./Data/matching_domain_unique_lang_count.csv', header=None)
domainsLanguagesSingle.sort_values(by=1, ascending=False, inplace=True)
domainsLanguagesSingle.reset_index(drop=True, inplace=True)
totalDomainsSingleOccurrences = 0
for num in domainsLanguagesSingle[1]:
totalDomainsSingleOccurrences += num
totalDomainsOccurrences = 0
for num in domainsLanguages[1]:
totalDomainsOccurrences += num
length = 10
tempDomainsLanguages = np.zeros(length)
for i in range(0, len(domainsLanguagesSingle[1][0:length])):
for j in range(0, len(domainsLanguages[1])):
if domainsLanguages[0][j] == domainsLanguagesSingle[0][i]:
tempDomainsLanguages[i] = domainsLanguages[1][j]
width = 0.4
ax1 = plt.barh(np.arange(length), np.asarray(domainsLanguagesSingle[1][0:length] * 100 / totalDomainsSingleOccurrences), width, label = 'Total Domain Matches', color='#b60628', edgecolor='white', hatch='//')
ax2 = plt.barh(np.arange(length) + width, tempDomainsLanguages * 100 / totalDomainsOccurrences, width, label = 'Unique Domain Matches', color='#06b694', edgecolor='white', hatch='x')
plt.legend()
plt.grid(which='both')
plt.xlabel(r'$\%$ on the Total Number of Matches')
plt.ylim(-width)
plt.yticks(np.arange(length) + width, domainsLanguagesSingle[0][0:length])
plt.tight_layout()
plt.savefig("./Figures/DomainsLanguages.pdf", format="pdf")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Top used domains in Wikidata.
Step2: Top used domains in Wikipedia.
Step3: Matching domains across both Wikipedia and Wikidata.
Step4: Scatter Plot of Top-Level Domains
Step5: Check the type distribution across Wikidata, items in our dataset and items with matching domains.
Step6: Single web page matches between Wikidata items and corresponding Wikipedia articles, by Wikipedia language version.
Step7: Domain matches between Wikidata items and corresponding Wikipedia articles, by Wikipedia language version.
|
7,070
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import os
import pathlib
curr_dir = pathlib.Path("./")
rsfmri_basedir = str((curr_dir / "raw_data/autism/").resolve())
def parse_dataset():
_target_column_name = 'asd'
_prediction_label_names = [0, 1]
subject_id = pd.read_csv(os.path.join(rsfmri_basedir, 'data', 'train.csv'), header=None)
# read the list of the subjects
df_participants = pd.read_csv(os.path.join(rsfmri_basedir, 'data', 'participants.csv'), index_col=0)
df_participants.columns = ['participants_' + col for col in df_participants.columns]
# load the structural and functional MRI data
df_anatomy = pd.read_csv(os.path.join(rsfmri_basedir, 'data', 'anatomy.csv'), index_col=0)
df_anatomy.columns = ['anatomy_' + col for col in df_anatomy.columns]
df_fmri = pd.read_csv(os.path.join(rsfmri_basedir, 'data', 'fmri_filename.csv'), index_col=0)
df_fmri.columns = ['fmri_' + col for col in df_fmri.columns]
# load the QC for structural and functional MRI data
df_anatomy_qc = pd.read_csv(os.path.join(rsfmri_basedir, 'data', 'anatomy_qc.csv'), index_col=0)
df_fmri_qc = pd.read_csv(os.path.join(rsfmri_basedir, 'data', 'fmri_qc.csv'), index_col=0)
# rename the columns for the QC to have distinct names
df_anatomy_qc = df_anatomy_qc.rename(columns={"select": "anatomy_select"})
df_fmri_qc = df_fmri_qc.rename(columns={"select": "fmri_select"})
X = pd.concat([df_participants, df_anatomy, df_anatomy_qc, df_fmri, df_fmri_qc], axis=1)
X = X.loc[subject_id[0]]
y = X['participants_asd']
y.name = _target_column_name  # a Series has no 'columns' attribute; set its name instead
X = X.drop('participants_asd', axis=1)
return X, y.values
data, labels = parse_dataset()
fmri_data = data[[col for col in data.columns if col.startswith('fmri')]]
fmri_msdl_filenames = fmri_data['fmri_msdl']
fmri = np.array([pd.read_csv(rsfmri_basedir + "/" + subject_filename, header=None).values
for subject_filename in fmri_msdl_filenames])
anatomy = data[[col for col in data.columns if col.startswith('anatomy')]]
anatomy = anatomy.drop(columns='anatomy_select')
np.save('data/fmri_autism_ts.npy', fmri)
np.save('data/fmri_autism_anatomy.npy', anatomy)
np.save('data/fmri_autism_labels.npy', labels)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fetch the dataset
Step2: The following code is heavily based on the code provided by the competition's organizers.
Step3: Dump arrays
|
7,071
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pylab as pl
%matplotlib inline
filename = './titanic-data.csv'
titanic_df = pd.read_csv(filename)
titanic_df.describe()
titanic_df = titanic_df.fillna(method='pad')  # forward-fill: propagate the previous value
titanic_df.describe()
sort_pclass = titanic_df.groupby('Pclass').count()['PassengerId']
print sort_pclass
titanic_df.groupby('Pclass')['PassengerId'].count().plot(kind = 'pie',autopct = '%.0f%%')
plt.title('Pclass VS Count')
plt.show()
Pclass_survived = titanic_df.groupby('Pclass').mean()['Survived']
print Pclass_survived.plot.bar()
sort_sex = titanic_df.groupby('Sex').count()['PassengerId']
print sort_sex
Sex_survived = titanic_df.groupby('Sex').mean()['Survived']
print Sex_survived
print Sex_survived.plot.bar()
titanic_df['Age_bins'] = pd.cut(titanic_df['Age'],range(0,80,10))
Age_survived = titanic_df.groupby('Age_bins').mean()['Survived']
Sort_survived = titanic_df.groupby('Age_bins').count()['Survived']
print Age_survived
print Sort_survived
Age_survived.plot(kind='bar', stacked=True)
sort_SibSp = titanic_df.groupby('SibSp').count()['PassengerId']
print sort_SibSp
titanic_df.groupby('SibSp')['PassengerId'].count().plot(kind = 'pie',autopct = '%.0f%%')
plt.title('SibSp VS Count')
plt.show()
SibSp_survived = titanic_df.groupby('SibSp').mean()['Survived']
print SibSp_survived
SibSp_survived.plot.bar()
sort_Parch = titanic_df.groupby('Parch').count()['PassengerId']
print sort_Parch
titanic_df.groupby('Parch')['PassengerId'].count().plot(kind = 'pie',autopct = '%.0f%%')
plt.title('Parch VS Count')
plt.show()
Parch_survived = titanic_df.groupby('Parch').mean()['Survived']
print Parch_survived
Parch_survived.plot.bar()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, we look at a few of the more important statistics and draw some preliminary conclusions: for example, only the 'Age' column has missing values, and the overall survival rate is only 0.383838. So the first thing to do is fill in the missing age values.
Step2: We can see the Age column now has the expected total of 891 values, so we can go on to analyze survival rates.
Step3: From the class proportions and the survival-rate bar chart above, first class has the highest survival rate and third class the lowest. Although first-class passengers make up only a small share of the total, their survival rate is very high, while third class holds more than half the passengers yet has a survival rate of only about 20%. This indirectly reflects a real-world pattern: the higher the social status, the better the odds of survival (or, put another way, first class had better safety provisions).
Step4: We can clearly see that although there were far more men than women on board, women survived at a rate of 74% while men survived at only 19%. This suggests that men protected women during the escape, generally women first and men after.
Step5: Children aged 0-10 had the highest survival rate, which shows that infants and young children accompanied by their parents were well protected. Passengers over 60 had a very low survival rate, from which we can infer that limited mobility kept the elderly from escaping the disaster in time. Survival rates across the 10-60 age brackets are roughly equal.
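The age analysis above relies on pd.cut to bucket ages into 10-year intervals; a standalone sketch of that call (note that with edges range(0, 80, 10) the last edge is 70, so ages above 70 fall outside every bin and become NaN):

```python
import pandas as pd

ages = pd.Series([4, 22, 35, 58, 68])
age_bins = pd.cut(ages, bins=range(0, 80, 10))
print([str(b) for b in age_bins])
# ['(0, 10]', '(20, 30]', '(30, 40]', '(50, 60]', '(60, 70]']
```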
|
7,072
|
<ASSISTANT_TASK:>
Python Code:
import hail as hl
hl.init()
from hail.plot import show
from pprint import pprint
hl.plot.output_notebook()
hl.utils.get_1kg('data/')
hl.import_vcf('data/1kg.vcf.bgz').write('data/1kg.mt', overwrite=True)
mt = hl.read_matrix_table('data/1kg.mt')
mt.rows().select().show(5)
mt.row_key.show(5)
mt.s.show(5)
mt.entry.take(5)
table = (hl.import_table('data/1kg_annotations.txt', impute=True)
.key_by('Sample'))
table.describe()
table.show(width=100)
print(mt.col.dtype)
mt = mt.annotate_cols(pheno = table[mt.s])
mt.col.describe()
pprint(table.aggregate(hl.agg.counter(table.SuperPopulation)))
pprint(table.aggregate(hl.agg.stats(table.CaffeineConsumption)))
table.count()
mt.count_cols()
mt.aggregate_cols(hl.agg.counter(mt.pheno.SuperPopulation))
pprint(mt.aggregate_cols(hl.agg.stats(mt.pheno.CaffeineConsumption)))
snp_counts = mt.aggregate_rows(hl.agg.counter(hl.Struct(ref=mt.alleles[0], alt=mt.alleles[1])))
pprint(snp_counts)
from collections import Counter
counts = Counter(snp_counts)
counts.most_common()
p = hl.plot.histogram(mt.DP, range=(0,30), bins=30, title='DP Histogram', legend='DP')
show(p)
mt.col.describe()
mt = hl.sample_qc(mt)
mt.col.describe()
p = hl.plot.histogram(mt.sample_qc.call_rate, range=(.88,1), legend='Call Rate')
show(p)
p = hl.plot.histogram(mt.sample_qc.gq_stats.mean, range=(10,70), legend='Mean Sample GQ')
show(p)
p = hl.plot.scatter(mt.sample_qc.dp_stats.mean, mt.sample_qc.call_rate, xlabel='Mean DP', ylabel='Call Rate')
show(p)
mt = mt.filter_cols((mt.sample_qc.dp_stats.mean >= 4) & (mt.sample_qc.call_rate >= 0.97))
print('After filter, %d/284 samples remain.' % mt.count_cols())
ab = mt.AD[1] / hl.sum(mt.AD)
filter_condition_ab = ((mt.GT.is_hom_ref() & (ab <= 0.1)) |
(mt.GT.is_het() & (ab >= 0.25) & (ab <= 0.75)) |
(mt.GT.is_hom_var() & (ab >= 0.9)))
fraction_filtered = mt.aggregate_entries(hl.agg.fraction(~filter_condition_ab))
print(f'Filtering {fraction_filtered * 100:.2f}% entries out of downstream analysis.')
mt = mt.filter_entries(filter_condition_ab)
mt = hl.variant_qc(mt)
mt.row.describe()
mt = mt.filter_rows(mt.variant_qc.AF[1] > 0.01)
mt = mt.filter_rows(mt.variant_qc.p_value_hwe > 1e-6)
print('Samples: %d Variants: %d' % (mt.count_cols(), mt.count_rows()))
gwas = hl.linear_regression_rows(y=mt.pheno.CaffeineConsumption,
x=mt.GT.n_alt_alleles(),
covariates=[1.0])
gwas.row.describe()
p = hl.plot.manhattan(gwas.p_value)
show(p)
p = hl.plot.qq(gwas.p_value)
show(p)
eigenvalues, pcs, _ = hl.hwe_normalized_pca(mt.GT)
pprint(eigenvalues)
pcs.show(5, width=100)
mt = mt.annotate_cols(scores = pcs[mt.s].scores)
p = hl.plot.scatter(mt.scores[0],
mt.scores[1],
label=mt.pheno.SuperPopulation,
title='PCA', xlabel='PC1', ylabel='PC2')
show(p)
gwas = hl.linear_regression_rows(
y=mt.pheno.CaffeineConsumption,
x=mt.GT.n_alt_alleles(),
covariates=[1.0, mt.pheno.isFemale, mt.scores[0], mt.scores[1], mt.scores[2]])
p = hl.plot.qq(gwas.p_value)
show(p)
p = hl.plot.manhattan(gwas.p_value)
show(p)
entries = mt.entries()
results = (entries.group_by(pop = entries.pheno.SuperPopulation, chromosome = entries.locus.contig)
.aggregate(n_het = hl.agg.count_where(entries.GT.is_het())))
results.show()
entries = entries.annotate(maf_bin = hl.if_else(entries.info.AF[0]<0.01, "< 1%",
hl.if_else(entries.info.AF[0]<0.05, "1%-5%", ">5%")))
results2 = (entries.group_by(af_bin = entries.maf_bin, purple_hair = entries.pheno.PurpleHair)
.aggregate(mean_gq = hl.agg.stats(entries.GQ).mean,
mean_dp = hl.agg.stats(entries.DP).mean))
results2.show()
table = hl.import_table('data/1kg_annotations.txt', impute=True).key_by('Sample')
mt = hl.read_matrix_table('data/1kg.mt')
mt = mt.annotate_cols(pheno = table[mt.s])
mt = hl.sample_qc(mt)
mt = mt.filter_cols((mt.sample_qc.dp_stats.mean >= 4) & (mt.sample_qc.call_rate >= 0.97))
ab = mt.AD[1] / hl.sum(mt.AD)
filter_condition_ab = ((mt.GT.is_hom_ref() & (ab <= 0.1)) |
(mt.GT.is_het() & (ab >= 0.25) & (ab <= 0.75)) |
(mt.GT.is_hom_var() & (ab >= 0.9)))
mt = mt.filter_entries(filter_condition_ab)
mt = hl.variant_qc(mt)
mt = mt.filter_rows(mt.variant_qc.AF[1] > 0.01)
eigenvalues, pcs, _ = hl.hwe_normalized_pca(mt.GT)
mt = mt.annotate_cols(scores = pcs[mt.s].scores)
gwas = hl.linear_regression_rows(
y=mt.pheno.CaffeineConsumption,
x=mt.GT.n_alt_alleles(),
covariates=[1.0, mt.pheno.isFemale, mt.scores[0], mt.scores[1], mt.scores[2]])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If the above cell ran without error, we're ready to go!
Step2: Download public 1000 Genomes data
Step3: Importing data from VCF
Step4: Next we read the written file, assigning the variable mt (for matrix table).
Step5: Getting to know our data
Step6: Alternatively
Step7: Here is how to peek at the first few sample IDs
Step8: To look at the first few genotype calls, we can use entries along with select and take. The take method collects the first n rows into a list. Alternatively, we can use the show method, which prints the first n rows to the console in a table format.
Step9: Adding column fields
Step10: A good way to peek at the structure of a Table is to look at its schema.
Step11: To peek at the first few values, use the show method
Step12: Now we'll use this table to add sample annotations to our dataset, storing the annotations in column fields in our MatrixTable. First, we'll print the existing column schema
Step13: We use the annotate_cols method to join the table with the MatrixTable containing our dataset.
Step14: Query functions and the Hail Expression Language
Step15: stats is an aggregation function that produces some useful statistics about numeric collections. We can use this to see the distribution of the CaffeineConsumption phenotype.
Step16: However, these metrics aren't perfectly representative of the samples in our dataset. Here's why
Step17: Since there are fewer samples in our dataset than in the full thousand genomes cohort, we need to look at annotations on the dataset. We can use aggregate_cols to get the metrics for only the samples in our dataset.
Step18: The functionality demonstrated in the last few cells isn't anything especially new
Step19: We can list the counts in descending order using Python's Counter class.
Step20: It's nice to see that we can actually uncover something biological from this small dataset
Step21: Quality Control
Step22: Plotting the QC metrics is a good place to start.
Step23: Often, these metrics are correlated.
Step24: Removing outliers from the dataset will generally improve association results. We can make arbitrary cutoffs and use them to filter
Step25: Next is genotype QC. It's a good idea to filter out genotypes where the reads aren't where they should be
Step26: Variant QC is a bit more of the same
Step27: These statistics actually look pretty good
Step28: These filters removed about 15% of sites (we started with a bit over 10,000). This is NOT representative of most sequencing datasets! We have already downsampled the full thousand genomes dataset to include more common variants than we'd expect by chance.
Step29: Looking at the bottom of the above printout, you can see the linear regression adds new row fields for the beta, standard error, t-statistic, and p-value.
Step30: This doesn't look like much of a skyline. Let's check whether our GWAS was well controlled using a Q-Q (quantile-quantile) plot.
Step31: Confounded!
Step32: Now that we've got principal components per sample, we may as well plot them! Human history exerts a strong effect in genetic datasets. Even with a 50MB sequencing dataset, we can recover the major human populations.
Step33: Now we can rerun our linear regression, controlling for sample sex and the first few principal components. We'll do this with input variable the number of alternate alleles as before, and again with input variable the genotype dosage derived from the PL field.
Step34: We'll first make a Q-Q plot to assess inflation...
Step35: That's more like it! This shape is indicative of a well-controlled (but not especially well-powered) study. And now for the Manhattan plot
Step36: We have found a caffeine consumption locus! Now simply apply Hail's Nature paper function to publish the result.
Step37: What if we want to group by minor allele frequency bin and hair color, and calculate the mean GQ?
Step38: We've shown that it's easy to aggregate by a couple of arbitrary statistics. These specific examples may not provide especially useful pieces of information, but the same pattern can be used to detect effects of rare variation
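The group-by/aggregate pattern Hail runs in parallel mirrors a plain dictionary reduction. A minimal plain-Python sketch (with made-up entry records, not real 1000 Genomes data) of grouping by population and chromosome to get a mean GQ:

```python
from collections import defaultdict

# Hypothetical entry records: (population, chromosome, GQ)
entries = [
    ("EUR", "1", 60), ("EUR", "1", 40), ("EAS", "1", 55),
    ("EUR", "2", 30), ("EAS", "2", 65), ("EAS", "2", 35),
]

sums = defaultdict(lambda: [0, 0])  # (pop, chrom) -> [running total, count]
for pop, chrom, gq in entries:
    acc = sums[(pop, chrom)]
    acc[0] += gq
    acc[1] += 1

# One mean per group, analogous to group_by(...).aggregate(mean_gq=...)
mean_gq = {key: total / count for key, (total, count) in sums.items()}
```

Hail's `group_by(...).aggregate(...)` does the same reduction, but distributed and lazily evaluated over the whole matrix table.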
|
7,073
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Sheraz Khan <sheraz@khansheraz.com>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne.datasets import sample
from mne.datasets.brainstorm import bst_raw
from mne import read_evokeds
from mne.viz import plot_arrowmap
print(__doc__)
path = sample.data_path()
fname = path + '/MEG/sample/sample_audvis-ave.fif'
# load evoked data
condition = 'Left Auditory'
evoked = read_evokeds(fname, condition=condition, baseline=(None, 0))
evoked_mag = evoked.copy().pick_types(meg='mag')
evoked_grad = evoked.copy().pick_types(meg='grad')
max_time_idx = np.abs(evoked_mag.data).mean(axis=0).argmax()
plot_arrowmap(evoked_mag.data[:, max_time_idx], evoked_mag.info)
# Since planar gradiometers takes gradients along latitude and longitude,
# they need to be projected to the flatten manifold span by magnetometer
# or radial gradiometers before taking the gradients in the 2D Cartesian
# coordinate system for visualization on the 2D topoplot. You can use the
# ``info_from`` and ``info_to`` parameters to interpolate from
# gradiometer data to magnetometer data.
plot_arrowmap(evoked_grad.data[:, max_time_idx], info_from=evoked_grad.info,
info_to=evoked_mag.info)
path = bst_raw.data_path()
raw_fname = path + '/MEG/bst_raw/' \
'subj001_somatosensory_20111109_01_AUX-f.ds'
raw_ctf = mne.io.read_raw_ctf(raw_fname)
raw_ctf_info = mne.pick_info(
raw_ctf.info, mne.pick_types(raw_ctf.info, meg=True, ref_meg=False))
plot_arrowmap(evoked_grad.data[:, max_time_idx], info_from=evoked_grad.info,
info_to=raw_ctf_info, scale=2e-10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plot magnetometer data as an arrowmap along with the topoplot at the time
Step2: Plot gradiometer data as an arrowmap along with the topoplot at the time
Step3: Since the Vectorview 102 system performs sparse spatial sampling of the magnetic
|
7,074
|
<ASSISTANT_TASK:>
Python Code:
# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Daniel Strohmeier <daniel.strohmeier@tu-ilmenau.de>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne.datasets import sample
from mne.inverse_sparse import mixed_norm, make_stc_from_dipoles
from mne.minimum_norm import make_inverse_operator, apply_inverse
from mne.viz import (plot_sparse_source_estimates,
plot_dipole_locations, plot_dipole_amplitudes)
print(__doc__)
data_path = sample.data_path()
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
ave_fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
cov_fname = data_path + '/MEG/sample/sample_audvis-shrunk-cov.fif'
subjects_dir = data_path + '/subjects'
# Read noise covariance matrix
cov = mne.read_cov(cov_fname)
# Handling average file
condition = 'Left Auditory'
evoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0))
evoked.crop(tmin=0, tmax=0.3)
# Handling forward solution
forward = mne.read_forward_solution(fwd_fname)
alpha = 55 # regularization parameter between 0 and 100 (100 is high)
loose, depth = 0.2, 0.9 # loose orientation & depth weighting
n_mxne_iter = 10 # if > 1 use L0.5/L2 reweighted mixed norm solver
# if n_mxne_iter > 1 dSPM weighting can be avoided.
# Compute dSPM solution to be used as weights in MxNE
inverse_operator = make_inverse_operator(evoked.info, forward, cov,
depth=depth, fixed=True,
use_cps=True)
stc_dspm = apply_inverse(evoked, inverse_operator, lambda2=1. / 9.,
method='dSPM')
# Compute (ir)MxNE inverse solution with dipole output
dipoles, residual = mixed_norm(
evoked, forward, cov, alpha, loose=loose, depth=depth, maxit=3000,
tol=1e-4, active_set_size=10, debias=True, weights=stc_dspm,
weights_min=8., n_mxne_iter=n_mxne_iter, return_residual=True,
return_as_dipoles=True, verbose=True)
t = 0.083
tidx = evoked.time_as_index(t)
for di, dip in enumerate(dipoles, 1):
print(f'Dipole #{di} GOF at {1000 * t:0.1f} ms: '
f'{float(dip.gof[tidx]):0.1f}%')
plot_dipole_amplitudes(dipoles)
# Plot dipole location of the strongest dipole with MRI slices
idx = np.argmax([np.max(np.abs(dip.amplitude)) for dip in dipoles])
plot_dipole_locations(dipoles[idx], forward['mri_head_t'], 'sample',
subjects_dir=subjects_dir, mode='orthoview',
idx='amplitude')
# Plot dipole locations of all dipoles with MRI slices
for dip in dipoles:
plot_dipole_locations(dip, forward['mri_head_t'], 'sample',
subjects_dir=subjects_dir, mode='orthoview',
idx='amplitude')
ylim = dict(eeg=[-10, 10], grad=[-400, 400], mag=[-600, 600])
evoked.pick_types(meg=True, eeg=True, exclude='bads')
evoked.plot(ylim=ylim, proj=True, time_unit='s')
residual.pick_types(meg=True, eeg=True, exclude='bads')
residual.plot(ylim=ylim, proj=True, time_unit='s')
stc = make_stc_from_dipoles(dipoles, forward['src'])
solver = "MxNE" if n_mxne_iter == 1 else "irMxNE"
plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),
fig_name="%s (cond %s)" % (solver, condition),
opacity=0.1)
morph = mne.compute_source_morph(stc, subject_from='sample',
subject_to='fsaverage', spacing=None,
sparse=True, subjects_dir=subjects_dir)
stc_fsaverage = morph.apply(stc)
src_fsaverage_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif'
src_fsaverage = mne.read_source_spaces(src_fsaverage_fname)
plot_sparse_source_estimates(src_fsaverage, stc_fsaverage, bgcolor=(1, 1, 1),
fig_name="Morphed %s (cond %s)" % (solver,
condition), opacity=0.1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run solver
Step2: Plot dipole activations
Step3: Plot residual
Step4: Generate stc from dipoles
Step5: View in 2D and 3D ("glass" brain like 3D plot)
Step6: Morph onto fsaverage brain and view
|
7,075
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
import pandas as pd
import sys
sys.path.append("../../../bayespy")
import bayespy
from bayespy.network import Builder as builder
import logging
import os
import matplotlib.pyplot as plt
from IPython.display import display
logger = logging.getLogger()
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.INFO)
bayespy.jni.attach(logger)
db_folder = bayespy.utils.get_path_to_parent_dir("")
iris = pd.read_csv(os.path.join(db_folder, "data/iris.csv"), index_col=False)
network = bayespy.network.create_network()
cluster = builder.create_cluster_variable(network, 4)
petal_length = builder.create_continuous_variable(network, "petal_length")
petal_width = builder.create_continuous_variable(network, "petal_width")
sepal_length = builder.create_continuous_variable(network, "sepal_length")
sepal_width = builder.create_continuous_variable(network, "sepal_width")
nodes = [petal_length, petal_width, sepal_length, sepal_width]
for i, node in enumerate(nodes):
builder.create_link(network, cluster, node)
for j in range(i+1, len(nodes)):
builder.create_link(network, node, nodes[j])
layout = bayespy.visual.NetworkLayout(network)
graph = layout.build_graph()
pos = layout.fruchterman_reingold_layout(graph)
layout.visualise(graph, pos)
# build the 'normal' model on two of the classes
model = bayespy.model.NetworkModel(network, logger)
with bayespy.data.DataSet(iris.drop('iris_class', axis=1), db_folder, logger) as dataset:
subset = dataset.subset(
iris[(iris.iris_class == "Iris-versicolor") | (iris.iris_class == "Iris-virginica")].index.tolist())
model.train(subset)
with bayespy.data.DataSet(iris.drop('iris_class', axis=1), db_folder, logger) as dataset:
# get the loglikelihood value for the whole model on each individual sample,
# the lower the loglikelihood value the less likely the data point has been
# generated by the model.
results = model.batch_query(dataset, [bayespy.model.QueryModelStatistics()])
display(results)
cmap = plt.cm.get_cmap('Blues_r')
fig = plt.figure(figsize=(10, 10))
k = 1
for i, v in enumerate(nodes):
for j in range(i+1, len(nodes)):
v_name = v.getName()
v1_name = nodes[j].getName()
ax = fig.add_subplot(3,2,k)
ax.set_title("{} vs {}".format(v_name, v1_name))
h = ax.scatter(x=iris[v_name].tolist(), y=iris[v1_name].tolist(), c=results['loglikelihood'].tolist(),
vmin=results.loglikelihood.min(), vmax=results.loglikelihood.max(), cmap=cmap
)
k+=1
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
fig.colorbar(h, cax=cbar_ax)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Rather than using a template to build the network, it's fairly easy to define it by hand. The network looks something like the following
Step2: Using the above network, train the model on only two of the classes (for clarity, I chose the two that are least separated, versicolor and virginica)
Step3: The network is then ready for anomaly detection; this entails applying the entire dataset to the trained model, and plotting the results. The Log Likelihood will always give a negative value, the closer to 0, the more normal the applied data sample is.
|
7,076
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import load_iris
from sklearn.cross_validation import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics
# read in the iris data
iris = load_iris()
# create X (features) and y (response)
X = iris.data
y = iris.target
# use train/test split with different random_state values
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=4)
# check classification accuracy of KNN with K=5
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print(metrics.accuracy_score(y_test, y_pred))
from sklearn.cross_validation import cross_val_score
# 10-fold cross-validation with K=5 for KNN (the n_neighbors parameter)
# First we initialize a knn model
knn = KNeighborsClassifier(n_neighbors=5)
# Secondly we use cross_val_scores to get all possible accuracies.
# It works like this, first we make the data into 10 chunks.
# Then we run KNN for 10 times and we make each chunk as testing data for each iteration.
scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')
print(scores)
# use average accuracy as an estimate of out-of-sample accuracy
print(scores.mean())
# search for an optimal value of K for KNN
# Suppose we set the range of K is from 1 to 31.
k_range = list(range(1, 31))
# An list that stores different accuracy scores.
k_scores = []
for k in k_range:
    # First, initialize a knn model with n_neighbors=k
    knn = KNeighborsClassifier(n_neighbors=k)
    # Second, use 10-fold cross-validation to get 10 scores with that model
    scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')
    k_scores.append(scores.mean())
# Make a visuliazaion for it, and please check what is the best k for knn
import matplotlib.pyplot as plt
%matplotlib inline
# plot the value of K for KNN (x-axis) versus the cross-validated accuracy (y-axis)
plt.plot(k_range, k_scores)
plt.xlabel('Value of K for KNN')
plt.ylabel('Cross-Validated Accuracy')
# 10-fold cross-validation with the best KNN model
knn = KNeighborsClassifier(n_neighbors=20)
print(cross_val_score(knn, X, y, cv=10, scoring='accuracy').mean())
# How about logistic regression? Please finish the code below and make a comparison.
# Hint, please check how we make it by knn.
from sklearn.linear_model import LogisticRegression
# initialize a logistic regression model here.
logreg = LogisticRegression()
# Then print the average score of logistic model.
print(cross_val_score(logreg, X, y, cv=10, scoring='accuracy').mean())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Question 1
Step2: From this example, we could see that if we split training and testing data only once, sometimes we get a very "good" model and sometimes a very "bad" model. From the example, we know it is not about the model itself; it is just because we use different sets of training and test data.
Step3: A new Cross-validation task
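The fold-splitting step that `cross_val_score(cv=10)` performs can be sketched in plain Python (this is an illustration of the idea, not sklearn's internal implementation): chunk the sample indices into k folds and let each fold serve once as the test set.

```python
def kfold_indices(n, k):
    # Split range(n) into k contiguous folds of near-equal size;
    # each fold serves exactly once as the test set.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return [(sorted(set(range(n)) - set(test)), test) for test in folds]

# e.g. 150 iris samples, 10 folds -> 10 (train_indices, test_indices) pairs
splits = kfold_indices(150, 10)
```

Every sample appears in exactly one test fold, so averaging the 10 fold accuracies uses each data point for testing exactly once.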
|
7,077
|
<ASSISTANT_TASK:>
Python Code:
import theano
import theano.tensor as T
x = T.scalar()
x
y = 3*(x**2) + x
type(y)
print(y)
theano.pprint(y)
theano.printing.debugprint(y)
from IPython.display import SVG
SVG(theano.printing.pydotprint(y, return_image=True, format='svg'))
y.eval({x: 2})
f = theano.function([x], y)
f(2)
SVG(theano.printing.pydotprint(f, return_image=True, format='svg'))
X = T.vector()
X = T.matrix()
X = T.tensor3()
X = T.tensor4()
X = T.vector()
X[1:-1:2]
X[ [1,2,3] ]
y = X.argmax()
y = T.cosh(X)
y = T.outer(X, X)
x = T.scalar()
y = T.log(x)
gradient = T.grad(y, x)
gradient.eval({x: 2})
import numpy as np
x = theano.shared(np.zeros((2, 3), dtype=theano.config.floatX))
x
values = x.get_value()
print(values.shape)
print(values)
x.set_value(values)
(x + 2) ** 2
((x + 2) ** 2).eval()
theano.function([], (x + 2) ** 2)()
count = theano.shared(0)
new_count = count + 1
updates = {count: new_count}
g = theano.function([], count, updates=updates)
g()
g()
g()
x = T.matrix()
y = 3*(x**2) + x
h = theano.function([x], y)
h( 2 )
h( np.array( [ [ 2. ] ] ) )
h( np.array( [ [ 2. ] ], dtype='float32' ) )
h( np.array( [ [ 2., 3. ], [ 4., 5. ] ], dtype='float32' ) )
h( np.array( [ [ 2., 3., 4. ], [ 6., 7., 8. ] ], dtype='float32' ) )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Symbolic variables
Step2: Variables can be used in expressions, but (IMPORTANT!) the result is symbolic as well
Step3: Investigating expressions
Step4: Evaluating expressions
Step5: Or compile a function
Step6: Compiled function has been transformed
Step7: Other tensor types
Step8: Numpy style indexing
Step9: Many functions/operations are available through theano.tensor or variable methods
Step10: But don't try to use numpy functions on Theano variables. Results may vary!
Step11: Shared Variables
Step12: We can get and set the variable's value
Step13: Shared variables can be used in expressions as well
Step14: Their value is used as input when evaluating
Step15: Updates
Step16: Debugging
Step17: THE ABOVE SHOULD BE AN ERROR!
Step18: THE ABOVE SHOULD BE AN ERROR!
|
7,078
|
<ASSISTANT_TASK:>
Python Code:
import arviz as az
import bambi as bmb
import numpy as np
import pandas as pd
az.style.use("arviz-darkgrid")
# Read in a tab-delimited file containing our data
data = pd.read_table("data/my_data.txt", sep="\t")
# Initialize the model
model = bmb.Model("y ~ x + z", data)
# Inspect model object
model
data = pd.read_csv("data/rrr_long.csv")
data.head(10)
# Number of rows with missing values
data.isna().any(axis=1).sum()
# Common (or fixed) effects only
bmb.Model("value ~ condition + age + gender", data, dropna=True)
# Common effects and group specific (or random) intercepts for subject
bmb.Model("value ~ condition + age + gender + (1|uid)", data, dropna=True)
# Multiple, complex group specific effects with both
# group specific slopes and group specific intercepts
bmb.Model("value ~ condition + age + gender + (1|uid) + (condition|study) + (condition|stimulus)", data, dropna=True)
model = bmb.Model("value ~ condition + age + gender + (1|uid)", data, dropna=True)
results = model.fit()
model = bmb.Model("value ~ condition + age + gender + (1|uid)", data, dropna=True)
model.build()
model.plot_priors();
# A Laplace prior with mean of 0 and scale of 10
my_favorite_prior = bmb.Prior("Laplace", mu=0, b=10)
# Set the prior when adding a term to the model; more details on this below.
priors = {"1|uid": my_favorite_prior}
bmb.Model("value ~ condition + (1|uid)", data, priors=priors, dropna=True)
subject_sd = bmb.Prior("HalfCauchy", beta=5)
subject_prior = bmb.Prior("Normal", mu=0, sd=subject_sd)
priors = {"1|uid": subject_prior}
bmb.Model("value ~ condition + (1|uid)", data, priors=priors, dropna=True)
data = bmb.load_data("admissions")
model = bmb.Model("admit ~ gre + gpa + rank", data, family="bernoulli")
results = model.fit()
from scipy import special
# Construct likelihood distribution ------------------------------
# This must use a valid PyMC3 distribution name.
# 'parent' is the name of the variable that represents the mean of the distribution.
# The mean of the Bernoulli family is given by 'p'.
likelihood = bmb.Likelihood("Bernoulli", parent="p")
# Set link function ----------------------------------------------
# There are two alternative approaches.
# 1. Pass a name that is known by Bambi
link = bmb.Link("logit")
# 2. Build everything from scratch
# link: A function that maps the response to the linear predictor
# linkinv: A function that maps the linear predictor to the response
# linkinv_backend: A function that maps the linear predictor to the response
# that works with Aesara tensors.
# bmb.math.sigmoid is a Aesara tensor function wrapped by PyMC3 and Bambi
link = bmb.Link(
    "my_logit",
    link=special.logit,
    linkinv=special.expit,
    linkinv_backend=bmb.math.sigmoid
)
# Construct the family -------------------------------------------
# Families are defined by a name, a Likelihood and a Link.
family = bmb.Family("bernoulli", likelihood, link)
# Now it's business as usual
model = bmb.Model("admit ~ gre + gpa + rank", data, family=family)
results = model.fit()
az.plot_trace(results, compact=False);
az.summary(results)
# show the names of all variables stored in the InferenceData object
list(results.posterior.data_vars)
type(model.backend.model)
model.backend.model
model.backend.model.observed_RVs
model.backend.model.unobserved_RVs
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating a model
Step2: Typically, we will initialize a Bambi Model by passing it a model formula and a pandas DataFrame. Other arguments such as family, priors, and link are available. By default, it uses family="gaussian" which implies a linear regression with normal error. We get back a model that we can immediately fit by calling model.fit().
Step3: We pass dropna=True to tell Bambi to drop rows containing missing values. The number of rows dropped is different from the number of rows with missing values because Bambi only considers columns involved in the model.
Step4: Each of the above examples specifies a full model that can be fitted using PyMC3 by doing
Step5: The above code obtains 1,000 draws (the default value) and return them as an InferenceData instance (for more details, see the ArviZ documentation). In this case, the fit() method accepts optional keyword arguments to pass onto PyMC3's sample() method, so any methods accepted by sample() can be specified here. We can also explicitly set the number of draws via the draws argument. For example, if we call fit(draws=2000, chains=2), the PyMC3 sampler will sample two chains in parallel, drawing 2,000 draws for each one. We could also specify starting parameter values, the step function to use, and so on (for full details, see the PyMC3 documentation).
Step6: Building without sampling can be useful if we want to inspect the internal PyMC3 model before we start the (potentially long) sampling process. Once we're satisfied, and wish to run the sampler, we can then simply call model.fit(), and the sampler will start running. Another good reason to build a model is to generate plot of the marginal priors using model.plot_priors().
Step7: Specifying priors
Step8: Priors specified using the Prior class can be nested to arbitrary depths--meaning, we can set any of a given prior's argument to point to another Prior instance. This is particularly useful when specifying hierarchical priors on group specific effects, where the individual group specific slopes or intercepts are constrained to share a common source distribution
Step9: The above prior specification indicates that the individual subject intercepts are to be treated as if they are randomly sampled from the same underlying normal distribution, where the variance of that normal distribution is parameterized by a separate hyperprior (a half-cauchy with beta = 5).
Step10: If no link argument is explicitly set (see below), the canonical link function (or an otherwise sensible default) will be used. The following table summarizes the currently available families and their associated links
Step11: The above example produces results identical to simply setting family='bernoulli'.
Step12: More details on this plot are available in the ArviZ documentation.
Step13: If you want to view summaries or plots for specific parameters, you can pass a list of its names
Step14: You can find detailed, worked examples of fitting Bambi models and working with the results in the example notebooks here.
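For the logit link built above, the link and inverse-link functions are exact inverses of each other: `logit` maps a probability to the linear predictor and `expit` (the sigmoid) maps it back. A quick standalone check in plain Python (not using bambi or scipy):

```python
import math

def logit(p):
    # link: response (probability) -> linear predictor
    return math.log(p / (1 - p))

def expit(x):
    # inverse link: linear predictor -> response (probability)
    return 1 / (1 + math.exp(-x))

p = 0.3
roundtrip = expit(logit(p))  # recovers p up to floating-point error
```

This inverse relationship is why `linkinv` and `linkinv_backend` must implement the same function — the backend version simply operates on tensors instead of floats.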
|
7,079
|
<ASSISTANT_TASK:>
Python Code:
import sciunit
from sciunit.models import ConstModel # One of many dummy models included for illustration.
const_model_37 = ConstModel(37, name="Constant Model 37")
from sciunit.capabilities import ProducesNumber
from sciunit.scores import ZScore # One of many SciUnit score types.
from sciunit.comparators import compute_zscore # A function for computing raw z-scores.
from sciunit import ObservationError # An exception class raised when a test is instantiated
# with an invalid observation.
class MeanTest(sciunit.Test):
    """Tests if the model predicts
    the same number as the observation."""
required_capabilities = (ProducesNumber,) # The one capability required for a model to take this test.
    score_type = ZScore # This test's 'judge' method will return a ZScore.
def validate_observation(self, observation):
if type(observation) is not dict:
raise sciunit.ObservationError("Observation must be a python dictionary")
if 'mean' not in observation:
raise sciunit.ObservationError("Observation must contain a 'mean' entry")
def generate_prediction(self, model):
return model.produce_number() # The model has this method if it inherits from the 'ProducesNumber' capability.
def compute_score(self, observation, prediction):
z = compute_zscore(observation,prediction) # Compute a z-score.
score = self.score_type(z.score) # Returns a ZScore object.
score.description = ("A z-score corresponding to the normalized location of the observation "
"relative to the predicted distribution.")
return score
observation = {'mean':37.8, 'std':2.1}
mean_37_test = MeanTest(observation, name='=37')
score = mean_37_test.judge(const_model_37)
score.summarize()
score.describe()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this chapter we will use the same toy model in Chapter 1 but write a more interesting test with additional features included in SciUnit.
Step3: Now let's write a test that validates the observation and returns more informative score type.
Step4: We've done two new things here
Step5: Instead of returning a BooleanScore, encoding a True/False value, we return a ZScore encoding a more quantitative summary of the relationship between the observation and the prediction. When we execute the test
Step6: Then we get a more quantitative summary of the results
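Concretely, with the observation used here (mean 37.8, std 2.1) and the constant model's prediction of 37, the z-score follows the standard formula — a plain-Python sketch (sciunit's internals may differ in detail, but the number is the same):

```python
observation = {'mean': 37.8, 'std': 2.1}
prediction = 37  # what ConstModel(37) produces

# Standard z-score: how many observed standard deviations
# the prediction sits from the observed mean
z = (prediction - observation['mean']) / observation['std']
```

A small |z| means the prediction sits close to the observed distribution; the sign tells us the prediction is slightly below the observed mean.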
|
7,080
|
<ASSISTANT_TASK:>
Python Code:
import os
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
print(raw.info)
info = mne.io.read_info(sample_data_raw_file)
print(info)
print(info.keys())
print() # insert a blank line
print(info['ch_names'])
print(info['chs'][0].keys())
print(mne.pick_channels(info['ch_names'], include=['MEG 0312', 'EEG 005']))
print(mne.pick_channels(info['ch_names'], include=[],
exclude=['MEG 0312', 'EEG 005']))
print(mne.pick_types(info, meg=False, eeg=True, exclude=[]))
print(mne.pick_channels_regexp(info['ch_names'], '^E.G'))
print(mne.channel_type(info, 25))
picks = (25, 76, 77, 319)
print([mne.channel_type(info, x) for x in picks])
print(raw.get_channel_types(picks=picks))
ch_idx_by_type = mne.channel_indices_by_type(info)
print(ch_idx_by_type.keys())
print(ch_idx_by_type['eog'])
print(info['nchan'])
eeg_indices = mne.pick_types(info, meg=False, eeg=True)
print(mne.pick_info(info, eeg_indices)['nchan'])
info
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As seen in the introductory tutorial <tut-overview>, when a
Step2: However, it is not strictly necessary to load the
Step3: As you can see, the
Step4: Most of the fields contain
Step5: Obtaining subsets of channels
Step6:
Step7: Note that the meg and fnirs parameters of
Step8:
Step9: To obtain several channel types at once, you could embed
Step10: Alternatively, you can get the indices of all channels of all channel types
Step11: Dropping channels from an Info object
Step12: We can also get a nice HTML representation in IPython like
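The regexp-based pick shown above (`'^E.G'`, which matches both EEG and EOG channel names) can be mimicked with the standard `re` module; a standalone sketch over a few hypothetical channel names:

```python
import re

ch_names = ['MEG 0312', 'EEG 005', 'EOG 061', 'STI 014']  # hypothetical subset
# '^E.G' matches names starting with E, any character, then G: EEG and EOG
picks = [i for i, name in enumerate(ch_names) if re.match('^E.G', name)]
picked = [ch_names[i] for i in picks]
```

This is essentially what `mne.pick_channels_regexp` does: apply `re.match` to each entry of `info['ch_names']` and return the matching indices.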
|
7,081
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'sandbox-2', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
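Note the quoting convention the template encodes: enumerated and free-text properties are set with a quoted string (`Set as follows: DOC.set_value("value")`), while boolean and numeric properties take an unquoted Python literal (`Set as follows: DOC.set_value(value)`). A small illustrative check, with hypothetical values:

```python
# Illustrative only: the two value kinds this template distinguishes.
quoted_example = "energy"   # for properties documented as DOC.set_value("value")
literal_example = True      # for properties documented as DOC.set_value(value)

assert isinstance(quoted_example, str)
assert literal_example in (True, False)
```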
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
7,082
|
<ASSISTANT_TASK:>
Python Code:
import os.path as op
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
raw_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
tr_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw-trans.fif')
raw = mne.io.read_raw_fif(raw_fname)
mne.viz.plot_alignment(raw.info, trans=None, subject='sample',
subjects_dir=subjects_dir, surfaces=['head', 'brain'])
# mne.gui.coregistration(subject='sample', subjects_dir=subjects_dir)
trans = mne.read_trans(tr_fname)
src = mne.read_source_spaces(op.join(data_path, 'MEG', 'sample',
'sample_audvis-meg-oct-6-meg-inv.fif'))
mne.viz.plot_alignment(raw.info, trans=trans, subject='sample', src=src,
subjects_dir=subjects_dir, surfaces=['head', 'white'])
sphere = mne.make_sphere_model(info=raw.info, r0='auto', head_radius='auto')
mne.viz.plot_alignment(raw.info, subject='sample', eeg='projected',
meg='helmet', bem=sphere, dig=True,
surfaces=['brain', 'inner_skull', 'outer_skull',
'outer_skin'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2:
Step3: It is quite clear that things are not well aligned for estimating the
Step4: The previous is possible if you have the surfaces available from Freesurfer.
|
7,083
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
%matplotlib inline
# Load the dataset
auto_df = pd.read_csv('data/Auto.csv', na_values = "?")
auto_df.dropna(inplace = True)
auto_df.head()
fig, ax = plt.subplots()
ax.scatter(x=auto_df['horsepower'],y=auto_df['mpg'])
ax.set_ylabel('mpg')
from sklearn.linear_model import LinearRegression
from sklearn.cross_validation import LeaveOneOut
from sklearn.metrics import mean_squared_error
clf = LinearRegression()
loo = LeaveOneOut(len(auto_df))
# loo provides the train and test indices
X = auto_df[['horsepower']].values
y = auto_df['mpg'].values
n = np.shape(X)[0]
mses =[]
for train, test in loo:
Xtrain,ytrain,Xtest,ytest = X[train],y[train],X[test],y[test]
clf.fit(Xtrain,ytrain)
ypred = clf.predict(Xtest)
mses.append(mean_squared_error(ytest,ypred))
np.mean(mses)
def loo_shortcut(X,y):
clf = LinearRegression()
clf.fit(X,y)
ypred = clf.predict(X)
xbar = np.mean(X,axis =0)
xsum = np.sum(np.power(X-xbar,2))
nrows = np.shape(X)[0]
mses = []
for row in range(0,nrows):
hi = (1 / nrows) + (np.sum(X[row] - xbar) ** 2 / xsum)
mse = ((y[row] - ypred[row])/(1-hi))**2
mses.append(mse)
return np.mean(mses)
loo_shortcut(auto_df[['horsepower']].values,auto_df['mpg'].values)
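A quick consistency check (illustrative, not from the original notebook): for simple linear regression, the leverage shortcut h_i = 1/n + (x_i - x̄)² / Σ_j (x_j - x̄)² turns a single fit into the exact leave-one-out MSE, so it should match the brute-force n-refits result on synthetic data. The data and `fit_line` helper below are made up for the check.

```python
import numpy as np

def fit_line(x, y):
    # Ordinary least squares for y = a + b*x (closed form).
    b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    a = y.mean() - b * x.mean()
    return a, b

rng = np.random.RandomState(0)
x = rng.uniform(0, 10, size=40)
y = 3.0 * x + rng.normal(scale=2.0, size=40)

# Brute-force LOOCV: n separate fits, each leaving one point out.
brute_terms = []
for i in range(len(x)):
    a_i, b_i = fit_line(np.delete(x, i), np.delete(y, i))
    brute_terms.append((y[i] - (a_i + b_i * x[i])) ** 2)
brute = np.mean(brute_terms)

# Shortcut: one fit, residuals rescaled by (1 - h_i).
a, b = fit_line(x, y)
resid = y - (a + b * x)
h = 1 / len(x) + (x - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2)
shortcut = np.mean((resid / (1 - h)) ** 2)

print(np.isclose(brute, shortcut))  # True
```

This identity (the PRESS residual e_(i) = e_i / (1 - h_i)) is exact for OLS, which is why the shortcut above needs only one model fit.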
# Applying LOOCV to choose among different complexities of the same model
auto_df['horsepower^2'] = auto_df['horsepower'] * auto_df['horsepower']
auto_df['horsepower^3'] = auto_df['horsepower^2'] * auto_df['horsepower']
auto_df['horsepower^4'] = auto_df['horsepower^3'] * auto_df['horsepower']
auto_df['horsepower^5'] = auto_df['horsepower^4'] * auto_df['horsepower']
auto_df['unit'] = 1
colnames = ["unit", "horsepower", "horsepower^2", "horsepower^3", "horsepower^4", "horsepower^5"]
cv_errors = []
for ncols in range(2,6):
X = auto_df[colnames[0:ncols]]
y = auto_df['mpg']
clf = LinearRegression()
clf.fit(X,y)
cv_errors.append(loo_shortcut(X.values,y.values))
plt.plot(range(1,5),cv_errors)
plt.xlabel('degree')
plt.ylabel('cv.error')
from sklearn.cross_validation import KFold
cv_errors = []
for ncols in range(2,6):
X = auto_df[colnames[0:ncols]].values
y = auto_df['mpg'].values
kfold = KFold(len(auto_df),n_folds = 10)
mses =[]
for train,test in kfold:
Xtrain,ytrain,Xtest,ytest = X[train],y[train],X[test],y[test]
clf.fit(Xtrain,ytrain)
ypred = clf.predict(Xtest)
mses.append(mean_squared_error(ypred,ytest))
cv_errors.append(np.mean(mses))
plt.plot(range(1,5),cv_errors)
plt.xlabel("degree")
plt.ylabel('cv.error')
from sklearn.cross_validation import Bootstrap
cv_errors = []
for ncols in range(2,6):
X = auto_df[colnames[0:ncols]].values
y = auto_df['mpg'].values
n = len(auto_df)
bs = Bootstrap(n,train_size=int(0.9*n),test_size=int(0.1*n),n_iter=10,random_state=0)
mses = []
for train,test in bs:
Xtrain,ytrain,Xtest,ytest = X[train],y[train],X[test],y[test]
clf = LinearRegression()
clf.fit(Xtrain,ytrain)
ypred = clf.predict(Xtest)
mses.append(mean_squared_error(ypred,ytest))
cv_errors.append(np.mean(mses))
plt.plot(range(1,5),cv_errors)
plt.xlabel('degree')
plt.ylabel('cv.error')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Leave One Out Cross Validation(LOOCV)
Step2: $$CV_{(n)} = \frac {1} {n} \sum_{i =1}^n (\frac{y_i - \hat y_i}{1- h_i})^2$$
Step3: K-Fold Cross Validation
Step4: Bootstrap
|
7,084
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division, print_function # only needed on py2
%matplotlib inline
import numpy as np
import tables
import matplotlib.pyplot as plt
def print_children(group):
Print all the sub-groups in `group` and leaf-nodes children of `group`.
Parameters:
group (pytables group): the group to be printed.
for name, value in group._v_children.items():
if isinstance(value, tables.Group):
content = '(Group)'
else:
content = value.read()
print(name)
print(' Content: %s' % content)
print(' Description: %s\n' % value._v_title.decode())
filename = '../data/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5'
h5file = tables.open_file(filename)
print_children(h5file.root)
print_children(h5file.root.sample)
photon_data = h5file.root.photon_data
photon_data.measurement_specs.measurement_type.read().decode()
timestamps = photon_data.timestamps.read()
timestamps_unit = photon_data.timestamps_specs.timestamps_unit.read()
detectors = photon_data.detectors.read()
print('Number of photons: %d' % timestamps.size)
print('Timestamps unit: %.2e seconds' % timestamps_unit)
print('Detectors: %s' % np.unique(detectors))
h5file.root.setup.excitation_wavelengths.read()
donor_ch = photon_data.measurement_specs.detectors_specs.spectral_ch1.read()
acceptor_ch = photon_data.measurement_specs.detectors_specs.spectral_ch2.read()
print('Donor CH: %d Acceptor CH: %d' % (donor_ch, acceptor_ch))
alex_period = photon_data.measurement_specs.alex_period.read()
offset = photon_data.measurement_specs.alex_offset.read()
donor_period = photon_data.measurement_specs.alex_excitation_period1.read()
acceptor_period = photon_data.measurement_specs.alex_excitation_period2.read()
print('ALEX period: %d \nOffset: %4d \nDonor period: %s \nAcceptor period: %s' % \
(alex_period, offset, donor_period, acceptor_period))
timestamps_donor = timestamps[detectors == donor_ch]
timestamps_acceptor = timestamps[detectors == acceptor_ch]
fig, ax = plt.subplots()
ax.hist((timestamps_acceptor - offset) % alex_period, bins=100, alpha=0.8, color='red', label='acceptor')
ax.hist((timestamps_donor - offset) % alex_period, bins=100, alpha=0.8, color='green', label='donor')
ax.axvspan(donor_period[0], donor_period[1], alpha=0.3, color='green')
ax.axvspan(acceptor_period[0], acceptor_period[1], alpha=0.3, color='red')
ax.set_xlabel('(timestamps - offset) MOD alex_period')
ax.set_title('ALEX histogram')
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), frameon=False);
timestamps_mod = (timestamps - offset) % alex_period
donor_excitation = (timestamps_mod < donor_period[1])*(timestamps_mod > donor_period[0])
acceptor_excitation = (timestamps_mod < acceptor_period[1])*(timestamps_mod > acceptor_period[0])
timestamps_Dex = timestamps[donor_excitation]
timestamps_Aex = timestamps[acceptor_excitation]
fig, ax = plt.subplots()
ax.hist((timestamps_Dex - offset) % alex_period, bins=np.arange(0, alex_period, 40), alpha=0.8, color='green', label='D_ex')
ax.hist((timestamps_Aex - offset) % alex_period, bins=np.arange(0, alex_period, 40), alpha=0.8, color='red', label='A_ex')
ax.set_xlabel('(timestamps - offset) MOD alex_period')
ax.set_title('ALEX histogram (selected periods only)')
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), frameon=False);
#plt.close('all')
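An illustrative sketch (not part of the original notebook): the group/leaf layout read above can be mimicked by writing a tiny HDF5 file with PyTables and reading it back with the same attribute-style access; the `demo.hdf5` name and contents are made up.

```python
import os
import tempfile

import numpy as np
import tables

path = os.path.join(tempfile.mkdtemp(), 'demo.hdf5')

# Write: one group under root, one array leaf with a title (description).
with tables.open_file(path, mode='w', title='Demo file') as h5:
    grp = h5.create_group('/', 'photon_data', title='Photon data group')
    h5.create_array(grp, 'timestamps', np.arange(10, dtype='int64'),
                    title='Raw detection times')

# Read back via attribute-style navigation, as done with the real file.
with tables.open_file(path) as h5:
    ts = h5.root.photon_data.timestamps.read()

print(ts.sum())  # 45
```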
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 1. Utility functions
Step3: 2. Open the data file
Step4: We can open the file as a normal HDF5 file
Step5: The object h5file is a pytables file reference. The root group is accessed with h5file.root.
Step6: We see the typical Photon-HDF5 structure. In particular, the field description provides a short description of the measurement, and acquisition_duration indicates that the acquisition lasted 600 seconds.
Step7: Let's define a shortcut to the photon_data group to save some typing later
Step8: 4. Reading the data
Step9: OK, that's what we expect.
Step10: We may want to check the excitation wavelengths used in the measurement. This information is found in the setup group
Step11: Now, let's load the definitions of donor/acceptor channel and excitation periods
Step12: These numbers define the donor and acceptor alternation periods as shown below
Step13: Now that the data has been loaded we can plot an alternation histogram using matplotlib
Step14: 6. Timestamps in different excitation periods
|
7,085
|
<ASSISTANT_TASK:>
Python Code:
def metropolis_hastings(f, q, initial_state, num_iters):
Generate a Markov Chain Monte Carlo using
the Metropolis-Hastings algorithm.
Parameters
----------
f : function
the [relative] likelood function for
the distribution we would like to
approximate
q : function
The conditional distribution to be
sampled from (given an X_i, sample
from q(X_i) to get potential X_i+1)
initial_state : type accepted by f,q
The initial state. This state will
not be included as part of the
Markov Chain.
num_iters : int or float
the number of desired iterations;
float is accepted to allow
1e5-style values
Returns
-------
out : Python Array
Array where out[i] = X_{i+1} because
X_0 (the initial state) is not included
MC = []
X_i = initial_state
for i in range(int(num_iters)):
Y = q(X_i)
r = min(f(Y)/f(X_i), 1)
a = np.random.uniform()
if a < r:
X_i = Y
MC.append(X_i)
return MC
def cauchy_dist(x):
return 1/(np.pi*(1 + x**2))
def q(scale):
return lambda x: np.random.normal(loc=x, scale=scale)
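A quick sanity check (illustrative, not from the original notebook): running Metropolis-Hastings with a Gaussian random-walk proposal against a standard-normal target should recover sample moments near mean 0 and standard deviation 1. The compact `mh` helper, seed, and chain length below are made up for the check.

```python
import numpy as np

rng = np.random.RandomState(42)

def mh(f, proposal, x0, n):
    # Minimal Metropolis-Hastings with a symmetric proposal.
    chain, x = [], x0
    for _ in range(n):
        y = proposal(x)
        if rng.uniform() < min(1.0, f(y) / f(x)):
            x = y
        chain.append(x)
    return np.array(chain)

target = lambda x: np.exp(-x ** 2 / 2)       # proportional to N(0, 1) pdf
prop = lambda x: x + rng.normal(scale=1.0)   # random-walk proposal

samples = mh(target, prop, 0.0, 20000)[5000:]  # discard burn-in
print(samples.mean(), samples.std())
```

The printed moments should land close to 0 and 1; the unnormalized target is enough because the acceptance ratio only uses f(y)/f(x).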
from scipy.stats import cauchy
CauchyInterval = np.linspace(cauchy.ppf(0.01),
cauchy.ppf(0.99),
100);
plt.figure(1);
plt.subplot(3,1,1);
plt.title("STD = 0.1")
std01 = metropolis_hastings(cauchy_dist, q(0.1), 0, 1000)
tmpHist = plt.hist(std01, bins=20, normed=True);
tmpLnSp = np.linspace(min(tmpHist[1]),
max(tmpHist[1]),100)
plt.plot(tmpLnSp, cauchy.pdf(tmpLnSp), 'r-')
plt.subplot(3,1,2)
plt.title("STD = 1")
std1 = metropolis_hastings(cauchy_dist, q(1), 0, 1000)
tmpHist = plt.hist(std1, bins=20, normed=True);
tmpLnSp = np.linspace(min(tmpHist[1]),
max(tmpHist[1]),100)
plt.plot(tmpLnSp, cauchy.pdf(tmpLnSp), 'r-')
plt.subplot(3,1,3)
plt.title("STD = 10")
std10 = metropolis_hastings(cauchy_dist, q(10), 0, 1000)
tmpHist = plt.hist(std10, bins=20, normed=True);
tmpLnSp = np.linspace(min(tmpHist[1]),
max(tmpHist[1]),100)
plt.plot(tmpLnSp, cauchy.pdf(tmpLnSp), 'r-')
plt.tight_layout()
plb.savefig("tmp/18-MCMC-Cauchy-Estimation.png")
plb.savefig("../../../pics/2017/04/18-MCMC-Cauchy-Estimation.png");
plt.show()
plt.figure(2)
plt.subplot(3,1,1)
plt.title("STD = 0.1")
plt.plot(std01)
plt.subplot(3,1,2)
plt.title("STD = 1")
plt.plot(std1)
plt.subplot(3,1,3)
plt.title("STD = 10")
plt.plot(std10)
plt.tight_layout()
plb.savefig("tmp/18-MCMC-Cauchy-Estimation_TS.png")
plb.savefig("../../../pics/2017/04/18-MCMC-Cauchy-Estimation_TS.png")
plt.show()
def psu_mcmc(X, q, numIters=10000):
theta, lambd, k, b1, b2 = 1, 1, 20, 1, 1
thetas, lambds, ks, b1s, b2s = [], [], [], [], []
n = len(X)
def f_k(theta, lambd, k, b1, b2):
if 0 <= k and k <= n:
return theta**sum(X[:k])*lambd**sum(X[k:])*np.exp(-k*theta-(n-k)*lambd)
elif k < 0:
return lambd**sum(X)*np.exp(-n*lambd)
elif k > n:
return theta**sum(X)*np.exp(-n*theta)
def f_t(theta, k, b1):
return theta**(sum(X[:k])+0.5)*np.exp(-theta*(k+1.0)/b1)
def f_l(lambd, k, b2):
return lambd**(sum(X[k:])+0.5)*np.exp(-lambd*((n-k)+1.0)/b2)
def f_b(b, par):
return np.exp(-(1 + par) / b) / (b*np.sqrt(b))
for i in range(numIters):
tmp = q(theta)
if tmp < np.infty:
r = min(1, f_t(tmp,k,b1)/f_t(theta,k,b1))
if np.random.uniform(0,1) < r:
theta = tmp
tmp = q(lambd)
if tmp < np.infty:
r = min(1, f_l(tmp,k,b2)/f_l(lambd,k,b2))
if np.random.uniform(0,1) < r:
lambd = tmp
tmp = q(b1)
if tmp < np.infty:
r = min(1, f_b(tmp, theta)/f_b(b1, theta))
if np.random.uniform(0,1) < r:
b1 = tmp
tmp = q(b2)
if tmp < np.infty:
r = min(1, f_b(tmp, lambd)/f_b(b2, lambd))
if np.random.uniform(0,1) < r:
b2 = tmp
tmp = q(k)
if tmp < np.infty:
r = min(1, f_k(theta, lambd, tmp, b1, b2) /
f_k(theta, lambd, k, b1,b2))
if np.random.uniform(0,1) < r:
k = tmp
thetas.append(theta)
lambds.append(lambd)
b1s.append(b1)
b2s.append(b2)
ks.append(k)
return np.array([thetas,lambds,ks,b1s,b2s])
%%bash
if [ ! -f tmp/psu_data.tsv ]
then
wget http://sites.stat.psu.edu/~mharan/MCMCtut/COUP551_rates.dat -O tmp/psu_data.tsv
fi
psu_data = []
with open("tmp/psu_data.tsv", "r") as f:
title = f.readline()
for line in f:
tmpArr = [x.strip() for x in line.split(" ")]
psu_data.append([int(x) for x in tmpArr if x != ""][1])
psu_data = np.array(psu_data)
psu_data
mcmc2 = psu_mcmc(psu_data, q(1), 1000)
plt.figure()
plt.subplot(2,1,1)
plt.hist(mcmc2[2] % len(psu_data), normed=True)
plt.subplot(2,1,2)
plt.plot(mcmc2[2])
plt.show();
fig = plt.figure()
fig.suptitle("MCMC values for Change Point")
plt.subplot(2,1,1)
plt.hist(mcmc2[2] % len(psu_data), normed=True)
plt.subplot(2,1,2)
plt.plot(mcmc2[2])
plb.savefig("tmp/psu_graphs1.png")
plt.show()
plt.plot(psu_data)
plt.title("PSU Data")
plb.savefig("tmp/psu_ts.png")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: $$\text{???} = (MC)^2$$
Step2: Estimation
Step3: Another Aspect
Step4:
|
7,086
|
<ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -u -v -d -p matplotlib,numpy
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
# Generating a Gaussion dataset:
# creating random vectors from the multivariate normal distribution
# given mean and covariance
mu_vec1 = np.array([0,0])
cov_mat1 = np.array([[2,0],[0,2]])
x1_samples = np.random.multivariate_normal(mu_vec1, cov_mat1, 100)
x2_samples = np.random.multivariate_normal(mu_vec1+0.2, cov_mat1+0.2, 100)
x3_samples = np.random.multivariate_normal(mu_vec1+0.4, cov_mat1+0.4, 100)
# x1_samples.shape -> (100, 2), 100 rows, 2 columns
plt.figure(figsize=(8,6))
plt.scatter(x1_samples[:,0], x1_samples[:,1], marker='x',
color='blue', alpha=0.7, label='x1 samples')
plt.scatter(x2_samples[:,0], x2_samples[:,1], marker='o',
color='green', alpha=0.7, label='x2 samples')
plt.scatter(x3_samples[:,0], x3_samples[:,1], marker='^',
color='red', alpha=0.7, label='x3 samples')
plt.title('Basic scatter plot')
plt.ylabel('variable Y')
plt.xlabel('variable X')
plt.legend(loc='upper right')
plt.show()
import matplotlib.pyplot as plt
x_coords = [0.13, 0.22, 0.39, 0.59, 0.68, 0.74, 0.93]
y_coords = [0.75, 0.34, 0.44, 0.52, 0.80, 0.25, 0.55]
fig = plt.figure(figsize=(8,5))
plt.scatter(x_coords, y_coords, marker='s', s=50)
for x, y in zip(x_coords, y_coords):
plt.annotate(
'(%s, %s)' %(x, y),
xy=(x, y),
xytext=(0, -10),
textcoords='offset points',
ha='center',
va='top')
plt.xlim([0,1])
plt.ylim([0,1])
plt.show()
# 2-category classification with random 2D-sample data
# from a multivariate normal distribution
import numpy as np
from matplotlib import pyplot as plt
def decision_boundary(x_1):
Calculates the x_2 value for plotting the decision boundary.
return 4 - np.sqrt(-x_1**2 + 4*x_1 + 6 + np.log(16))
# Generating a Gaussion dataset:
# creating random vectors from the multivariate normal distribution
# given mean and covariance
mu_vec1 = np.array([0,0])
cov_mat1 = np.array([[2,0],[0,2]])
x1_samples = np.random.multivariate_normal(mu_vec1, cov_mat1, 100)
mu_vec1 = mu_vec1.reshape(1,2).T # to 1-col vector
mu_vec2 = np.array([1,2])
cov_mat2 = np.array([[1,0],[0,1]])
x2_samples = np.random.multivariate_normal(mu_vec2, cov_mat2, 100)
mu_vec2 = mu_vec2.reshape(1,2).T # to 1-col vector
# Main scatter plot and plot annotation
f, ax = plt.subplots(figsize=(7, 7))
ax.scatter(x1_samples[:,0], x1_samples[:,1], marker='o', color='green', s=40, alpha=0.5)
ax.scatter(x2_samples[:,0], x2_samples[:,1], marker='^', color='blue', s=40, alpha=0.5)
plt.legend(['Class1 (w1)', 'Class2 (w2)'], loc='upper right')
plt.title('Densities of 2 classes with 100 bivariate random patterns each')
plt.ylabel('x2')
plt.xlabel('x1')
ftext = 'p(x|w1) ~ N(mu1=(0,0)^t, cov1=I)\np(x|w2) ~ N(mu2=(1,2)^t, cov2=I)'
plt.figtext(.15,.8, ftext, fontsize=11, ha='left')
# Adding decision boundary to plot
x_1 = np.arange(-5, 5, 0.1)
bound = decision_boundary(x_1)
plt.plot(x_1, bound, 'r--', lw=3)
x_vec = np.linspace(*ax.get_xlim())
x_1 = np.arange(0, 100, 0.05)
plt.show()
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(8,6))
# Generating a Gaussion dataset:
# creating random vectors from the multivariate normal distribution
# given mean and covariance
mu_vec1 = np.array([0,0])
cov_mat1 = np.array([[1,0],[0,1]])
X = np.random.multivariate_normal(mu_vec1, cov_mat1, 500)
R = X**2
R_sum = R.sum(axis=1)
plt.scatter(X[:, 0], X[:, 1],
color='gray',
marker='o',
s=32. * R_sum,
edgecolor='black',
alpha=0.5)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <font size="1.5em">More info about the %watermark extension</font>
Step2: Scatter plots in matplotlib
Step3: <br>
Step5: <br>
Step6: <br>
|
7,087
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-cm2-sr5', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
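The `DOC.set_id(...)` / `DOC.set_value(...)` stanzas above all follow one pattern: first select a property by its id, then record one or more values against it. A minimal sketch of that pattern is below, using a `MockDoc` class invented here purely for illustration; the real `DOC` object comes from the ES-DOC notebook tooling and additionally validates each value against the listed choices.

```python
# Minimal sketch of the set_id / set_value pattern used in the stanzas above.
# MockDoc is a stand-in written for illustration only; the real ES-DOC object
# also validates values against the controlled vocabulary.
class MockDoc:
    def __init__(self):
        self._current_id = None
        self.properties = {}

    def set_id(self, property_id):
        # Select which property the next set_value call will populate.
        self._current_id = property_id

    def set_value(self, value):
        # Record the value against the currently selected property id.
        self.properties.setdefault(self._current_id, []).append(value)

doc = MockDoc()
doc.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
doc.set_value("Lat-lon")
print(doc.properties)
```

Properties that accept multiple values (marked "PROPERTY VALUE(S)") simply call `set_value` repeatedly after a single `set_id`.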
7,088
|
<ASSISTANT_TASK:>
Python Code:
## Setup the path for our codebase
import sys
sys.path.append( '../code/' )
%matplotlib inline
import matplotlib.pyplot as plt
import neural_network.simple as simple
data = simple.generate_hill_data(100)
xs = map(lambda z: z[0], data)
ys = map(lambda z: z[1], data)
plt.plot( xs, ys )
nn = simple.SimpleScalarF_1Layer(hidden_layer_size=3)
simple.plot_fits( nn, data, train_epochs=200 )
nn.visualize_inputs_to_final_layer(data)
nn = simple.SimpleScalarF_1Layer(hidden_layer_size=10)
simple.plot_fits( nn, data, train_epochs=200 )
nn = simple.SimpleScalarF_1Layer(hidden_layer_size=10,learning_rate=0.001)
simple.plot_fits( nn, data, train_epochs=20 )
nn = simple.SimpleScalarF_1Layer(hidden_layer_size=10,learning_rate=0.001)
simple.plot_fits( nn, data, train_epochs=20 )
nn = simple.SimpleScalarF_1Layer(hidden_layer_size=10,learning_rate=0.001)
simple.plot_fits( nn, data, train_epochs=2000 )
nn.visualize_inputs_to_final_layer(data)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example Data (centered Quadratic)
Step2: Simple Feed-Foward 1-Layer Neural Networks
Step3: Let's visualize the inputs to the final layer
Step4: hmm, maybe not enough hidden nodes to get the nonlinearity, try 10
Step5: Ok, it learned better and faster, let's see fewer training epochs per line
Step6: WTF? ... oh, right .... we are using a local optimizer so we get local optima
|
7,089
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import scipy.stats as stats
from sklearn import covariance
# Generate random values of x
X = np.random.normal(size = 1000)
epsilon = np.random.normal(0, 3, size = len(X))
Y = 5*X + epsilon
product = (X - np.mean(X))*(Y - np.mean(Y))
expected_value = np.mean(product)
print 'Value of the covariance between X and Y:', expected_value
np.cov([X, Y])
print np.var(X), np.var(Y)
# scatter plot of X and y
from statsmodels import regression
import statsmodels.api as sm
def linreg(X,Y):
# Running the linear regression
X = sm.add_constant(X)
model = regression.linear_model.OLS(Y, X).fit()
a = model.params[0]
b = model.params[1]
X = X[:, 1]
# Return summary of the regression and plot results
X2 = np.linspace(X.min(), X.max(), 100)
Y_hat = X2 * b + a
plt.scatter(X, Y, alpha=0.3) # Plot the raw data
plt.plot(X2, Y_hat, 'r', alpha=0.9); # Add the regression line, colored in red
plt.xlabel('X Value')
plt.ylabel('Y Value')
return model.summary()
linreg(X, Y)
plt.scatter(X, Y)
plt.title('Scatter plot and linear equation of x as a function of y')
plt.xlabel('X')
plt.ylabel('Y')
plt.legend(['Linear equation', 'Scatter Plot']);
# Four asset example of the covariance matrix.
start_date = '2016-01-01'
end_date = '2016-02-01'
returns = get_pricing(
['SBUX', 'AAPL', 'GS', 'GILD'],
start_date=start_date,
end_date=end_date,
fields='price'
).pct_change()[1:]
returns.columns = map(lambda x: x.symbol, returns.columns)
print 'Covariance matrix:'
print returns.cov()
# Getting the return data of assets.
start = '2016-01-01'
end = '2016-02-01'
symbols = ['AAPL', 'MSFT', 'BRK-A', 'GE', 'FDX', 'SBUX']
prices = get_pricing(symbols, start_date = start, end_date = end, fields = 'price')
prices.columns = map(lambda x: x.symbol, prices.columns)
returns = prices.pct_change()[1:]
returns.head()
in_sample_lw = covariance.ledoit_wolf(returns)[0]
print in_sample_lw
oos_start = '2016-02-01'
oos_end = '2016-03-01'
oos_prices = get_pricing(symbols, start_date = oos_start, end_date = oos_end, fields = 'price')
oos_prices.columns = map(lambda x: x.symbol, oos_prices.columns)
oos_returns = oos_prices.pct_change()[1:]
out_sample_lw = covariance.ledoit_wolf(oos_returns)[0]
lw_errors = sum(abs(np.subtract(in_sample_lw, out_sample_lw)))
print "Average Ledoit-Wolf error: ", np.mean(lw_errors)
sample_errors = sum(abs(np.subtract(returns.cov().values, oos_returns.cov().values)))
print 'Average sample covariance error: ', np.mean(sample_errors)
print 'Error improvement of LW over sample: {0:.2f}%'.format((np.mean(sample_errors/lw_errors)-1)*100)
sns.boxplot(
data = pd.DataFrame({
'Sample Covariance Error': sample_errors,
'Ledoit-Wolf Error': lw_errors
})
)
plt.title('Box Plot of Errors')
plt.ylabel('Error');
start_date = '2016-01-01'
end_date = '2017-06-01'
symbols = [
'SPY', 'XLF', 'XLE', 'XLU','XLK', 'XLI', 'XLB', 'GE', 'GS', 'BRK-A', 'JPM', 'AAPL', 'MMM', 'BA',
'CSCO','KO', 'DIS','DD', 'XOM', 'INTC', 'IBM', 'NKE', 'MSFT', 'PG', 'UTX', 'HD', 'MCD', 'CVX',
'AXP','JNJ', 'MRK', 'CAT', 'PFE', 'TRV', 'UNH', 'WMT', 'VZ', 'QQQ', 'BAC', 'F', 'C', 'CMCSA',
'MS', 'ORCL', 'PEP', 'HON', 'GILD', 'LMT', 'UPS', 'HP', 'FDX', 'GD', 'SBUX'
]
prices = get_pricing(symbols, start_date=start_date, end_date=end_date, fields='price')
prices.columns = map(lambda x: x.symbol, prices.columns)
returns = prices.pct_change()[1:]
dates = returns.resample('M').first().index
sample_covs = []
lw_covs = []
for i in range(1, len(dates)):
sample_cov = returns[dates[i-1]:dates[i]].cov().values
sample_covs.append(sample_cov)
lw_cov = covariance.ledoit_wolf(returns[dates[i-1]:dates[i]])[0]
lw_covs.append(lw_cov)
lw_diffs = []
for pair in zip(lw_covs[:-1], lw_covs[1:]):
diff = np.mean(np.sum(np.abs(pair[0] - pair[1])))
lw_diffs.append(diff)
sample_diffs = []
for pair in zip(sample_covs[:-1], sample_covs[1:]):
diff = np.mean(np.sum(np.abs(pair[0] - pair[1])))
sample_diffs.append(diff)
plt.plot(dates[2:], lw_diffs)
plt.plot(dates[2:], sample_diffs)
plt.xlabel('Time')
plt.ylabel('Mean Error')
plt.legend(['Ledoit-Wolf Errors', 'Sample Covariance Errors']);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's take the covariance of two closely related variables, $X$ and $Y$. Say that $X$ is some randomly drawn set and that $Y = 5X + \epsilon$, where $\epsilon$ is some extra noise. We can compute the covariance using the formula above to get a clearer picture of how $X$ evolves with respect to asset $Y$.
Step2: We can also compute the covariance between $X$ and $Y$ with a single function.
Step3: This gives us the covariance matrix between $X$ and $Y$. The diagonals are their respective variances and the indices $(i, j)$ refer to the covariance between assets indexed $i$ and $j$.
Step4: In this case, we only have two assets so we only have indices $(0, 1)$ and $(1, 0)$. Covariance matrices are symmetric, since $COV(X, Y) = COV(Y, X)$, which is why the off-diagonals mirror each other.
Step5: Between the covariance, the linear regression, and our knowledge of how $X$ and $Y$ are related, we can easily assess the relationship between our toy variables. With real data, there are two main complicating factors. The first is that we are examining significantly more relationships. The second is that we do not know any of their underlying relationships. These hindrances speak to the benefit of having accurate estimates of covariance matrices.
Step6: Why does all this matter?
Step7: Here we calculate the in-sample Ledoit-Wolf estimator.
Step8: Calculating Errors
Step9: Comparing to Sample Matrix
Step10: We can see that the improvement of Ledoit-Wolf over the sample covariance is pretty solid. This translates into decreased volatility and turnover rate in our portfolio, and thus increased returns when using the shrunk covariance matrix.
Step11: Adding More Assets
Step12: Here we calculate our different covariance estimates.
Step13: Here we calculate the error for each time period.
Step14: And here we plot the errors over time!
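The toy relationship described in Step1 ($Y = 5X + \epsilon$) can be checked directly with numpy. The sketch below uses a made-up sample size and noise scale; it compares the covariance computed from the definition against numpy's covariance matrix, and also illustrates the symmetry noted in Step4.

```python
import numpy as np

# Toy setup from the description: X random, Y = 5X + noise.
# Sample size and noise scale are made up for illustration.
rng = np.random.default_rng(0)
X = rng.standard_normal(1000)
Y = 5 * X + 0.5 * rng.standard_normal(1000)

# Covariance straight from the definition E[(X - E[X])(Y - E[Y])]
manual_cov = np.mean((X - X.mean()) * (Y - Y.mean()))

# Same quantity via numpy; entry (0, 1) is COV(X, Y), and the
# matrix is symmetric since COV(X, Y) = COV(Y, X)
cov_matrix = np.cov(X, Y, ddof=0)
```

With `ddof=0` numpy uses the same population normalization as the manual formula, so the two values agree up to floating-point error.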
|
7,090
|
<ASSISTANT_TASK:>
Python Code:
from SPARQLWrapper import SPARQLWrapper, JSON
# Specify the DBPedia endpoint
sparql = SPARQLWrapper("http://dbpedia.org/sparql")
# Query for the description of "Capsaicin", filtered by language
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?comment
    WHERE { <http://dbpedia.org/resource/Capsaicin> rdfs:comment ?comment
    FILTER (LANG(?comment)='en')
    }
""")
# Convert results to JSON format
sparql.setReturnFormat(JSON)
result = sparql.query().convert()
# The return data contains "bindings" (a list of dictionaries)
for hit in result["results"]["bindings"]:
# We want the "value" attribute of the "comment" field
print(hit["comment"]["value"])
sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
# Below we SELECT both the hot sauce items & their labels
# in the WHERE clause we specify that we want labels as well as items
sparql.setQuery("""
    SELECT ?item ?itemLabel
    WHERE {
        ?item wdt:P279 wd:Q522171.
        SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
    }
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
import pandas as pd
results_df = pd.io.json.json_normalize(results['results']['bindings'])
results_df[['item.value', 'itemLabel.value']]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: SPARQL from Python
Step4: Querying Wikidata
Step5: Let's use pandas to review the results as a dataframe
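The `json_normalize` call above flattens the nested SPARQL bindings; the same flattening can be done by hand, which makes the binding structure explicit. A minimal sketch with made-up entity IDs and labels (not real Wikidata results):

```python
# Hypothetical SPARQL JSON result shaped like SPARQLWrapper's .convert() output;
# the entity IDs and labels below are made up for illustration.
results = {
    "results": {
        "bindings": [
            {"item": {"type": "uri", "value": "http://www.wikidata.org/entity/Q1"},
             "itemLabel": {"type": "literal", "value": "hot sauce A"}},
            {"item": {"type": "uri", "value": "http://www.wikidata.org/entity/Q2"},
             "itemLabel": {"type": "literal", "value": "hot sauce B"}},
        ]
    }
}

# Each binding maps a variable name to a dict carrying a 'value' key;
# keeping only the values gives flat rows that pandas can consume directly.
rows = [{var: cell["value"] for var, cell in binding.items()}
        for binding in results["results"]["bindings"]]
```

Passing `rows` to `pandas.DataFrame(rows)` would give the same table as the `json_normalize` approach, with plain column names instead of `item.value`-style ones.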
|
7,091
|
<ASSISTANT_TASK:>
Python Code:
from atmPy.instruments.DMA import smps
from atmPy.instruments.DMA import dma
from matplotlib import colors
import matplotlib.pyplot as plt
from numpy import meshgrid
import numpy as np
import pandas as pd
from matplotlib.dates import date2num
from matplotlib import dates
from atmPy import sizedistribution as sd
%matplotlib inline
hagis = smps.SMPS(dma.NoaaWide(),scan_folder="C:/Users/mrichardson/Documents/HAGIS/SMPS/Scans")
hagis.getLag(10, delta=10)
hagis.lag = 10
hagis.proc_files()
hagis.date
index = []
for i,e in enumerate(hagis.date):
if e is None:
index.append(i)
print(index)
if index:
hagis.date = np.delete(hagis.date, index)
hagis.dn_interp = np.delete(hagis.dn_interp,index, axis=0)
xfmt = dates.DateFormatter('%m/%d %H:%M')
xi = date2num(hagis.date)
XI, YI = meshgrid(xi, hagis.diam_interp)
#XI = dates.datetime.datetime.fromtimestamp(XI)
Z = hagis.dn_interp.transpose()
Z[np.where(Z <= 0)] = np.nan
pmax = 1e6 # 10**np.ceil(np.log10(np.amax(Z[np.where(Z > 0)])))
pmin = 1 #10**np.floor(np.log10(np.amin(Z[np.where(Z > 0)])))
fig, ax = plt.subplots()
pc = ax.pcolor(XI, YI, Z, cmap=plt.cm.jet, norm=colors.LogNorm(pmin, pmax, clip=False), alpha=0.8)
plt.colorbar(pc)
plt.yscale('log')
plt.ylim(5, 1000)
ax.xaxis.set_major_formatter(xfmt)
fig.autofmt_xdate()
fig.tight_layout()
dataframe = pd.DataFrame(hagis.dn_interp)
dataframe.index = hagis.date
binedges = (hagis.diam_interp[1:]+hagis.diam_interp[:-1])/2
first = hagis.diam_interp[0] -(binedges[0]-hagis.diam_interp[0])
last = hagis.diam_interp[-1]+ (hagis.diam_interp[-1]-binedges[-1])
binedges = np.append([first],binedges)
binedges=np.append(binedges,[last])
sizeDistr = sd.SizeDist_TS(dataframe,binedges, 'dNdlogDp')
f,a,b,c = sizeDistr.plot(vmax = pmax, vmin = pmin, norm='log', showMinorTickLabels=False, cmap=plt.cm.jet)
a.set_ylim((5,1000))
sfSD = sizeDistr.convert2dSdlogDp()
from imp import reload
reload(sd)
f,a,b,c = sfSD.plot(vmax = 1e10, vmin = 1e4, norm='log', showMinorTickLabels=False,removeTickLabels=['200','300','400',] ,cmap =plt.cm.jet)
a.set_ylim((5,1000))
avgAt = sizeDistr.average_overAllTime()
f,a = avgAt.plot(norm='log')
# a.set_yscale('log')
avgAtS = sfSD.average_overAllTime()
f,a= avgAtS.plot(norm='log')
a.set_yscale('log')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The first thing we do in the analysis is we create a new SMPS object with the DMA instance we wish to use. Here, we also set the initial directory to search for SMPS data. When a new SMPS object is created, an open file dialog window will be produced and the user may select one or many files to analyze. The file names will be stored in the SMPS attribute files.
Step2: Determining the Lag
Step3: Processing the Files
Step4: Charge Correction
Step5: Use of the SizeDistr Object
Step6: In addition, we will need to convert the bin centers produced by the SMPS object to bin edges. To do this, we will make the simple assumption that the bin edges are just the halfway points between the centers. For the edge cases, we will simply take the difference between the smallest bin center and the halfway point between the first and second bin centers and subtract this value from the smallest diameter. Similarly, for the largest diameter, we will take the difference between the halfway point between the largest and second largest bin centers and the largest bin center and add it to the largest bin center.
Step7: Once we have the corresponding SizeDistr object, we can take the current distribution, which is in $dN/d\log D_p$ space, and convert it to a surface area distribution in log space. This will produce a new object that we will call sfSD.
Step8: To get an overall view, we can further manipulate the data to produce average distributions from the entire time series.
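The bin-edge construction described in Step6 (midpoints between neighbouring centers, with the outer edges mirrored outward by the adjacent half-width) can be sketched in a few lines of numpy. The diameters below are made up for illustration:

```python
import numpy as np

# Made-up bin centers (e.g. diameters in nm)
centers = np.array([10.0, 20.0, 40.0, 80.0])

inner = (centers[1:] + centers[:-1]) / 2        # midpoints between centers
first = centers[0] - (inner[0] - centers[0])    # mirror first half-width downward
last = centers[-1] + (centers[-1] - inner[-1])  # mirror last half-width upward
edges = np.concatenate(([first], inner, [last]))
```

This yields one more edge than there are centers, with edges strictly increasing, which is what a histogram-style size distribution expects.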
|
7,092
|
<ASSISTANT_TASK:>
Python Code:
import sqlite3
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import CGATPipelines.Pipeline as P
import os
import statistics
import collections
#load R and the R packages required
# use these functions to display tables nicely as html
from IPython.display import display, HTML
plt.style.use('ggplot')
%load_ext rpy2.ipython
%R require(ggplot2)
!pwd
!date
database_path = '../csvdb'
output_path = '.'
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
def getTableNamesFromDB(database_path):
# Create a SQL connection to our SQLite database
con = sqlite3.connect(database_path)
cur = con.cursor()
# the result of a "cursor.execute" can be iterated over by row
cur.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name;")
available_tables = (cur.fetchall())
#Be sure to close the connection.
con.close()
return available_tables
db_tables = getTableNamesFromDB(database_path)
print('Tables contained by the database:')
for x in db_tables:
print('\t\t%s' % x[0])
#This function retrieves a table from sql database and indexes it with track name
def getTableFromDB(statement,database_path):
'''gets table from sql database depending on statement
and set track as index if contains track in column names'''
conn = sqlite3.connect(database_path)
df = pd.read_sql_query(statement,conn)
if 'track' in df.columns:
df.index = df['track']
return df
#get table of bam file size
filtering_stats = getTableFromDB('select * from post_filtering_read_counts;',database_path)
filtering_stats.index = filtering_stats['Input_Bam']
filtering_stats.drop('Input_Bam',1,inplace=True)
#sort dataframe by values in rows to get order filters were applied
#this is based on the number of reads in each row
new_cols = filtering_stats.columns[filtering_stats.loc[filtering_stats.last_valid_index()].argsort()]
filtering_stats = filtering_stats[new_cols[::-1]]
#get number of reads in the bams before and after filtering - smallest_col = last filtering step applied
smallest_col = filtering_stats.idxmin(axis=1)[1]
#plot bar graph of pre vs post filtering sizes
ax = filtering_stats[['pre_filtering',smallest_col]].divide(1000000).plot.bar()
ax.set_ylabel('Million Reads (not pairs)')
ax.legend(['pre_filtering','post_filtering'], loc=2,bbox_to_anchor=(1.05, 1),borderaxespad=0. )
ax.set_title('number of reads (not pairs) pre and post filtering')
#get the order of filters applied
def get_filter_order(dataframe):
'''function to print out the order of filters in dataframe'''
print('order of filters applied to bam file:')
for x in list(dataframe):
if x != 'pre_filtering':
print ('\t%s' % x)
return list(dataframe)
filter_order = get_filter_order(filtering_stats)
print('Table of number of reads remaining at each state of filtering ')
display(filtering_stats.T)
#plot how the reads have been filtered
ax = filtering_stats.T.divide(1000000).plot(rot=90)
ax.set_xlabel('filters')
ax.set_ylabel('million reads (not pairs)')
ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
ax.set_title('number of reads remaining at\neach stage of filtering')
filtered_df = filtering_stats.copy()
filtered_df = filtered_df.divide(1000000)
filtered_df['Name'] = filtered_df.index
%%R -i filtered_df -w 600 -h 600 -u px
library("reshape2")
filtered_df$Name <- factor(filtered_df$Name)
df.m = melt(filtered_df)
cbPalette <- c("#999999", "#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7")
## pink #CC79A7 # orange #D55E00 # 0072B2 blue # yellow #F0E442 # green #009E73 # light blue
g = ggplot(data=df.m, aes(factor(Name), y=value,fill=variable)) + labs(title="Number of individual reads remaining after each filtering step") + geom_bar(stat="identity",position="dodge", width=0.7) + scale_fill_manual(values=cbPalette) + theme_bw()
g + scale_y_continuous(name="million reads remaining \n (individual reads not pairs)") + theme(plot.title=element_text(size=16, hjust=0.5, face='bold'), legend.position="top",axis.text=element_text(size=15,face='bold'),axis.text.x=element_text(size=15,face='bold',angle=90),axis.title.x=element_blank(),axis.title.y=element_text(size=10,face='bold'))
#Make percentage of reads dataframe
percentage_filtered_df = filtering_stats.copy()
percentage_filtered_df = percentage_filtered_df.div(percentage_filtered_df.pre_filtering, axis='index')*100
percentage_filtered_df = percentage_filtered_df.round(3)
percentage_filtered_df['Name']= percentage_filtered_df.index
print('Table showing the percentage of reads remaining at each filtering step')
percentage_filtered_df.T
%%R -i percentage_filtered_df -w 600 -h 600 -u px
library("reshape2")
percentage_filtered_df$Name <- factor(percentage_filtered_df$Name)
df.m = melt(percentage_filtered_df)
cbPalette <- c("#999999", "#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7")
## pink #CC79A7 # orange #D55E00 # 0072B2 blue # yellow #F0E442 # green #009E73 # light blue
g = ggplot(data=df.m, aes(factor(Name), y=value,fill=variable)) + labs(title="Percentage of reads remaining after each filtering step") + geom_bar(stat="identity",position="dodge", width=0.7) + scale_fill_manual(values=cbPalette) + theme_bw()
g + scale_y_continuous(name="Percentage reads remaining") + theme(plot.title=element_text(size=16, hjust=0.5, face='bold'), legend.position="top",axis.text=element_text(size=15,face='bold'),axis.text.x=element_text(size=15,face='bold',angle=90),axis.title.x=element_blank(),axis.title.y=element_text(size=10,face='bold'))
#Get number of reads removed by each stage of filtering
order_of_filters = get_filter_order(filtering_stats)
df_reads_removed = pd.DataFrame(index=filtering_stats.index)
for loc in range(len(order_of_filters)):
filt = order_of_filters[loc]
if filt == 'pre_filtering':
df_reads_removed['total_reads'] = filtering_stats['pre_filtering']
else:
previous_filter_step = order_of_filters[loc-1]
        #print("calculating number removed by %s filtering step by doing number of reads in %s - number of reads in %s column \n" % (filt, previous_filter_step, filt))
df_reads_removed['removed_by_%s_filter' % filt] = filtering_stats[previous_filter_step] - filtering_stats[filt]
print('\n\nTable shown as million reads removed by each filter:')
display(df_reads_removed.T.divide(1000000))
#plot how the reads have been filtered
ax = df_reads_removed.divide(1000000).plot(rot=90)
ax.set_xlabel('filters')
ax.set_ylabel('million reads removed (not pairs)')
ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
ax.set_title('number of reads removed at each stage of filtering')
ax = df_reads_removed.T.divide(1000000).plot(rot=90)
ax.set_xlabel('filters')
ax.set_ylabel('million reads removed (not pairs)')
ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
ax.set_title('number of reads removed at each stage of filtering')
ax = df_reads_removed.T.divide(1000000).drop('total_reads').plot(rot=90,kind='bar')
ax.set_xlabel('filters')
ax.set_ylabel('million reads removed (not pairs)')
ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
ax.set_title('number of reads removed at each stage of filtering')
df_reads_removed_mills = df_reads_removed.divide(1000000)
df_reads_removed_mills['Name'] = df_reads_removed_mills.index
%%R -i df_reads_removed_mills -w 900 -h 800 -u px
library("reshape2")
df_reads_removed_mills$Name <- factor(df_reads_removed_mills$Name)
df.m = melt(df_reads_removed_mills)
cbPalette <- c("#999999", "#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7")
## pink #CC79A7 # orange #D55E00 # 0072B2 blue # yellow #F0E442 # green #009E73 # light blue
g = ggplot(data=df.m, aes(factor(Name), y=value,fill=variable)) + labs(title="Number of reads remaining after each filtering step") + geom_bar(stat="identity",position="dodge", width=0.7) + scale_fill_manual(values=cbPalette) + theme_bw()
g + scale_y_continuous(name="Number of reads filtered at each step") + theme(plot.title=element_text(size=16, hjust=0.5, face='bold'), legend.position='top',axis.text=element_text(size=15,face='bold'),axis.text.x=element_text(size=15,face='bold',angle=90),axis.title.x=element_blank(),axis.title.y=element_text(size=10,face='bold'))
#Get number of reads removed by each stage of filtering
percentage_filtered_df = percentage_filtered_df.drop('Name',axis=1)
order_of_filters = get_filter_order(percentage_filtered_df)
df_percentreads_removed = pd.DataFrame(index=percentage_filtered_df.index)
for loc in range(len(order_of_filters)):
filt = order_of_filters[loc]
if filt == 'pre_filtering':
df_percentreads_removed['total_reads'] = percentage_filtered_df['pre_filtering']
else:
previous_filter_step = order_of_filters[loc-1]
        #print("calculating number removed by %s filtering step by doing number of reads in %s - number of reads in %s column \n" % (filt, previous_filter_step, filt))
df_percentreads_removed['removed_by_%s_filter' % filt] = percentage_filtered_df[previous_filter_step] - percentage_filtered_df[filt]
print('\n\nTable shown as million reads removed by each filter:')
display(df_percentreads_removed.T)
#plot how the reads have been filtered
ax = df_percentreads_removed.plot(rot=90)
ax.set_xlabel('bam file')
ax.set_ylabel('percentage reads removed (not pairs)')
ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
ax.set_title('percentage of reads removed at each stage of filtering')
ax = df_percentreads_removed.T.plot(rot=90)
ax.set_xlabel('filters')
ax.set_ylabel('percentage reads removed (not pairs)')
ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
ax.set_title('percentage of reads removed at each stage of filtering')
ax = df_percentreads_removed.T.drop('total_reads').plot(rot=90,kind='bar')
ax.set_xlabel('filters')
ax.set_ylabel('percentage reads removed (not pairs)')
ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
ax.set_title('percentage of reads removed at each stage of filtering')
df_percentreads_removed['Name'] = df_percentreads_removed.index
%%R -i df_percentreads_removed -w 900 -h 800 -u px
library("reshape2")
df_percentreads_removed$Name <- factor(df_percentreads_removed$Name)
df.m = melt(df_percentreads_removed)
cbPalette <- c("#999999", "#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7")
## pink #CC79A7 # orange #D55E00 # 0072B2 blue # yellow #F0E442 # green #009E73 # light blue
g = ggplot(data=df.m, aes(factor(Name), y=value,fill=variable)) + labs(title="Percentage of reads remaining after each filtering step") + geom_bar(stat="identity",position="dodge", width=0.7) + scale_fill_manual(values=cbPalette) + theme_bw()
g + scale_y_continuous(name="Number of reads filtered at each step") + theme(plot.title=element_text(size=16, hjust=0.5, face='bold'), legend.position='top',axis.text=element_text(size=15,face='bold'),axis.text.x=element_text(size=15,face='bold',angle=90),axis.title.x=element_blank(),axis.title.y=element_text(size=10,face='bold'))
filtering_stats['NRF'] = filtering_stats.duplicates/filtering_stats.pre_filtering
filtering_stats
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is where and when the notebook was run
Step2: First let's set the output path for where we want our plots to be saved and the database path, and see what tables it contains
Step3: This code adds a button to see/hide code in html
Step4: The code below provides functions for accessing the project database and extracting table names so you can see what tables have been loaded into the database and are available for plotting. It also has a function for getting a table from the database and indexing the table with the track name
Step5: Number of reads per sample
Step6: This should give you a good idea of
Step7: Let's graph the number of reads that remain at each step for each bam file
Step8: Now let's look at the number of reads filtered at each step side by side - this uses R for plotting
Step9: Now have a look at the percentage of reads remaining at each stage of filtering
Step10: Now let's get the number of reads that are filtered out at each stage of the filtering by subtracting the number of reads at the filtering stage of interest from the number of reads in the stage prior to the filter of interest being applied
Step11: Now let's plot the number of reads remaining at each filtering step side by side
Step12: Now let's get the percentage of reads removed at each filtering step
Step13: Great, that's all the filtering stats done. By now you should have a good idea about
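The "reads removed per filter" computation described in Step10 boils down to differences between neighbouring columns once the filter stages are ordered left to right; a row-wise `diff` does the same job as the explicit subtraction loop. A minimal sketch with made-up read counts:

```python
import pandas as pd

# Made-up read counts per sample, columns ordered by filtering stage
counts = pd.DataFrame(
    {"pre_filtering": [100, 200],
     "secondary": [90, 180],
     "duplicates": [60, 150]},
    index=["sample_A", "sample_B"],
)

# diff between neighbouring columns; negate so removals are positive,
# and drop the first column (nothing was removed before the first stage)
removed = (-counts.diff(axis=1)).iloc[:, 1:]
```

This gives the same table as the loop over `order_of_filters` in the code above, one column per filtering step.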
|
7,093
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
import random
def enumerate_row(nb=10000, n=10):
for i in range(nb):
# on retourne un tuple, les données sont
# plus souvent recopiées car le type est immuable
yield tuple(random.random() for k in range(n))
# on retourne une liste, ces listes ne sont pas
# recopiées en général, seule la liste qui les tient
# l'est
# yield list(random.random() for k in range(n))
list(enumerate_row(2))
import pandas
nb, n = 10, 10
df = pandas.DataFrame(enumerate_row(nb=nb, n=n), columns=["c%d" % i for i in range(n)])
df.head()
nb, n =100000, 10
print(nb,n)
%timeit pandas.DataFrame(enumerate_row(nb=nb,n=n), columns=["c%d" % i for i in range(n)])
print(nb,n)
%timeit pandas.DataFrame(list(enumerate_row(nb=nb,n=n)), columns=["c%d" % i for i in range(n)])
def cache():
return list(enumerate_row(nb=nb,n=n))
print(nb,n)
%timeit cache()
print(nb,n)
l = list(enumerate_row(nb=nb,n=n))
%timeit pandas.DataFrame(l, columns=["c%d" % i for i in range(n)])
%timeit pandas.DataFrame(columns=["c%d" % i for i in range(n)], index=range(n))
def create_df3():
return pandas.DataFrame.from_records(enumerate_row(nb=nb,n=n),
columns=["c%d" % i for i in range(n)])
print(nb,n)
%timeit create_df3()
def enumerate_row2(nb=10000, n=10):
for i in range(nb):
for k in range(n):
yield random.random()
import numpy
nb,n = 100000, 10
# we specify the array size because it saves numpy from growing the array as it goes
print(nb,n)
m = numpy.fromiter(enumerate_row2(nb=nb,n=n), float, nb*n)
m.resize((nb,n))
m [:5,:]
def create_array():
m = numpy.fromiter(enumerate_row2(nb=nb,n=n), float, nb*n)
m.resize((nb,n))
return m
print(nb,n)
%timeit create_array()
def create_array2():
m = list(enumerate_row(nb=nb,n=n))
ml = numpy.array(m, float)
return ml
print(nb,n)
%timeit create_array2()
def create_array3():
m = numpy.fromiter(enumerate_row2(nb=nb,n=n), float)
m.resize((nb,n))
return m
print(nb,n)
%timeit create_array3()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating a dataframe from an iterator
Step2: We compare several constructions
Step3: Breaking it down
Step4: Based on these timings, pandas probably converts the iterator into a list. We try creating the dataframe empty, then with the from_records method.
Step5: Creating an array from an iterator
Step6: And what if we do not specify the size of the array created with the fromiter function
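The difference probed in Steps 5 and 6 is whether `numpy.fromiter` is given the element count up front: with `count`, numpy allocates the buffer once; without it, the array is grown while the iterator is consumed. A minimal sketch (values are illustrative):

```python
import numpy as np

# A small generator standing in for enumerate_row2
def gen(n):
    for i in range(n):
        yield i * 0.5

n = 6
a = np.fromiter(gen(n), dtype=float, count=n)   # size known up front
b = np.fromiter(gen(n), dtype=float)            # size discovered while reading
```

Both calls produce the same array; only the allocation strategy differs, which is what the timing cells above measure.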
|
7,094
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import numpy as np
# load MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
train_images = np.reshape(mnist.train.images, [-1, 28, 28, 1])
train_labels = mnist.train.labels
test_images = np.reshape(mnist.test.images, [-1, 28, 28, 1])
test_labels = mnist.test.labels
from BasicNets import BatchnormNet
def buildSectionNet(sm):
batch_size = 250
image, label = tf.train.slice_input_producer([train_images, train_labels])
image_batch, label_batch = tf.train.batch([image, label], batch_size=batch_size)
# flag for batch normalization layers
is_training = tf.placeholder(name='is_training', shape=[], dtype=tf.bool)
net = BatchnormNet(is_training, image_batch)
# first network section with initial convolution and three residual blocks
with sm.new_section():
with tf.variable_scope('initial_conv'):
stream = net.add_conv(net._inputs, n_filters=16)
stream = net.add_bn(stream)
stream = tf.nn.relu(stream)
with tf.variable_scope('scale0'):
for i in range(3):
with tf.variable_scope('block_%d' % i):
stream = net.res_block(stream)
# second network section strided convolution to decrease the input resolution
#with sm.new_section():
with tf.variable_scope('scale1'):
stream = net.res_block(stream, filters_factor=2, first_stride=2)
for i in range(2):
with tf.variable_scope('block_%d' % i):
stream = net.res_block(stream)
# third network section
with sm.new_section():
with tf.variable_scope('scale2'):
stream = net.res_block(stream, filters_factor=2, first_stride=2)
for i in range(4):
with tf.variable_scope('block_%d' % i):
stream = net.res_block(stream)
# fourth network section with final pooling and cross-entropy loss
#with sm.new_section():
with tf.variable_scope('final_pool'):
# global average pooling over image dimensions
stream = tf.reduce_mean(stream, axis=2)
stream = tf.reduce_mean(stream, axis=1)
# final conv for classification
stream = net.add_fc(stream, out_dims=10)
with tf.variable_scope('loss'):
loss = tf.nn.softmax_cross_entropy_with_logits(stream, label_batch)
loss = tf.reduce_mean(loss)
return loss, is_training
def buildBasicNet():
batch_size = 250
image, label = tf.train.slice_input_producer([train_images, train_labels])
image_batch, label_batch = tf.train.batch([image, label], batch_size=batch_size)
# flag for batch normalization layers
is_training = tf.placeholder(name='is_training', shape=[], dtype=tf.bool)
net = BatchnormNet(is_training, image_batch)
# first network section with initial convolution and three residual blocks
with tf.variable_scope('initial_conv'):
stream = net.add_conv(net._inputs, n_filters=16)
stream = net.add_bn(stream)
stream = tf.nn.relu(stream)
with tf.variable_scope('scale0'):
for i in range(3):
with tf.variable_scope('block_%d' % i):
stream = net.res_block(stream)
# second network section strided convolution to decrease the input resolution
with tf.variable_scope('scale1'):
stream = net.res_block(stream, filters_factor=2, first_stride=2)
for i in range(2):
with tf.variable_scope('block_%d' % i):
stream = net.res_block(stream)
# third network section
with tf.variable_scope('scale2'):
stream = net.res_block(stream, filters_factor=2, first_stride=2)
for i in range(4):
with tf.variable_scope('block_%d' % i):
stream = net.res_block(stream)
# fourth network section with final pooling and cross-entropy loss
with tf.variable_scope('final_pool'):
# global average pooling over image dimensions
stream = tf.reduce_mean(stream, axis=2)
stream = tf.reduce_mean(stream, axis=1)
# final conv for classification
stream = net.add_fc(stream, out_dims=10)
with tf.variable_scope('loss'):
        loss = tf.nn.softmax_cross_entropy_with_logits(logits=stream, labels=label_batch)
loss = tf.reduce_mean(loss)
return loss, is_training
from partialflow import GraphSectionManager
# construct network with splits
sm = GraphSectionManager()
with tf.variable_scope('section_net'):
loss_sec, is_training_sec = buildSectionNet(sm)
opt = tf.train.AdamOptimizer(learning_rate=0.0001)
sm.add_training_ops(opt, loss_sec, var_list=tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES), verbose=False)
sm.prepare_training()
# construct same network without splits
with tf.variable_scope('basic_net'):
loss_basic, is_training_basic = buildBasicNet()
opt = tf.train.AdamOptimizer(learning_rate=0.0001)
grads = opt.compute_gradients(loss_basic, var_list=tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES))
train_op = opt.apply_gradients(grads)
# initialize the session
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
_ = tf.train.start_queue_runners(sess=sess)
from time import time
N = 500
losses = np.zeros([2,N], dtype=np.float32)
times = np.zeros([2,N], dtype=np.float32)
for i in range(N):
start = time()
losses[0, i] = sm.run_full_cycle(sess, fetches=loss_sec, basic_feed={is_training_sec:True})
times[0, i] = time() - start
start = time()
_, losses[1, i] = sess.run([train_op, loss_basic], feed_dict={is_training_basic:True})
times[1, i] = time() - start
if i%100 == 0:
print('Processed %d/%d batches' % (i,N))
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(losses.T)
plt.xlabel('Batch')
plt.ylabel('Loss')
_ = plt.legend(['with PartialFlow', 'without PartialFlow'])
plt.plot(times.T)
plt.xlabel('Batch')
plt.ylabel('Duration of Batch [s]')
_ = plt.legend(['with PartialFlow', 'without PartialFlow'])
plt.plot(times[0]/times[1] - 1)
plt.xlabel('Batch')
_ = plt.ylabel('Relative Overhead')
print('Mean relative overhead: %.5f' % (np.mean(times[0]/times[1])-1))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define Network Architectures
Step2: Setup
Step3: Training
Step4: Evaluation
Step5: If PartialFlow works correctly, the losses should be very similar for both networks. The training processes mainly differ in the initializations and the order of the inputs.
Step6: PartialFlow trades additional computation time for lower memory consumption. The time overhead depends on the number and position of splits in the graph. Here, we compare the duration of update operations.
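The overhead comparison in the last step reduces to element-wise ratios of the per-batch durations of the two runs. A minimal sketch with made-up timings (the real notebook measures these with `timer()`):

```python
import numpy as np

# Made-up per-batch durations: row 0 e.g. with graph splits, row 1 without
times = np.array([[1.10, 1.21, 1.08],
                  [1.00, 1.10, 1.00]])

# Per-batch relative overhead of run 0 over run 1, and its mean
relative_overhead = times[0] / times[1] - 1
mean_overhead = np.mean(times[0] / times[1]) - 1
```

Plotting `relative_overhead` over batches is exactly what the final cell above does.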
|
7,095
|
<ASSISTANT_TASK:>
Python Code:
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
# Creating a class called Circulo
class Circulo():
    # The value of pi is constant
pi = 3.14
    # When an object of this class is created, this method runs and the default radius is 5.
def __init__(self, raio = 5):
self.raio = raio
    # This method computes the area; self accesses this object's own attributes
def area(self):
return (self.raio * self.raio) * Circulo.pi
    # Method to set a new radius
def setRaio(self, novo_raio):
self.raio = novo_raio
    # Method to get the circle's radius
def getRaio(self):
return self.raio
# Creating the circ object, an instance of the Circulo() class
circ = Circulo()
# Calling a method of the Circulo class
circ.getRaio()
# Creating another object called circ1, an instance of the Circulo() class
# Now overriding the value of the attribute
circ1 = Circulo(7)
# Calling a method of the Circulo class
circ1.getRaio()
# Printing the radius
print('The radius is: ', circ.getRaio())
# Printing the area
print('Area equals: ', circ.area())
# Setting a new value for the circle's radius
circ.setRaio(3)
# Printing the new radius
print('New radius equals: ', circ.getRaio())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Methods
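As a side note on the explicit getter/setter pattern shown above, the same behaviour is often written with Python's `@property`, which exposes the radius through plain attribute access. The sketch below is an English-named variant for illustration, not part of the original notebook:

```python
# Alternative to the getRaio/setRaio pattern using @property
class Circle:
    PI = 3.14

    def __init__(self, radius=5):
        self._radius = radius

    @property
    def radius(self):            # getter: circle.radius
        return self._radius

    @radius.setter
    def radius(self, value):     # setter: circle.radius = value
        self._radius = value

    def area(self):
        return self._radius ** 2 * Circle.PI
```

Callers then write `circle.radius = 3` instead of `circle.setRaio(3)`, while the class keeps control over the stored value.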
|
7,096
|
<ASSISTANT_TASK:>
Python Code:
from timeit import default_timer as timer
from bokeh.plotting import figure, show, output_notebook
from bokeh.models import GlyphRenderer, LinearColorMapper
from bokeh.io import push_notebook
from numba import jit, njit
from ipywidgets import interact
import numpy as np
import scipy.misc
output_notebook()
# smaller image
img_blur = (scipy.misc.ascent()[::-1,:]/255.0)[:250, :250].copy(order='C')
palette = ['#%02x%02x%02x' %(i,i,i) for i in range(256)]
width, height = img_blur.shape
p_blur = figure(x_range=(0, width), y_range=(0, height))
r_blur = p_blur.image(image=[img_blur], x=[0], y=[0], dw=[width], dh=[height], palette=palette, name='blur')
@njit
def blur(outimg, img, amt):
iw, ih = img.shape
for i in range(amt, iw-amt):
for j in range(amt, ih-amt):
px = 0.
for w in range(-amt//2, amt//2):
for h in range(-amt//2, amt//2):
px += img[i+w, j+h]
outimg[i, j]= px/(amt*amt)
def update(i=0):
level = 2*i + 1
out = img_blur.copy()
ts = timer()
blur(out, img_blur, level)
te = timer()
print('blur takes:', te - ts)
renderer = p_blur.select(dict(name="blur", type=GlyphRenderer))
r_blur.data_source.data['image'] = [out]
push_notebook(handle=t_blur)
t_blur = show(p_blur, notebook_handle=True)
interact(update, i=(0, 10))
@jit
def getitem(img, x, y):
w, h = img.shape
if x >= w:
x = w - 1 - (x - w)
if y >= h:
y = h - 1 - (y - h)
return img[x, y]
def filter_factory(kernel):
ksum = np.sum(kernel)
if ksum == 0:
ksum = 1
k9 = kernel / ksum
@jit
def kernel_apply(img, out, x, y):
tmp = 0
for i in range(3):
for j in range(3):
tmp += img[x+i-1, y+j-1] * k9[i, j]
out[x, y] = tmp
@jit
def kernel_apply_edge(img, out, x, y):
tmp = 0
for i in range(3):
for j in range(3):
tmp += getitem(img, x+i-1, y+j-1) * k9[i, j]
out[x, y] = tmp
@jit
def kernel_k9(img, out):
# Loop through all internals
for x in range(1, img.shape[0] -1):
for y in range(1, img.shape[1] -1):
kernel_apply(img, out, x, y)
# Loop through all the edges
for x in range(img.shape[0]):
kernel_apply_edge(img, out, x, 0)
kernel_apply_edge(img, out, x, img.shape[1] - 1)
for y in range(img.shape[1]):
kernel_apply_edge(img, out, 0, y)
kernel_apply_edge(img, out, img.shape[0] - 1, y)
return kernel_k9
average = np.array([
[1, 1, 1],
[1, 1, 1],
[1, 1, 1],
], dtype=np.float32)
sharpen = np.array([
[-1, -1, -1],
[-1, 12, -1],
[-1, -1, -1],
], dtype=np.float32)
edge = np.array([
[ 0, -1, 0],
[-1, 4, -1],
[ 0, -1, 0],
], dtype=np.float32)
edge_h = np.array([
[ 0, 0, 0],
[-1, 2, -1],
[ 0, 0, 0],
], dtype=np.float32)
edge_v = np.array([
[0, -1, 0],
[0, 2, 0],
[0, -1, 0],
], dtype=np.float32)
gradient_h = np.array([
[-1, -1, -1],
[ 0, 0, 0],
[ 1, 1, 1],
], dtype=np.float32)
gradient_v = np.array([
[-1, 0, 1],
[-1, 0, 1],
[-1, 0, 1],
], dtype=np.float32)
sobol_h = np.array([
[ 1, 2, 1],
[ 0, 0, 0],
[-1, -2, -1],
], dtype=np.float32)
sobol_v = np.array([
[-1, 0, 1],
[-2, 0, 2],
[-1, 0, 1],
], dtype=np.float32)
emboss = np.array([
[-2, -1, 0],
[-1, 1, 1],
[ 0, 1, 2],
], dtype=np.float32)
kernels = {
"average" : filter_factory(average),
"sharpen" : filter_factory(sharpen),
"edge (both)" : filter_factory(edge),
"edge (horizontal)" : filter_factory(edge_h),
"edge (vertical)" : filter_factory(edge_v),
"gradient (horizontal)" : filter_factory(gradient_h),
"gradient (vertical)" : filter_factory(gradient_v),
"sobol (horizontal)" : filter_factory(sobol_h),
"sobol (vertical)" : filter_factory(sobol_v),
"emboss" : filter_factory(emboss),
}
images = {
"ascent" : np.copy(scipy.misc.ascent().astype(np.float32)[::-1, :]),
"face" : np.copy(scipy.misc.face(gray=True).astype(np.float32)[::-1, :]),
}
palette = ['#%02x%02x%02x' %(i,i,i) for i in range(256)]
cm = LinearColorMapper(palette=palette, low=0, high=256)
width, height = images['ascent'].shape
p_kernel = figure(x_range=(0, width), y_range=(0, height))
r_kernel = p_kernel.image(image=[images['ascent']], x=[0], y=[0], dw=[width], dh=[height], color_mapper=cm, name="kernel")
def update(image="ascent", kernel_name="none", scale=100, bias=0):
global _last_kname
global _last_out
img_kernel = images.get(image)
kernel = kernels.get(kernel_name, None)
    if kernel is None:
out = np.copy(img_kernel)
else:
out = np.zeros_like(img_kernel)
ts = timer()
kernel(img_kernel, out)
te = timer()
print('kernel takes:', te - ts)
out *= scale / 100.0
out += bias
r_kernel.data_source.data['image'] = [out]
push_notebook(handle=t_kernel)
t_kernel = show(p_kernel, notebook_handle=True)
knames = ["none"] + sorted(kernels.keys())
interact(update, image=["ascent" ,"face"], kernel_name=knames, scale=(10, 100, 10), bias=(0, 255))
@njit
def wavelet_decomposition(img, tmp):
    """Perform an in-place wavelet decomposition on `img`, with `tmp` as
    a temporary buffer.

    This is a very simple wavelet for demonstration.
    """
w, h = img.shape
halfwidth, halfheight = w//2, h//2
lefthalf, righthalf = tmp[:halfwidth, :], tmp[halfwidth:, :]
# Along first dimension
for x in range(halfwidth):
for y in range(h):
lefthalf[x, y] = (img[2 * x, y] + img[2 * x + 1, y]) / 2
righthalf[x, y] = img[2 * x, y] - img[2 * x + 1, y]
# Swap buffer
img, tmp = tmp, img
tophalf, bottomhalf = tmp[:, :halfheight], tmp[:, halfheight:]
# Along second dimension
for y in range(halfheight):
for x in range(w):
tophalf[x, y] = (img[x, 2 * y] + img[x, 2 * y + 1]) / 2
bottomhalf[x, y] = img[x, 2 * y] - img[x, 2 * y + 1]
return halfwidth, halfheight
img_wavelet = np.copy(scipy.misc.face(gray=True)[::-1, :])
palette = ['#%02x%02x%02x' %(i,i,i) for i in range(256)]
width, height = img_wavelet.shape
p_wavelet = figure(x_range=(0, width), y_range=(0, height))
r_wavelet = p_wavelet.image(image=[img_wavelet], x=[0], y=[0], dw=[width], dh=[height], palette=palette, name="wavelet")
def update(level=0):
out = np.copy(img_wavelet)
tmp = np.zeros_like(img_wavelet)
ts = timer()
hw, hh = img_wavelet.shape
while level > 0 and hw > 1 and hh > 1:
hw, hh = wavelet_decomposition(out[:hw, :hh], tmp[:hw, :hh])
level -= 1
te = timer()
print('wavelet takes:', te - ts)
r_wavelet.data_source.data['image'] = [out]
push_notebook(handle=t_wavelet)
t_wavelet = show(p_wavelet, notebook_handle=True)
interact(update, level=(0, 7))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Gaussian Blur
Step2: 3x3 Image Kernels
Step4: Wavelet Decomposition
|
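The 3x3 kernel cells above can be sketched in pure NumPy, as a minimal illustration of the same interior-pixel convolution without the Numba JIT or edge handling. The image values here are made up for the example:

```python
import numpy as np

def apply_kernel(img, kernel):
    # Normalize by the kernel sum, guarding against zero as filter_factory does
    ksum = kernel.sum() or 1.0
    k = kernel / ksum
    out = np.zeros_like(img)
    # Interior pixels only; edge pixels are left at zero in this sketch
    for x in range(1, img.shape[0] - 1):
        for y in range(1, img.shape[1] - 1):
            out[x, y] = (img[x - 1:x + 2, y - 1:y + 2] * k).sum()
    return out

img = np.arange(25, dtype=np.float32).reshape(5, 5)
average = np.ones((3, 3), dtype=np.float32)
out = apply_kernel(img, average)
```

With the `average` kernel, each interior output pixel is simply the mean of its 3x3 neighborhood, which is what the normalization by `ksum` achieves.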
7,097
|
<ASSISTANT_TASK:>
Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pandas as pd
column_names = [
'sepal length',
'sepal width',
'petal length',
'petal width',
'class'
]
pd.read_csv('iris.data', names=column_names)
import urllib.request
import os
urllib.request.urlretrieve(
'https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.names',
'iris.names')
os.listdir()
import pandas as pd
column_names = [
'sepal length',
'sepal width',
'petal length',
'petal width',
'class'
]
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
pd.read_csv(url, names=column_names)
import pandas as pd
pd.read_csv('oranges-vs-grapefruit.zip')
import zipfile
import os
with zipfile.ZipFile('oranges-vs-grapefruit.zip','r') as z:
z.extractall('./')
os.listdir()
! KAGGLE_CONFIG_DIR=/content/ kaggle datasets download joshmcadams/oranges-vs-grapefruit
! chmod 600 kaggle.json
! ls ~/.kaggle 2>/dev/null || mkdir ~/.kaggle
! mv kaggle.json ~/.kaggle
! kaggle datasets download joshmcadams/oranges-vs-grapefruit
# Your code goes here
# Your code goes here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Acquiring Data
Step2: You should see a DataFrame containing information about iris flowers.
Step3: Downloading With Pandas
Step4: Kaggle Data
Step5: Notice that the file that we loaded was oranges-vs-grapefruit.zip, which is a zip file, not a csv file. Zip files are 'compressed' files. We do this to save space. However, if you were to open oranges-vs-grapefruit.zip in a text editor, you wouldn't be able to read it. Lucky for us, read_csv knows what to do when it receives a compressed file.
Step6: Zip is one of many file compression formats, and it is actually more than just a compression format. Remember when we mentioned above that a zip file might contain multiple files? The combining of one or more files is known as archiving. The reduction in size of files is known as compression. Zip is actually an archiving and compression algorithm.
Step7: You should see text similar to
Step8: You might also be wondering what that KAGGLE_CONFIG_DIR=/content/ in front of the kaggle command was.
Step9: And then move the file.
Step10: Now you can run the kaggle command without having to set the configuration directory.
Step11: Note that you'll have to repeat this process every time your virtual machine resets. The setup will live through reloads though.
Step12: Exercise 2
|
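The zip discussion above (archiving several files together vs. compressing them) can be illustrated with the standard-library `zipfile` module alone; the file names and contents below are made up for the example:

```python
import io
import zipfile

# Archive two small files into one zip, compressed with DEFLATE
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w', compression=zipfile.ZIP_DEFLATED) as z:
    z.writestr('fruit.csv', 'diameter,color\n10,orange\n12,yellow\n')
    z.writestr('README.txt', 'toy dataset')

# Reading back: list the archive members and extract one file's text
with zipfile.ZipFile(buf, 'r') as z:
    names = z.namelist()
    text = z.read('fruit.csv').decode()
```

This is also why `read_csv` can take a zip directly: when the archive holds a single CSV member, pandas decompresses and parses it in one step.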
7,098
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import print_function
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.formula.api import logit, probit, poisson, ols
print(sm.datasets.fair.SOURCE)
print( sm.datasets.fair.NOTE)
dta = sm.datasets.fair.load_pandas().data
dta['affair'] = (dta['affairs'] > 0).astype(float)
print(dta.head(10))
print(dta.describe())
affair_mod = logit("affair ~ occupation + educ + occupation_husb"
"+ rate_marriage + age + yrs_married + children"
" + religious", dta).fit()
print(affair_mod.summary())
affair_mod.pred_table()
mfx = affair_mod.get_margeff()
print(mfx.summary())
respondent1000 = dta.loc[1000]  # .ix was removed from pandas; use .loc
print(respondent1000)
resp = dict(zip(range(1,9), respondent1000[["occupation", "educ",
"occupation_husb", "rate_marriage",
"age", "yrs_married", "children",
"religious"]].tolist()))
resp.update({0 : 1})
print(resp)
mfx = affair_mod.get_margeff(atexog=resp)
print(mfx.summary())
affair_mod.predict(respondent1000)
affair_mod.fittedvalues[1000]
affair_mod.model.cdf(affair_mod.fittedvalues[1000])
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
support = np.linspace(-6, 6, 1000)
ax.plot(support, stats.logistic.cdf(support), 'r-', label='Logistic')
ax.plot(support, stats.norm.cdf(support), label='Probit')
ax.legend();
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
support = np.linspace(-6, 6, 1000)
ax.plot(support, stats.logistic.pdf(support), 'r-', label='Logistic')
ax.plot(support, stats.norm.pdf(support), label='Probit')
ax.legend();
print(sm.datasets.star98.SOURCE)
print(sm.datasets.star98.DESCRLONG)
print(sm.datasets.star98.NOTE)
dta = sm.datasets.star98.load_pandas().data
print(dta.columns)
print(dta[['NABOVE', 'NBELOW', 'LOWINC', 'PERASIAN', 'PERBLACK', 'PERHISP', 'PERMINTE']].head(10))
print(dta[['AVYRSEXP', 'AVSALK', 'PERSPENK', 'PTRATIO', 'PCTAF', 'PCTCHRT', 'PCTYRRND']].head(10))
formula = 'NABOVE + NBELOW ~ LOWINC + PERASIAN + PERBLACK + PERHISP + PCTCHRT '
formula += '+ PCTYRRND + PERMINTE*AVYRSEXP*AVSALK + PERSPENK*PTRATIO*PCTAF'
stats.binom(5, 1./6).pmf(2)
from scipy.special import comb  # scipy.misc.comb was removed; use scipy.special.comb
comb(5,2) * (1/6.)**2 * (5/6.)**3
from statsmodels.formula.api import glm
glm_mod = glm(formula, dta, family=sm.families.Binomial()).fit()
print(glm_mod.summary())
glm_mod.model.data.orig_endog.sum(1)
glm_mod.fittedvalues * glm_mod.model.data.orig_endog.sum(1)
exog = glm_mod.model.data.orig_exog # get the dataframe
means25 = exog.mean()
print(means25)
means25['LOWINC'] = exog['LOWINC'].quantile(.25)
print(means25)
means75 = exog.mean()
means75['LOWINC'] = exog['LOWINC'].quantile(.75)
print(means75)
resp25 = glm_mod.predict(means25)
resp75 = glm_mod.predict(means75)
diff = resp75 - resp25
print("%2.4f%%" % (diff[0]*100))
nobs = glm_mod.nobs
y = glm_mod.model.endog
yhat = glm_mod.mu
from statsmodels.graphics.api import abline_plot
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111, ylabel='Observed Values', xlabel='Fitted Values')
ax.scatter(yhat, y)
y_vs_yhat = sm.OLS(y, sm.add_constant(yhat, prepend=True)).fit()
fig = abline_plot(model_results=y_vs_yhat, ax=ax)
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111, title='Residual Dependence Plot', xlabel='Fitted Values',
ylabel='Pearson Residuals')
ax.scatter(yhat, stats.zscore(glm_mod.resid_pearson))
ax.axis('tight')
ax.plot([0.0, 1.0],[0.0, 0.0], 'k-');
resid = glm_mod.resid_deviance
resid_std = stats.zscore(resid)
kde_resid = sm.nonparametric.KDEUnivariate(resid_std)
kde_resid.fit()
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111, title="Standardized Deviance Residuals")
ax.hist(resid_std, bins=25, density=True);  # `normed` was removed in matplotlib 3; use `density`
ax.plot(kde_resid.support, kde_resid.density, 'r');
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
fig = sm.graphics.qqplot(resid, line='r', ax=ax)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: How well are we predicting?
Step2: The coefficients of the discrete choice model do not tell us much. What we're after is marginal effects.
Step3: The "correct" model here is likely the Tobit model. We have an work in progress branch "tobit-model" on github, if anyone is interested in censored regression models.
Step4: Compare the estimates of the Logit Fair model above to a Probit model. Does the prediction table look better? Much difference in marginal effects?
Step5: Aside
Step6: The number of trials
Step7: First differences
Step8: The interquartile first difference for the percentage of low income households in a school district is
Step9: Plot fitted values vs Pearson residuals
Step10: Histogram of standardized deviance residuals with Kernel Density Estimate overlayed
Step11: QQ-plot of deviance residuals
|
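The "first differences" step can be sketched numerically: hold everything else fixed, move one regressor from its 25th to its 75th percentile, and compare predicted probabilities. The coefficients and percentile values below are hypothetical, not the fitted GLM from the notebook:

```python
import numpy as np

def logistic(x):
    # Inverse link of the binomial GLM with a logit link
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical intercept and slope for a single regressor
beta = np.array([-1.0, 0.05])
x25, x75 = 10.0, 30.0  # hypothetical 25th/75th percentile of that regressor

p25 = logistic(beta[0] + beta[1] * x25)
p75 = logistic(beta[0] + beta[1] * x75)
diff = p75 - p25  # interquartile first difference in predicted probability
```

Because the link is nonlinear, this difference depends on where the other covariates are held (here implicitly at zero), which is why the notebook evaluates at the sample means.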
7,099
|
<ASSISTANT_TASK:>
Python Code:
from pathlib import Path
import requests
data_path = Path('data')
path = data_path/'mnist'
path.mkdir(parents=True, exist_ok=True)
url = 'http://deeplearning.net/data/mnist/'
filename = 'mnist.pkl.gz'
(path/filename)
if not (path/filename).exists():
content = requests.get(url+filename).content
(path/filename).open('wb').write(content)
import pickle, gzip
with gzip.open(path/filename, 'rb') as f:
((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding='latin-1')
%matplotlib inline
from matplotlib import pyplot
import numpy as np
pyplot.imshow(x_train[0].reshape((28,28)), cmap="gray")
x_train.shape
import torch
x_train,y_train,x_valid,y_valid = map(torch.tensor, (x_train,y_train,x_valid,y_valid))
n,c = x_train.shape
x_train, x_train.shape, y_train.min(), y_train.max()
import math
weights = torch.rand(784, 10)/math.sqrt(784)
weights.requires_grad_()
bias = torch.zeros(10, requires_grad=True)
def log_softmax(x): return x - x.exp().sum(-1).log().unsqueeze(-1)
def model(xb): return log_softmax(xb @ weights + bias)
bs = 64
xb = x_train[0:bs] # a mini-batch from x
xb.shape, xb.sum(-1).shape
preds = model(xb)
preds[0], preds.shape
def nll(input, target): return -input[range(target.shape[0]), target].mean()
loss_func = nll
yb = y_train[0:bs]
loss_func(preds, yb)
preds[0]
((x_train[0:bs]@weights+bias) - (x_train[0:bs]@weights+bias).exp().sum(-1).log().unsqueeze(-1))[0]
preds[0]
nll(preds, yb)
-preds[range(yb.shape[0]), yb].mean()
type(preds)
preds[range(0)]
preds[0]
preds[range(1)]
preds[range(2)]
preds[:2]
type(preds)
np.array([[range(10)]])[range(1)]
A = np.array([[range(10)]])
A.shape
A[range(2)]
A.shape
len(A[0])
A.shape[0]
A[0]
A[range(1)]
xb.sum()
xb.numpy().sum(-1)
xb.sum(-1)
xb.sum(-1)
xb[0].sum()
xb.exp().sum(-1).log()
xb.exp().sum(-1).log()[0]
(xb.exp().sum(-1).log())[0]
xb.exp().sum(-1).log().unsqueeze(-1)[:10]
np.array([i for i in range(10)]).shape
torch.Tensor([i for i in range(10)]).shape
xb.exp().sum(-1).log().unsqueeze(-1).numpy().shape
xb.exp().sum(-1).log()[:10]
preds.unsqueeze(-1).shape
preds.unsqueeze(-1)[:2]
# logsoftmax(xb)
ls_xb = log_softmax(xb)
log_softmax(xb@weights+bias)[0]
(xb@weights).shape
xb.shape
(xb@weights).shape
# for reference:
xb = x_train[0:bs]
yb = y_train[0:bs]
def log_softmax(x): return x - x.exp().sum(-1).log().unsqueeze(-1)
def model(xb): return log_softmax(xb @ weights + bias)
preds = model(xb)
def nll(input, target): return -input[range(target.shape[0]), target].mean()
loss = nll(preds, yb)
loss
xb, xb.shape
(xb @ weights + bias)[:2]
(xb @ weights + bias).shape
log_softmax(xb@weights+bias)[:2]
log_softmax(xb@weights+bias).shape
nll(log_softmax(xb@weights+bias), yb)
[range(yb.shape[0]), yb]
xb[yb]
xb.shape, yb.shape
array_1 = np.array([[str(j)+str(i) for i in range(10)] for j in range(5)])
array_1
array_2 = np.array([i for i in range(len(array_1[0]))])
array_2
array_1[range(array_2.shape[0]), array_2]
# for reference (again):
xb = x_train[0:bs]
yb = y_train[0:bs]
def log_softmax(x): return x - x.exp().sum(-1).log().unsqueeze(-1)
def model(xb): return log_softmax(xb @ weights + bias)
preds = model(xb)
def nll(input, target): return -input[range(target.shape[0]), target].mean()
loss = nll(preds, yb)
xb.shape, weights.shape
np.array([[1,1,1],[2,2,2],[3,3,3]]) @ np.array([[1],[2],[3]])
np.array([[1,1,1],[2,2,2],[-11,0,3]]) @ np.array([[1],[2],[3]])
yb.type()
# batch size of 3
xb_tmp = np.array([[1,1,1,1,1],[2,2,2,2,2],[3,3,3,3,3]])
yb_tmp = np.array([0,1,2])
# 4 classes
c = 4
w_tmp = np.array([[i for i in range(c)] for j in range(xb_tmp.shape[1])])
xb_tmp = torch.Tensor(xb_tmp)
yb_tmp = torch.tensor(yb_tmp, dtype=torch.int64) # see: https://pytorch.org/docs/stable/tensors.html#torch-tensor
w_tmp = torch.Tensor(w_tmp)
torch.tensor([[1, 2, 3]],dtype=torch.int32)
xb_tmp.shape, yb_tmp.shape, w_tmp.shape
xb.shape, yb.shape, weights.shape
actv_tmp = log_softmax(xb_tmp @ w_tmp)
actv_tmp
nll(actv_tmp, yb_tmp)
# batch size of 3
xb_tmp = np.array([[0,1,1,0,0]])
yb_tmp = np.array([1])
# 4 classes
c = 4
w_tmp = np.array([[i for i in range(c)] for j in range(xb_tmp.shape[1])])
xb_tmp = torch.Tensor(xb_tmp)
yb_tmp = torch.tensor(yb_tmp, dtype=torch.int64) # see: https://pytorch.org/docs/stable/tensors.html#torch-tensor
w_tmp = torch.Tensor(w_tmp)
xb_tmp @ w_tmp
# LogSoftmax(activations)
actv_tmp = log_softmax(xb_tmp @ w_tmp)
actv_tmp
# NLL Loss
loss = nll(actv_tmp, yb_tmp)
loss
def cross_test(x, y):
# batch size of 3
xb_tmp = np.array(x)
yb_tmp = np.array(y)
# 4 classes
c = 4
w_tmp = np.array([[i for i in range(c)] for j in range(xb_tmp.shape[1])])
xb_tmp = torch.Tensor(xb_tmp)
yb_tmp = torch.tensor(yb_tmp, dtype=torch.int64) # see: https://pytorch.org/docs/stable/tensors.html#torch-tensor
w_tmp = torch.Tensor(w_tmp)
print(f'Activation: {xb_tmp @ w_tmp}')
# LogSoftmax(activations)
actv_tmp = log_softmax(xb_tmp @ w_tmp)
print(f'Log Softmax: {actv_tmp}')
# NLL Loss
loss = nll(actv_tmp, yb_tmp)
print(f'NLL Loss: {loss}')
w_tmp
cross_test([[1,1,1,1,1]], [1])
cross_test([[1,1,1,1,0]], [1])
cross_test([[1,1,1,0,0]], [1])
cross_test([[1,1,1,1,0]], [1])
cross_test([[1,1,0,0,0]], [1])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: the torch.Tensor.sum(dim) call takes an integer argument as the axis along which to sum. This applies to NumPy arrays as well.
Step2: torch.unsqueeze returns a tensor with a dimension of size 1 inserted at the specified position.
Step3: taking a look at what .unsqueeze does; what does the tensor look like right before unsqueeze is applied to it?
Step4: making sure I didn't need parentheses there
Step5: Okay so .unsqueeze turns the size 64 tensor into a 64x1 tensor, so it's nicely packaged up with the first element being the 64-long vector ... or something like that right?
Step6: The unsqueezed tensor doesn't look as 'nice'.. I guess. So it's packaged into a single column vector because we'll need that for the linear algebra we'll do to it later yeah?
Step7: Oh this is cool. I was wondering how .unsqeeze worked for tensors with multiple items in multiple dimensions (ie
Step8: So .unsqueeze turns our size 64x10 ... ohhhhhhhh I misread
Step9: Oh this is where I was confused. I'm not throwing xb into Log Softmax. I'm throwing xb • w + bias. The shape going into the log softmax function is not 64x784, it's 64x10. Yeah that makes sense. well duh it has to. Each value in the tensor is an activation for a class, for each image in the minibatch. So by the magic of machine learning, each activation encapsulates the effect of the weights and biases on that input element with respect to that class.
Step10: Note the loss equals that in cell Out[25] above as it should.
Step11: The minibatch's activations as they head into the Log Softmax
Step12: The minibatch activations after the Log Softmax and before heading into Negative Log Likelihood
Step13: The loss value computed via NLL on the Log Softmax activations
Step14: Okay. Now questions. What is indexing input by [range(target.shape[0]), target] supposed to be doing? I established before that A[range(n)] is valid if n ≤ A.shape[0]. So what's going on is I'm range-indexing the 1st dimension of the LogSoftmax activations with the length of the target tensor, and the rest of the dimension indices being the ..target tensor itself?
Step15: Okay. What does it look like when I index a tensor – forget range-idx for now – with another tensor?
Step16: Okay..
Step17: Uh, moment of truth
Step18: Oof course. What happened. Is it.. yes. I'm indexing the wrong array. Also no value in target is greater than the number of classes ... oh... oh ffs. Okay.
Step19: When the activations are activating, only the weights and biases are having a say. Right?
Step20: Right.
Step21: umm....
Step22: Good it works. Now to change things. The question was if any of the dropped values (non-target index) had any effect on the loss - since the loss was only calculated on error from the correct target. Basically
|
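The log-softmax / NLL pipeline explored above condenses into a short NumPy sketch. It follows the same math as the notebook's `log_softmax` and `nll`, with a max-subtraction added for numerical stability; the logits below are made up:

```python
import numpy as np

def log_softmax(x):
    # Subtract the row max first so exp() cannot overflow, then normalize in log space
    x = x - x.max(axis=-1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

def nll(log_probs, targets):
    # Pick the log-probability of the correct class per row,
    # exactly what input[range(n), target] does in the notebook
    n = targets.shape[0]
    return -log_probs[np.arange(n), targets].mean()

logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.3]])
targets = np.array([0, 1])
loss = nll(log_softmax(logits), targets)
```

As the `cross_test` experiments suggest, only the target-class entries enter the loss directly, but every logit still matters through the normalizing sum inside `log_softmax`.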