| text_prompt | code_prompt |
|---|---|
| stringlengths 168–30.3k | stringlengths 67–124k |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's do a basic statistical calculation on the data using code! Run the block of code below to calculate the average "Food Pinching Efficiency" for all 31 participants and all chopstick lengths.
Step2: This number is helpful, but it doesn't tell us which of the chopstick lengths performed best for the thirty-one male junior college students. Let's break down the data by chopstick length. The next block of code will generate the average "Food Pinching Efficiency" for each chopstick length. Run the block of code below.
Step3: 5. Which chopstick length performed the best for the group of thirty-one male junior college students?
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
# pandas is a software library for data manipulation and analysis
# We commonly use shorter nicknames for certain packages. Pandas is often abbreviated to pd.
# hit shift + enter to run this cell or block of code
path = r'chopstick-effectiveness.csv'
# Change the path to the location where the chopstick-effectiveness.csv file is located on your computer.
# If you get an error when running this block of code, be sure the chopstick-effectiveness.csv is located at the path on your computer.
dataFrame = pd.read_csv(path)
dataFrame
dataFrame['Food.Pinching.Efficiency'].mean()
meansByChopstickLength = dataFrame.groupby('Chopstick.Length')['Food.Pinching.Efficiency'].mean().reset_index()
meansByChopstickLength
# reset_index() changes Chopstick.Length from an index to column. Instead of the index being the length of the chopsticks, the index is the row numbers 0, 1, 2, 3, 4, 5.
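# Illustrative follow-up for question 5 (not in the original notebook):
# the row with the highest average efficiency
meansByChopstickLength.loc[meansByChopstickLength['Food.Pinching.Efficiency'].idxmax()]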
# Causes plots to display within the notebook rather than in a new window
%matplotlib inline
import matplotlib.pyplot as plt
plt.scatter(x=meansByChopstickLength['Chopstick.Length'], y=meansByChopstickLength['Food.Pinching.Efficiency'])
# title="")
plt.xlabel("Length in mm")
plt.ylabel("Efficiency in PPPC")
plt.title("Average Food Pinching Efficiency by Chopstick Length")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The first step in implementing any problem in FiPy is to define the mesh. For Problem 1a the solution domain is just a square domain, but the boundary conditions are periodic, so a PeriodicGrid2D object is used. No other boundary conditions are required.
Step2: The next step is to define the parameters and create a solution variable.
Step3: Now we need to define the initial conditions given by,
Step4: Define $f_0$
Step5: Define the Equation
Step6: Solve the Equation
Step7: Run the Example Locally
Step8: Movie of Evolution
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import sympy
import fipy as fp
import numpy as np
A, c, c_m, B, c_alpha, c_beta = sympy.symbols("A c_var c_m B c_alpha c_beta")
f_0 = - A / 2 * (c - c_m)**2 + B / 4 * (c - c_m)**4 + c_alpha / 4 * (c - c_alpha)**4 + c_beta / 4 * (c - c_beta)**4
print(f_0)
sympy.diff(f_0, c, 2)
mesh = fp.Grid2D(dx=0.5, dy=0.5, nx=40, ny=200) + (fp.Grid2D(dx=0.5, dy=0.5, nx=200, ny=40) + [[-40],[100]])
c_alpha = 0.05
c_beta = 0.95
A = 2.0
kappa = 2.0
c_m = (c_alpha + c_beta) / 2.
B = A / (c_alpha - c_m)**2
D = D_alpha = D_beta = 2. / (c_beta - c_alpha)
c_0 = 0.45
q = np.sqrt((2., 3.))
epsilon = 0.01
c_var = fp.CellVariable(mesh=mesh, name=r"$c$", hasOld=True)
r = np.array((mesh.x, mesh.y))
c_var[:] = c_0 + epsilon * np.cos((q[:, None] * r).sum(0))
viewer = fp.Viewer(c_var)
out = sympy.diff(f_0, c, 2)
# The sympy symbol for concentration is deliberately named "c_var" (see above),
# so repr(out) references the FiPy CellVariable when evaluated here.
exec("f_0_var = " + repr(out))
#f_0_var = -A + 3*B*(c_var - c_m)**2 + 3*c_alpha*(c_var - c_alpha)**2 + 3*c_beta*(c_var - c_beta)**2
f_0_var
eqn = fp.TransientTerm(coeff=1.) == fp.DiffusionTerm(D * f_0_var) - fp.DiffusionTerm((D, kappa))
eqn
elapsed = 0.0
steps = 0
dt = 0.01
total_sweeps = 2
tolerance = 1e-1
total_steps = 10
c_var[:] = c_0 + epsilon * np.cos((q[:, None] * r).sum(0))
c_var.updateOld()
from fipy.solvers.pysparse import LinearLUSolver as Solver
solver = Solver()
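# Adaptive time stepping (a sketch of what the loop below does): a step is
# accepted -- and dt grown by 10% -- only when the final sweep residual falls
# below tolerance * the first sweep's residual; otherwise dt shrinks by 20%
# and the step is retried from the old field.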
while steps < total_steps:
res0 = eqn.sweep(c_var, dt=dt, solver=solver)
for sweeps in range(total_sweeps):
res = eqn.sweep(c_var, dt=dt, solver=solver)
if res < res0 * tolerance:
steps += 1
elapsed += dt
dt *= 1.1
c_var.updateOld()
else:
dt *= 0.8
c_var[:] = c_var.old
viewer.plot()
print('elapsed_time:', elapsed)
%%writefile fipy_hackathon_1c.py
import fipy as fp
import numpy as np
mesh = fp.Grid2D(dx=0.5, dy=0.5, nx=40, ny=200) + (fp.Grid2D(dx=0.5, dy=0.5, nx=200, ny=40) + [[-40],[100]])
c_alpha = 0.05
c_beta = 0.95
A = 2.0
kappa = 2.0
c_m = (c_alpha + c_beta) / 2.
B = A / (c_alpha - c_m)**2
D = D_alpha = D_beta = 2. / (c_beta - c_alpha)
c_0 = 0.45
q = np.sqrt((2., 3.))
epsilon = 0.01
c_var = fp.CellVariable(mesh=mesh, name=r"$c$", hasOld=True)
r = np.array((mesh.x, mesh.y))
c_var[:] = c_0 + epsilon * np.cos((q[:, None] * r).sum(0))
f_0_var = -A + 3*B*(c_var - c_m)**2 + 3*c_alpha*(c_var - c_alpha)**2 + 3*c_beta*(c_var - c_beta)**2
eqn = fp.TransientTerm(coeff=1.) == fp.DiffusionTerm(D * f_0_var) - fp.DiffusionTerm((D, kappa))
elapsed = 0.0
steps = 0
dt = 0.01
total_sweeps = 2
tolerance = 1e-1
total_steps = 600
c_var[:] = c_0 + epsilon * np.cos((q[:, None] * r).sum(0))
c_var.updateOld()
from fipy.solvers.pysparse import LinearLUSolver as Solver
solver = Solver()
viewer = fp.Viewer(c_var)
while steps < total_steps:
res0 = eqn.sweep(c_var, dt=dt, solver=solver)
for sweeps in range(total_sweeps):
res = eqn.sweep(c_var, dt=dt, solver=solver)
    print(' ')
    print('steps', steps)
    print('res', res)
    print('sweeps', sweeps)
    print('dt', dt)
if res < res0 * tolerance:
steps += 1
elapsed += dt
dt *= 1.1
if steps % 1 == 0:
viewer.plot('image{0}.png'.format(steps))
c_var.updateOld()
else:
dt *= 0.8
c_var[:] = c_var.old
from IPython.display import YouTubeVideo
scale = 1.5
YouTubeVideo('aZk38E7OxcQ', width=420 * scale, height=315 * scale, rel=0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Input Normalization
Step2: Word2Vec Model
Step3: Training CBOW model
Step4: Model Details
Step5: Word Similarity
Step6: Training skip-gram model
|
<ASSISTANT_TASK:>
Python Code:
import re # Regular Expressions
import pandas as pd # DataFrames & Manipulation
from gensim.models.word2vec import Word2Vec
train_input = "../data/recipes.tsv.bz2"
# preserve empty strings (http://pandas-docs.github.io/pandas-docs-travis/io.html#na-values)
train = pd.read_csv(train_input, delimiter="\t", quoting=3, encoding="utf-8", keep_default_na=False)
print "loaded %d documents." % len(train)
train[['title', 'instructions']].head()
def normalize(text):
norm_text = text.lower()
for char in ['.', '"', ',', '(', ')', '!', '?', ';', ':']:
norm_text = norm_text.replace(char, ' ' + char + ' ')
return norm_text
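# Quick sanity check of the normalizer (illustrative, not in the original notebook):
print(normalize("Preheat the oven, then bake.").split())
# expected tokens: ['preheat', 'the', 'oven', ',', 'then', 'bake', '.']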
sentences = [normalize(text).split() for text in train['instructions']]
print "%d documents in corpus" % len(sentences)
num_features = 100 # Word vector dimensionality
min_word_count = 10 # Minimum word count
num_workers = 4 # Number of threads to run in parallel
context = 10 # Context window size
downsampling = 1e-3 # Downsample setting for frequent words
# Import the built-in logging module and configure it so that Word2Vec creates nice output messages
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
print "Training CBOW model..."
model = Word2Vec(
sentences,
workers=num_workers,
size=num_features,
min_count = min_word_count,
window = context,
sample = downsampling)
# make the model much more memory-efficient.
model.init_sims(replace=True)
model_name = "model-w2v_cbow_%dfeatures_%dminwords_%dcontext" % (num_features, min_word_count, context)
model.save(model_name)
print "%d words in vocabulary." % len(model.wv.vocab)
vocab = [(k, v.count) for k, v in model.wv.vocab.items()]
pd.DataFrame.from_records(vocab, columns=['word', 'count']).sort_values('count', ascending=False).reset_index(drop=True)
model.most_similar("pasta", topn=20)
model.most_similar("ofen")
print "Training skip-gram model..."
model2 = Word2Vec(
sentences,
sg = 1,
hs = 1,
workers=num_workers,
size=num_features,
min_count = min_word_count,
window = context,
sample = downsampling)
# make the model much more memory-efficient.
model2.init_sims(replace=True)
model_name = "recipes_skip-gram_%dfeatures_%dminwords_%dcontext" % (num_features, min_word_count, context)
model2.save(model_name)
model2.most_similar("pasta")
model2.most_similar("ofen")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: General information on the Gapminder data
Step2: Variables distribution
Step3: Income per person
Step4: From the distribution graph, we can see that the distribution is right-skewed and bimodal. There is also a singular case with a very high income per capita. That country is
Step5: For this assignment the explanatory variable income per person will have to be centered as its mean is 8784.5.
Step6: The residential electricity consumption is also right-skewed. And there are also a couple of countries presenting unusually high values. Those countries are
Step7: Bivariate distribution
Step8: Centering the explanatory variable
Step9: OLS regression model
|
<ASSISTANT_TASK:>
Python Code:
# Magic command to insert the graph directly in the notebook
%matplotlib inline
# Load a useful Python libraries for handling data
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
import scipy.stats as stats
import seaborn as sns
import matplotlib.pyplot as plt
from IPython.display import Markdown, display
# Read the data
data_filename = r'gapminder.csv'
data = pd.read_csv(data_filename, low_memory=False)
data = data.set_index('country')
display(Markdown("Number of countries: {}".format(len(data))))
display(Markdown("Number of variables: {}".format(len(data.columns))))
subdata2 = (data[['incomeperperson', 'relectricperperson']]
.assign(income=lambda x: pd.to_numeric(data['incomeperperson'], errors='coerce'),
electricity=lambda x: pd.to_numeric(data['relectricperperson'], errors='coerce'))
.dropna())
sns.distplot(subdata2['income'], kde=False)
plt.xlabel('Income per person (2000 US$)');
subdata2.loc[subdata2['income'] > 45000]
subdata2['income'].describe()
sns.distplot(subdata2['electricity'], kde=False)
plt.xlabel('Residential electricity consumption (kWh)');
subdata2['electricity'].describe()
subdata2.loc[subdata2['electricity'] > 6000]
sns.regplot(x='income', y='electricity', data=subdata2)
plt.xlabel('Income per person (2000 US$)')
plt.ylabel('Residential electricity consumption (kWh)')
plt.title('Scatterplot for the association between the income and the residential electricity consumption');
subdata3 = subdata2.assign(income_centered=lambda x: x['income']-subdata2['income'].mean())
display(Markdown("Income mean after centereing : {:3g}".format(subdata3['income_centered'].mean())))
sns.regplot(x='income_centered', y='electricity', data=subdata3)
plt.xlabel('Centered income per person (2000 US$)')
plt.ylabel('Residential electricity consumption (kWh)')
plt.title('Scatterplot for the association between the income and the residential electricity consumption');
reg1 = smf.ols('electricity ~ income_centered', data=subdata3).fit()
reg1.summary()
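# Illustrative follow-up (not in the original notebook): just the fitted coefficients
print(reg1.params)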
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
def isBalanced(s):
    # Standard stack-based check for a balanced parenthesis string
    st = list()
    n = len(s)
    for i in range(n):
        if s[i] == '(':
            st.append(s[i])
        else:
            if len(st) == 0:
                return False
            else:
                st.pop()
    if len(st) > 0:
        return False
    return True

def isBalancedSeq(s1, s2):
    # The pair forms a balanced sequence if either concatenation order is balanced
    if isBalanced(s1 + s2):
        return True
    return isBalanced(s2 + s1)

if __name__ == "__main__":
    s1 = ")()(())))"
    s2 = "(()(()("
    if isBalancedSeq(s1, s2):
        print("Balanced")
    else:
        print("Not Balanced")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get TFIDF scores for corpus without pivoted document length normalisation
Step2: Get TFIDF scores for corpus with pivoted document length normalisation testing on various values of alpha.
Step3: Visualizing the pivoted normalization
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from sklearn.linear_model import LogisticRegression
from gensim.corpora import Dictionary
from gensim.sklearn_api.tfidf import TfIdfTransformer
from gensim.matutils import corpus2csc
import numpy as np
import matplotlib.pyplot as py
import gensim.downloader as api
# This function returns the model accuracy and individual document prob values using
# gensim's TfIdfTransformer and sklearn's LogisticRegression
def get_tfidf_scores(kwargs):
tfidf_transformer = TfIdfTransformer(**kwargs).fit(train_corpus)
X_train_tfidf = corpus2csc(tfidf_transformer.transform(train_corpus), num_terms=len(id2word)).T
X_test_tfidf = corpus2csc(tfidf_transformer.transform(test_corpus), num_terms=len(id2word)).T
clf = LogisticRegression().fit(X_train_tfidf, y_train)
model_accuracy = clf.score(X_test_tfidf, y_test)
doc_scores = clf.decision_function(X_test_tfidf)
return model_accuracy, doc_scores
# Sort the document scores by their scores and return a sorted list
# of document score and corresponding document lengths.
def sort_length_by_score(doc_scores, X_test):
doc_scores = sorted(enumerate(doc_scores), key=lambda x: x[1])
doc_leng = np.empty(len(doc_scores))
ds = np.empty(len(doc_scores))
for i, _ in enumerate(doc_scores):
doc_leng[i] = len(X_test[_[0]])
ds[i] = _[1]
return ds, doc_leng
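# For intuition, a minimal sketch of pivoted length normalization as described
# in gensim's documentation (an assumed form, not the library internals): a
# document's norm is blended toward a fixed pivot, so long documents are
# penalized less than under plain cosine normalization.
def pivoted_norm(old_norm, pivot, slope):
    return (1 - slope) * pivot + slope * old_norm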
nws = api.load("20-newsgroups")
cat1, cat2 = ('sci.electronics', 'sci.space')
X_train = []
X_test = []
y_train = []
y_test = []
for i in nws:
if i["set"] == "train" and i["topic"] == cat1:
X_train.append(i["data"])
y_train.append(0)
elif i["set"] == "train" and i["topic"] == cat2:
X_train.append(i["data"])
y_train.append(1)
elif i["set"] == "test" and i["topic"] == cat1:
X_test.append(i["data"])
y_test.append(0)
elif i["set"] == "test" and i["topic"] == cat2:
X_test.append(i["data"])
y_test.append(1)
id2word = Dictionary([_.split() for _ in X_train])
train_corpus = [id2word.doc2bow(i.split()) for i in X_train]
test_corpus = [id2word.doc2bow(i.split()) for i in X_test]
print(len(X_train), len(X_test))
# We perform our analysis on the top k documents, which is roughly the top 10% most scored documents
k = len(X_test) // 10  # integer division, since k is used as a slice index below
params = {}
model_accuracy, doc_scores = get_tfidf_scores(params)
print(model_accuracy)
print(
"Normal cosine normalisation favors short documents as our top {} "
"docs have a smaller mean doc length of {:.3f} compared to the corpus mean doc length of {:.3f}"
.format(
k, sort_length_by_score(doc_scores, X_test)[1][:k].mean(),
sort_length_by_score(doc_scores, X_test)[1].mean()
)
)
best_model_accuracy = 0
optimum_slope = 0
for slope in np.arange(0, 1.1, 0.1):
params = {"pivot": 10, "slope": slope}
model_accuracy, doc_scores = get_tfidf_scores(params)
if model_accuracy > best_model_accuracy:
best_model_accuracy = model_accuracy
optimum_slope = slope
print("Score for slope {} is {}".format(slope, model_accuracy))
print("We get best score of {} at slope {}".format(best_model_accuracy, optimum_slope))
params = {"pivot": 10, "slope": optimum_slope}
model_accuracy, doc_scores = get_tfidf_scores(params)
print(model_accuracy)
print(
"With pivoted normalisation top {} docs have mean length of {:.3f} "
"which is much closer to the corpus mean doc length of {:.3f}"
.format(
k, sort_length_by_score(doc_scores, X_test)[1][:k].mean(),
sort_length_by_score(doc_scores, X_test)[1].mean()
)
)
best_model_accuracy = 0
optimum_slope = 0
w = 2
h = 1  # two plots in a single row, matching the subplot(1, 2, ...) calls below
f, axarr = py.subplots(h, w, figsize=(15, 7))
it = 0
for slope in [1, 0.2]:
params = {"pivot": 10, "slope": slope}
model_accuracy, doc_scores = get_tfidf_scores(params)
if model_accuracy > best_model_accuracy:
best_model_accuracy = model_accuracy
optimum_slope = slope
doc_scores, doc_leng = sort_length_by_score(doc_scores, X_test)
y = abs(doc_scores[:k, np.newaxis])
x = doc_leng[:k, np.newaxis]
py.subplot(1, 2, it+1).bar(x, y, linewidth=10.)
py.title("slope = " + str(slope) + " Model accuracy = " + str(model_accuracy))
py.ylim([0, 4.5])
py.xlim([0, 3200])
py.xlabel("document length")
py.ylabel("confidence score")
it += 1
py.tight_layout()
py.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data preprocessing
Step2: Encoding the words
Step3: Encoding the labels
Step4: If you built labels correctly, you should see the next output.
Step5: Okay, a couple of issues here. We seem to have one review with zero length. And the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Step6: Exercise
Step7: If you build features correctly, it should look like that cell output below.
Step8: Training, Validation, Test
Step9: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step10: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Step11: Embedding
Step12: LSTM cell
Step13: RNN forward pass
Step14: Output
Step15: Validation accuracy
Step16: Batching
Step17: Training
Step18: Testing
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
with open('../sentiment-network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment-network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
unique = set(words)
# Create your dictionary that maps vocab words to integers here
vocab_to_int = {word: i + 1 for i, word in enumerate(unique)}
# Convert the reviews to integers, same shape as reviews list, but with integers
reviews_ints = [[vocab_to_int[word] for word in words.split()] for words in reviews]
# Convert labels to 1s and 0s for 'positive' and 'negative'
labels = [1 if label == "positive" else 0 for label in labels.split()]
from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
# Filter out that review with 0 length, keeping the labels aligned with the reviews
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) > 0]
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = np.array([labels[ii] for ii in non_zero_idx])
seq_len = 200
features = np.array([[0 for i in range(0, seq_len - len(review))] + review if len(review) < seq_len \
else review[0:seq_len] for review in reviews_ints])
features[:10,:100]
split_frac = 0.8
split_idx = int(len(features)*0.8)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
n_words = len(vocab_to_int)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
    # One possible solution to the exercise (TF 1.x placeholders):
    inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
    labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
    # One possible solution: uniform random embeddings, looked up by word id.
    # n_words + 1 rows cover the 1-based word ids plus the 0 padding id.
    embedding = tf.Variable(tf.random_uniform((n_words + 1, embed_size), -1, 1))
    embed = tf.nn.embedding_lookup(embedding, inputs_)
with graph.as_default():
    # Your basic LSTM cell (one possible solution, TF 1.x API)
    lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    # Add dropout to the cell
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
with graph.as_default():
    outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
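# Illustrative usage of get_batches (not in the original): peek at the first batch
x_batch, y_batch = next(get_batches(train_x, train_y, batch_size))
print(x_batch.shape, len(y_batch))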
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Terminology
Step2: Import the Good Stuff
Step3: Feature Exploration with RadViz
Step4: Setosas tend to have the largest septal-width. This can could be a great predictor.
Step5: Try the Covariance Visualizer
Step6: This covariance chart is not intereptatble as they don't have labels. Also there shouldn't be half numbers in labels.
Step7: This clearly demonstrates the separation between features - especially petal_length and petal_width. One concern is that this demonstraction data might be obsured by the scaling of the features and add noise to the intepretation.
Step8: The scaled dataset makes it easier to see the separation between classes for each of the features.
Step9: Note
Step10: Model Selection
|
<ASSISTANT_TASK:>
Python Code:
# read the iris data into a DataFrame
import pandas as pd
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
col_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
iris = pd.read_csv(url, header=None, names=col_names)
iris.head()
# map each iris species to a number
iris['species_num'] = iris.species.map({'Iris-setosa':0, 'Iris-versicolor':1, 'Iris-virginica':2})
import yellowbrick as yb
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (12, 8)
from yellowbrick.features.rankd import Rank2D
from yellowbrick.features.radviz import RadViz
from yellowbrick.features.pcoords import ParallelCoordinates
# Specify the features of interest and the classes of the target
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.as_matrix()
visualizer = RadViz(classes=classes, features=features)
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
# Specify the features of interest and the classes of the target
iris_subset = iris[iris.species_num!=0]
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
classes = ['Iris-setosa','Iris-versicolor', 'Iris-virginica'] # but have to leave in more than two classes
# Extract the numpy arrays from the data frame
X = iris_subset[features].as_matrix()
y = iris_subset.species_num.as_matrix()
assert y.shape[0] == X.shape[0]
visualizer = RadViz(classes=classes, features=features)
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
# Specify the features of interest and the classes of the target
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.as_matrix()
visualizer = Rank2D(features=features, algorithm='covariance')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.values
assert y.shape[0] == X.shape[0]
visualizer = ParallelCoordinates(classes=classes, features=features)
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
from sklearn import preprocessing
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
X_scaled = preprocessing.scale(X)
y = iris.species_num.values
assert y.shape[0] == X.shape[0]
visualizer = ParallelCoordinates(classes=classes, features=features)
visualizer.fit(X_scaled, y) # Fit the data to the visualizer
visualizer.transform(X_scaled) # Transform the scaled data
visualizer.poof()
# Classifier Evaluation Imports
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ClassificationReport, ClassBalance
features = ['petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.as_matrix()
assert y.shape[0] == X.shape[0]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
assert y_train.shape[0] == X_train.shape[0]
assert y_test.shape[0] == X_test.shape[0]
# Instantiate the classification model and visualizer
bayes = GaussianNB()
visualizer = ClassificationReport(bayes, classes=classes)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
visualizer
# Classifier Evaluation Imports
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ClassificationReport, ClassBalance
features = ['petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.as_matrix()
assert y.shape[0] == X.shape[0]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
assert y_train.shape[0] == X_train.shape[0]
assert y_test.shape[0] == X_test.shape[0]
# Instantiate the classification model and visualizer
bayes = MultinomialNB()
visualizer = ClassificationReport(bayes)  # classes=classes
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
features = ['petal_length', 'petal_width']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
# Extract the numpy arrays from the data frame
X = iris[features].as_matrix()
y = iris.species_num.as_matrix()
assert y.shape[0] == X.shape[0]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
assert y_train.shape[0] == X_train.shape[0]
assert y_test.shape[0] == X_test.shape[0]
test = pd.DataFrame(y_test, columns=['species'])
test.species.value_counts() # The test train split provides unbalanced classes
from sklearn.ensemble import RandomForestClassifier
from yellowbrick.classifier import ClassificationReport
# Instantiate the classification model and visualizer
forest = RandomForestClassifier()
visualizer = ClassBalance(forest, classes=classes)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Often we'll have a list of things and want to apply the same transformation to each item in the list.
Step2: For example, you may want to cube every item.
Step3: Exercises
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
x = range(0,10)
x
def cube(num):
return num ** 3
for item in x:
print(cube(item))
new_list = []
for item in x:
new_list.append(cube(item))
print(new_list)
map_list = map(cube, x)
print(list(map_list))
fx = map(float, range(10))
print(list(fx))
fx = map(float, range(10)) # have to create it again in python 3
print(list(map(int, fx)))
from math import sqrt
print(list(map(sqrt, x)))
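# map also accepts an inline lambda -- a minimal extra example (not in the original)
print(list(map(lambda n: n ** 3, x)))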
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can visualize the process to see how it works.
Step2: Finally, let's see how our estimate gets better as we increase $n$. We'll do this by computing the estimate for $\pi$ at each step and plotting that estimate to see how it converges.
|
<ASSISTANT_TASK:>
Python Code:
# Import libraries
import math
import numpy as np
import matplotlib.pyplot as plt
in_circle = 0
outside_circle = 0
n = 10 ** 4
# Draw many random points
X = np.random.rand(n)
Y = np.random.rand(n)
for i in range(n):
if X[i]**2 + Y[i]**2 > 1:
outside_circle += 1
else:
in_circle += 1
area_of_quarter_circle = float(in_circle)/(in_circle + outside_circle)
pi_estimate = area_of_circle = area_of_quarter_circle * 4
pi_estimate
# Plot a circle for reference
circle1=plt.Circle((0,0),1,color='r', fill=False, lw=2)
fig = plt.gcf()
fig.gca().add_artist(circle1)
# Set the axis limits so the circle doesn't look skewed
plt.xlim((0, 1.8))
plt.ylim((0, 1.2))
plt.scatter(X, Y)
in_circle = 0
outside_circle = 0
n = 10 ** 3
# Draw many random points
X = np.random.rand(n)
Y = np.random.rand(n)
# Make a new array
pi = np.ndarray(n)
for i in range(n):
if X[i]**2 + Y[i]**2 > 1:
outside_circle += 1
else:
in_circle += 1
area_of_quarter_circle = float(in_circle)/(in_circle + outside_circle)
pi_estimate = area_of_circle = area_of_quarter_circle * 4
pi[i] = pi_estimate
plt.plot(range(n), pi)
plt.xlabel('n')
plt.ylabel('pi estimate')
plt.plot(range(n), [math.pi] * n)
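# Illustrative check (not in the original): absolute error of the final estimate
print(abs(pi[-1] - math.pi))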
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this model, D-Fructose-6-phosphate is an essential metabolite. We will remove all the reactions using it and add them to a separate model.
Step2: Now, because of these gaps, the model won't grow.
Step3: We will use GrowMatch to add back the minimal number of reactions from this set of "universal" reactions (in this case just the ones we removed) to allow it to grow.
Step4: We can obtain multiple possible reaction sets by having the algorithm go through multiple iterations.
|
<ASSISTANT_TASK:>
Python Code:
import cobra.test
model = cobra.test.create_test_model("salmonella")
# remove some reactions and add them to the universal reactions
Universal = cobra.Model("Universal_Reactions")
for i in [i.id for i in model.metabolites.f6p_c.reactions]:
reaction = model.reactions.get_by_id(i)
Universal.add_reaction(reaction.copy())
reaction.remove_from_model()
model.optimize().f
cobra.flux_analysis.growMatch(model, Universal)
result = cobra.flux_analysis.growMatch(model, Universal, iterations=4)
for i, entries in enumerate(result):
print("---- Run %d ----" % (i + 1))
for e in entries:
print(e.id)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Compute and Save<a id="compute"></a>
Step2: Statistics???<a id="statistics"></a>
Step3: Plot File<a id="plot"></a>
Step4: Plot All Items
Step5: Computing Test<a id="test"></a>
Step6: Compute and Plot All Items
|
<ASSISTANT_TASK:>
Python Code:
import json
from utils import load_items
with open('parameters.json', 'r') as infile:
params = json.load(infile)
RESIZE_X = params['resize']['x']
RESIZE_Y = params['resize']['y']
ITEM_FOLDER = params['item_folder']
items = load_items(ITEM_FOLDER)
import cv2, glob
from utils import imread_rgb, imread_gray, compute_colors
def worker(item):
folder = ITEM_FOLDER + '/' + item + '/'
files = glob.glob(folder + '*.png')
for filename in files:
image_RGB = imread_rgb(filename)
if not image_RGB is None:
image_RGB = cv2.resize(image_RGB,(RESIZE_X,RESIZE_Y))
file_mask = filename[:-4] + '_mask.pgm'
mask = imread_gray(file_mask)
hist, cc = compute_colors(image_RGB, mask)
            # use a list comprehension so the result is JSON-serializable in Python 3
            dominant_colors = {'hist': hist,
                               'cluster_centers': [list(t) for t in cc]}
with open(filename[:-4] + '_dc.json', 'w') as outfile:
json.dump(dominant_colors, outfile)
%%time
from multiprocessing import Pool
print('Computing Dominant colors of images')
print('* resized to %d x %d' % (RESIZE_X,RESIZE_Y))
print('* LAB space')
print('* KMeans')
#print('* MiniBatchKMeans')
pool_size = 6
pool = Pool(pool_size)
result = []
for item in items:
result.append( pool.apply_async(worker, (item,)) )
pool.close()
pool.join()
for r in result:
r.get()
import glob
from utils import ...
item_view = []
VVV = []
for item in items:
folder = ITEM_FOLDER + '/' + item + '/'
files = glob.glob(folder + '*_XXX.YYY')
for filename in files:
...
item_view.append(filename)
VVV.append(len(des))
from matplotlib import pyplot as plt
%matplotlib inline
plt.hist(VVV,bins=60);
[(ns, str(iv.split('/')[-1][:-9])) for ns, iv in sorted(zip(VVV,item_view), reverse=True) if ns>2000]
[(ns, str(iv.split('/')[-1][:-9])) for ns, iv in sorted(zip(VVV,item_view), reverse=True) if ns<50]
from matplotlib import pyplot as plt
%matplotlib inline
import cv2, numpy as np
from ipywidgets import interact
from utils import imread_rgb, plot_colors
def load_and_plot(item,view):
try:
prefix = ITEM_FOLDER + '/' + item + '/' + item + '_' + view
filename = prefix + '.png'
image_RGB = imread_rgb(filename)
if not image_RGB is None:
with open(filename[:-4] + '_dc.json', 'r') as infile:
dc = json.load(infile)
hist = dc['hist']
cc = dc['cluster_centers']
bar = plot_colors(hist, cc)
plt.subplot(121); plt.imshow(image_RGB); plt.axis('off');
plt.subplot(122); plt.imshow(bar); plt.axis('off');
except (IOError, OSError):
print('File not found')
views = ['top_01','top-side_01','top-side_02','bottom_01','bottom-side_01','bottom-side_02']
interact(load_and_plot,item=items,view=views);
for item in items:
for view in views:
print(item + '_' + view)
load_and_plot(item,view)
plt.show()
from matplotlib import pyplot as plt
%matplotlib inline
import cv2, numpy as np
from ipywidgets import interact
from utils import imread_rgb, imread_gray, compute_colors, plot_colors
def compute_and_plot(item,view):
prefix = ITEM_FOLDER + '/' + item + '/' + item + '_' + view
filename = prefix + '.png'
image_RGB = imread_rgb(filename)
if not image_RGB is None:
image_RGB = cv2.resize(image_RGB,(RESIZE_X,RESIZE_Y))
filename = prefix + '_mask.pgm'
mask = imread_gray(filename)
plt.subplot(121); plt.imshow(image_RGB); plt.axis('off');
hist, cc = compute_colors(image_RGB, mask)
bar = plot_colors(hist, cc)
plt.subplot(122); plt.imshow(bar); plt.axis('off'); plt.title(item + '_' + view);
views = ['top_01','top-side_01','top-side_02','bottom_01','bottom-side_01','bottom-side_02']
interact(compute_and_plot,item=items,view=views);
for item in items:
for view in views:
print(item + '_' + view)
        compute_and_plot(item, view)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Compute SSP projections
Step2: Now let's do EOG. Here we compute an EEG projector, and need to pass
Step3: Apply SSP projections
Step4: Yes this was it. Now MNE will apply the projs on demand at any later stage,
Step5: Looks cool right? It is however often not clear how many components you
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import mne
from mne.datasets import sample
from mne.preprocessing import compute_proj_ecg, compute_proj_eog
# getting some data ready
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
projs, events = compute_proj_ecg(raw, n_grad=1, n_mag=1, n_eeg=0, average=True)
print(projs)
ecg_projs = projs[-2:]
mne.viz.plot_projs_topomap(ecg_projs)
projs, events = compute_proj_eog(raw, n_grad=1, n_mag=1, n_eeg=1, average=True)
print(projs)
eog_projs = projs[-3:]
mne.viz.plot_projs_topomap(eog_projs, info=raw.info)
raw.info['projs'] += eog_projs + ecg_projs
events = mne.find_events(raw, stim_channel='STI 014')
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
# this can be highly data dependent
event_id = {'auditory/left': 1}
epochs_no_proj = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5,
proj=False, baseline=(None, 0), reject=reject)
epochs_no_proj.average().plot(spatial_colors=True, time_unit='s')
epochs_proj = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5, proj=True,
baseline=(None, 0), reject=reject)
epochs_proj.average().plot(spatial_colors=True, time_unit='s')
evoked = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5,
proj='delayed', baseline=(None, 0),
reject=reject).average()
# set time instants in seconds (from 50 to 150ms in a step of 10ms)
times = np.arange(0.05, 0.15, 0.01)
fig = evoked.plot_topomap(times, proj='interactive', time_unit='s')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction to variables
Step2: Creating variables
Step3: Variables look and act like tensors and are, in fact, data structures backed by tf.Tensor. Like tensors, they have a dtype and a shape, and they can be exported to NumPy.
Step4: A variable cannot be reshaped, but most tensor operations work on variables as expected.
Step5: As noted above, variables are backed by tensors. You can reassign the tensor using tf.Variable.assign. Calling assign does not (usually) allocate a new tensor; instead, the existing tensor's memory is reused.
Step6: When you use a variable like a tensor in operations, you usually operate on the backing tensor.
Step7: Lifecycle, naming, and watching
Step8: Variable names are preserved when saving and loading models. By default, variables in models acquire unique names automatically, so you don't need to assign them yourself unless you want to.
Step9: Placing variables and tensors
Step10: You can set the location of a variable or tensor on one device and perform the computation on another device. This introduces delay, because the data has to be copied between devices.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
# Uncomment to see where your variables get placed (see below)
# tf.debugging.set_log_device_placement(True)
my_tensor = tf.constant([[1.0, 2.0], [3.0, 4.0]])
my_variable = tf.Variable(my_tensor)
# Variables can be all kinds of types, just like tensors
bool_variable = tf.Variable([False, False, False, True])
complex_variable = tf.Variable([5 + 4j, 6 + 1j])
print("Shape: ",my_variable.shape)
print("DType: ",my_variable.dtype)
print("As NumPy: ", my_variable.numpy)
print("A variable:",my_variable)
print("\nViewed as a tensor:", tf.convert_to_tensor(my_variable))
print("\nIndex of highest value:", tf.argmax(my_variable))
# This creates a new tensor; it does not reshape the variable.
print("\nCopying and reshaping: ", tf.reshape(my_variable, ([1,4])))
a = tf.Variable([2.0, 3.0])
# This will keep the same dtype, float32
a.assign([1, 2])
# Not allowed as it resizes the variable:
try:
a.assign([1.0, 2.0, 3.0])
except Exception as e:
print(f"{type(e).__name__}: {e}")
a = tf.Variable([2.0, 3.0])
# Create b based on the value of a
b = tf.Variable(a)
a.assign([5, 6])
# a and b are different
print(a.numpy())
print(b.numpy())
# There are other versions of assign
print(a.assign_add([2,3]).numpy()) # [7. 9.]
print(a.assign_sub([7,9]).numpy()) # [0. 0.]
# Create a and b; they have the same value but are backed by different tensors.
a = tf.Variable(my_tensor, name="Mark")
# A new variable with the same name, but different value
# Note that the scalar add is broadcast
b = tf.Variable(my_tensor + 1, name="Mark")
# These are elementwise-unequal, despite having the same name
print(a == b)
step_counter = tf.Variable(1, trainable=False)
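# Gradients are tracked for trainable variables only, so `step_counter` above
# is ignored by tf.GradientTape. A minimal sketch (illustrative):
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x
print(tape.gradient(y, x))  # tf.Tensor(6.0, shape=(), dtype=float32)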
with tf.device('CPU:0'):
# Create some tensors
a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)
print(c)
with tf.device('CPU:0'):
a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.Variable([[1.0, 2.0, 3.0]])
with tf.device('GPU:0'):
# Element-wise multiply
k = a * b
print(k)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Rn2 distribution
Step2: Testing for rn2(x) == 0 gives a $1/x$ probability
Step3: Rne distribution
Step4: In the NetHack game, the player's experience is used as the default value of the truncation parameter...
Step5: Let's check what this page says about rne(4)
Step6: Seems true !
Step7: Examples
Step8: For x=350
|
<ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -v -m -p numpy,matplotlib
import random
import numpy as np
import matplotlib.pyplot as plt
def rn2(x):
return random.randint(0, x-1)
np.asarray([rn2(10) for _ in range(100)])
from collections import Counter
Counter([rn2(10) == 0 for _ in range(100)])
Counter([rn2(10) == 0 for _ in range(1000)])
Counter([rn2(10) == 0 for _ in range(10000)])
def rne(x, truncation=5):
truncation = max(truncation, 1)
tmp = 1
while tmp < truncation and rn2(x) == 0:
tmp += 1
return tmp
np.asarray([rne(3) for _ in range(50)])
plt.hist(np.asarray([rne(3) for _ in range(10000)]), bins=5)
np.asarray([rne(4, truncation=10) for _ in range(50)])
plt.hist(np.asarray([rne(4, truncation=10) for _ in range(10000)]), bins=10)
ref_table = {1: 3/4, 2: 3/16, 3: 3/64, 4: 3/256, 5: 1/256}
ref_table
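# The reference values follow a truncated geometric law (illustrative check):
# P(k) = (1/x)**(k-1) * (1 - 1/x) for k < truncation, and P(truncation) = (1/x)**(truncation - 1)
x_, trunc = 4, 5
closed_form = {k: (1 / x_) ** (k - 1) * (1 - 1 / x_) for k in range(1, trunc)}
closed_form[trunc] = (1 / x_) ** (trunc - 1)
closed_form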
N = 100000
table = Counter([rne(4, truncation=5) for _ in range(N)])
for k in table:
table[k] /= N
table = dict(table)
table
rel_diff = lambda x, y: abs(x - y) / x
for k in ref_table:
x, y = ref_table[k], table[k]
r = rel_diff(x, y)
print(f"For k={k}: relative difference is {r:.3g} between {x:.3g} (expectation) and {y:.3g} (with N={N} samples).")
def rnz(i, truncation=10):
x = i
tmp = 1000
tmp += rn2(1000)
tmp *= rne(4, truncation=truncation)
flip = rn2(2)
if flip:
x *= tmp
x /= 1000
else:
x *= 1000
x /= tmp
return int(x)
np.asarray([rnz(3) for _ in range(100)])
np.asarray([rnz(3, truncation=10) for _ in range(100)])
np.asarray([rnz(350) for _ in range(100)])
_ = plt.hist(np.asarray([rnz(350) for _ in range(100000)]), bins=200)
np.asarray([rnz(350, truncation=10) for _ in range(100)])
_ = plt.hist(np.asarray([rnz(350, truncation=10) for _ in range(10000)]), bins=200)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can now do nearly everything that can be done with a regular "pg" type ginga web widget.
Step2: Now press and release space bar in the viewer and draw the shape. Or you can use the right mouse button.
Step3: Now you can click on the object to select it and edit it. Editing control points should appear
|
<ASSISTANT_TASK:>
Python Code:
# create a Jupyter image that will be our display surface
# format can be 'jpeg' or 'png'; specify width and height to set viewer size
# PNG will be a little clearer, especially with overlaid graphics, but
# JPEG is faster to update
import ipywidgets as widgets
jup_img = widgets.Image(format='jpeg', width=500, height=500)
# Boilerplate to create a Ginga viewer connected to this widget
# this could be simplified, or hidden behind a class or convenience
# method
# NOTE: you don't have to specify a log file--and if you are not interested
# in the log just specify null=True for a null logger
# level=10 will give you the most debugging information
from ginga.misc.log import get_logger
logger = get_logger("my viewer", log_stderr=False, log_file='/tmp/ginga.log', level=40)
from ginga.web.jupyterw.ImageViewJpw import EnhancedCanvasView
v1 = EnhancedCanvasView(logger=logger)
# set our linkage between the jupyter widget at ginga
v1.set_widget(jup_img)
# enable all possible keyboard and pointer operations
bd = v1.get_bindings()
bd.enable_all(True)
coordinates = widgets.HTML('<h3>coordinates show up here</h3>')
# callback to display position in RA/DEC deg
def mouse_move(viewer, button, data_x, data_y, w):
image = viewer.get_image()
if image is not None:
ra, dec = image.pixtoradec(data_x, data_y)
w.value = "cursor at %f, %f" % (ra, dec)
v1.add_callback('cursor-changed', mouse_move, coordinates)
%%javascript
/* some magic to keep the cell contents from scrolling
(when we embed the viewer)
*/
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
# embed the viewer here
widgets.VBox([jup_img, coordinates])
# Load a FITS file. You should see the embedded image update.
v1.load_fits("/home/eric/testdata/SPCAM/SUPA01118766.fits")
# grab a screenshot.
v1.show()
# add a canvas to the image and set the draw type
canvas = v1.add_canvas()
canvas.set_drawtype('ellipse', color='orange', fill=True, fillcolor='cyan', fillalpha=0.3)
v1.show()
# put the canvas in edit mode
canvas.enable_edit(True)
canvas.set_draw_mode('edit')
canvas.get_objects()
# let's get the object that we drew--what is it's location?
ell = canvas.get_objects()[0]
ell.x, ell.y
ell.xradius, ell.yradius
# delete everything on the canvas that we drew
canvas.delete_all_objects()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import Django Model for Code Lexicon Information
Step2: DATA FETCHING CODE
Step3: Lexical Analyzer
Step4: <a name="data_analysis"></a>
Step5: Jaccard Coefficient Statistics
Step6: TFIDF Statistics
Step7: (TEST) Spare Analysis on the resulting structures (IR Index) --> SKIP to <a href="#tf_stats">Tf Statistics</a>
Step8: JHotDraw
Step9: <a name="tf_stats"></a>
Step10: Optimization
Step11: TFIDF Statistics separated by Agreement Rates
Step12: TFIDF Values Distribution (Separated by Agreement Rates)
|
<ASSISTANT_TASK:>
Python Code:
%load preamble_directives.py
from source_code_analysis.models import CodeLexiconInfo
from lexical_analysis import LINSENnormalizer
from lexical_analysis import LexicalAnalyzer
from source_code_analysis.models import SoftwareProject
target_sw_project = SoftwareProject.objects.get(name__iexact='CoffeeMaker')
# Use RelatedManager to get all the code methods associated to the target project
code_methods = target_sw_project.code_methods.all()
total_methods = code_methods.count()
coefficients = list()
for i, method in enumerate(code_methods):
print('Analyzing Method {0} out of {1}: {2}'.format(i+1, total_methods, method.method_name))
analyzer = LexicalAnalyzer(method)
analyzer.analyse_textual_information()
coefficients.append(analyzer.code_lexical_info.jaccard_coeff)
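# For reference, a minimal sketch of the Jaccard coefficient the analyzer is
# assumed to compute between the comment and code term sets (hypothetical
# helper, not part of LexicalAnalyzer's API):
def jaccard(terms_a, terms_b):
    a, b = set(terms_a), set(terms_b)
    return len(a & b) / len(a | b) if a | b else 0.0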
from scipy import median
from scipy import mean
from scipy import var, std
import numpy as np
from source_code_analysis.models import SoftwareProject
projects = list()
projects.append(SoftwareProject.objects.get(name__iexact='CoffeeMaker', version__exact='1.0'))
projects.append(SoftwareProject.objects.get(name__iexact='Jfreechart', version__exact='0.6.0'))
projects.append(SoftwareProject.objects.get(name__iexact='Jfreechart', version__exact='0.7.1'))
projects.append(SoftwareProject.objects.get(name__iexact='JHotDraw', version__exact='7.4.1'))
print(projects)
for project in projects:
code_methods = project.code_methods.all()
coefficients = list()
for method in code_methods:
# Check that this method has no "wrong_association"
n_evaluations = method.agreement_evaluations.count()
n_eval_wrong_assocation = method.agreement_evaluations.filter(wrong_association=True).count()
if n_evaluations == n_eval_wrong_assocation:
# if **all** the evaluations for the current method mark it as a wrong_association
# exclude it from the statistics
continue
clexicon_info = method.lexical_info
coefficients.append(clexicon_info.jaccard_coeff)
coeff = np.array(coefficients)
print('{proj} ({ver}) & {total} & {min:.3} & {max:.3} & {median:.3} & {mean:.3} & {variance:.3} & {devstd:.3} \\\\'.format(
proj = project.name.title(), ver=project.version,
total=coeff.size, min=coeff.min(), max=coeff.max(),
median=median(coeff), mean=coeff.mean(),
variance=var(coeff), devstd=std(coeff)))
# Import Scikit-Learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel
for project in projects:
# Populate the Doc Collection
document_collection = list()
# Get Methods
code_methods = project.code_methods.all()
for method in code_methods:
# Check that this method has no "wrong_association"
n_evaluations = method.agreement_evaluations.count()
n_eval_wrong_assocation = method.agreement_evaluations.filter(wrong_association=True).count()
if n_evaluations == n_eval_wrong_assocation:
# if **all** the evaluations for the current method mark it as a wrong_association
# exclude it from the statistics
continue
clexicon_info = method.lexical_info
document_collection.append(clexicon_info.normalized_comment)
document_collection.append(clexicon_info.normalized_code)
vectorizer = TfidfVectorizer(input='content', sublinear_tf=True, lowercase=False)
tfidf_values = vectorizer.fit_transform(document_collection)
#cosine_sim_vals = list()
#rows, cols = tfidf_values.shape
#for i in range(0, rows, 2):
# cosine_sim_vals.append(tfidf_values[i].dot(tfidf_values[i+1].T)[0,0])
#cosine_sim_vals = np.array(cosine_sim_vals)
comments, code = tfidf_values[::2], tfidf_values[1::2]
kernel_matrix = linear_kernel(comments, code) # arrays are still L2 (length) normalized
cosine_sim_vals = np.diag(kernel_matrix)
print('{proj} ({ver}) & {tot} & {min:.3} & {max:.3} & {med:.3} & {mu:.3} & {var:.3} & {sigma:.3} \\\\'.format(
proj=project.name.title(), ver=project.version, tot=cosine_sim_vals.size, min=cosine_sim_vals.min(),
max=cosine_sim_vals.max(), med=median(cosine_sim_vals), mu=cosine_sim_vals.mean(),
var=var(cosine_sim_vals), sigma=std(cosine_sim_vals)))
coff_maker = projects[0]
methods = coff_maker.code_methods.all()
methods = methods[0:2]
docs = list()
for method in methods:
lex_info = method.lexical_info
docs.append(lex_info.normalized_comment)
docs.append(lex_info.normalized_code)
print('Methods: ', len(methods))
print('Docs: ', len(docs))
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(input='content', sublinear_tf=True, lowercase=False)
X = vectorizer.fit_transform(docs)
vectorizer.get_feature_names()
x = X[0].toarray()
from scipy.sparse import issparse
print(issparse(x))
x = x.ravel()
np.where(x>0)
np.take(x, np.where(x>0))
x[np.where(x>0)]
print(vectorizer.get_feature_names())
docs[0]
jhotdraw = projects[-1]
methods = jhotdraw.code_methods.all()
methods = methods[0:2]
docs = list()
for method in methods:
lex_info = method.lexical_info
docs.append(lex_info.normalized_comment)
docs.append(lex_info.normalized_code)
print('Methods: ', len(methods))
print('Docs: ', len(docs))
docs[0], docs[1]
methods[0].lexical_info.normalized_comment
methods[0].lexical_info.normalized_code
methods[0].example.target
# Import Scikit-Learn
from sklearn.feature_extraction.text import TfidfVectorizer
## TODO: See the following "Optimization" subsections to see tests
from sklearn.metrics.pairwise import linear_kernel # array are still L2 normalized
for project in projects:
# Get Methods
code_methods = project.code_methods.all()
# Populate the Doc Collection
document_collection = list()
for method in code_methods:
# Check that this method has no "wrong_association"
n_evaluations = method.agreement_evaluations.count()
n_eval_wrong_assocation = method.agreement_evaluations.filter(wrong_association=True).count()
if n_evaluations == n_eval_wrong_assocation:
# if **all** the evaluations for the current method mark it as a wrong_association
# exclude it from the statistics
continue
clexicon_info = method.lexical_info
document_collection.append(clexicon_info.normalized_comment)
document_collection.append(clexicon_info.normalized_code)
vectorizer = TfidfVectorizer(input='content', sublinear_tf=False, lowercase=False, use_idf=False)
tf_values = vectorizer.fit_transform(document_collection)
#cosine_sim_vals = list()
#rows, cols = tf_values.shape
#for i in range(0, rows, 2):
# cosine_sim_vals.append(tf_values[i].dot(tf_values[i+1].T)[0,0])
#cosine_sim_vals = np.array(cosine_sim_vals)
comments, code = tf_values[::2], tf_values[1::2]
kernel_matrix = linear_kernel(comments, code)
cosine_sim_vals = np.diag(kernel_matrix)
print('{proj} ({ver}) & {total} & {min:.3} & {max:.3} & {median:.3} & {mean:.3} & {variance:.3} & {devstd:.3} \\\\'.format(
proj = project.name.title(), ver=project.version,
total=cosine_sim_vals.size,
min=cosine_sim_vals.min(),
max=cosine_sim_vals.max(),
median=median(cosine_sim_vals),
mean=cosine_sim_vals.mean(),
variance=var(cosine_sim_vals),
devstd=std(cosine_sim_vals)))
from sklearn.feature_extraction.text import TfidfVectorizer
# Target Project (as this is just an example)
project = projects[0]
# Get Methods
code_methods = project.code_methods.all()
# Populate the Doc Collection
document_collection = list()
for method in code_methods:
clexicon_info = method.lexical_info
document_collection.append(clexicon_info.normalized_comment)
document_collection.append(clexicon_info.normalized_code)
vectorizer = TfidfVectorizer(input='content', sublinear_tf=True, lowercase=False)
tfidf_values = vectorizer.fit_transform(document_collection)
rows, cols = tfidf_values.shape
cosine_sim_vals = tfidf_values[0].dot(tfidf_values[1].T)[0,0]
for i in range(2, rows, 2):
cosine_sim_vals = np.vstack((cosine_sim_vals, tfidf_values[i].dot(tfidf_values[i+1].T)[0,0]))
cosine_sim_vals.ravel()
alt_method = np.einsum('ij,ij->i', tfidf_values[::2,].toarray(), tfidf_values[1::2,].toarray())
alt_method
alt_method.shape
cosine_sim_vals.ravel().shape
np.testing.assert_allclose(cosine_sim_vals.ravel(), alt_method)
from sklearn.metrics.pairwise import cosine_similarity, linear_kernel
comments, code = tfidf_values[::2], tfidf_values[1::2]
print(comments.shape, code.shape)
kernel = linear_kernel(comments, code)
np.diag(kernel)
from numpy.testing import assert_array_almost_equal
assert_array_almost_equal(alt_method, np.diag(kernel))
alt_method
cossim = cosine_similarity(comments, code)
np.diag(cossim)
assert_array_almost_equal(alt_method, np.diag(cossim))
assert_array_almost_equal(np.diag(cossim), np.diag(kernel))
from sklearn.feature_extraction.text import TfidfVectorizer
from evaluations import Judge
judges_combinations = (('leonardo.nole', 'rossella.linsalata'),
('leonardo.nole', 'antonio.petrone'),
('leonardo.nole', 'antonio.petrone'),
('leonardo.nole', 'rossella.linsalata'),)
CODES_Labels = ('NC', 'DK', 'CO')
from collections import defaultdict
stats_results = defaultdict(list)
for pno, project in enumerate(projects):
# Get Methods
code_methods = project.code_methods.all()
# Populate the Doc Collection
document_collection = list()
method_ids_map = dict() # Map (dict) to store the association method.pk --> Row index in Tfidf Matrix
for mno, method in enumerate(code_methods):
clexicon_info = method.lexical_info
document_collection.append(clexicon_info.normalized_comment)
document_collection.append(clexicon_info.normalized_code)
method_ids_map[method.id] = mno*2
vectorizer = TfidfVectorizer(input='content', sublinear_tf=True, lowercase=False)
tfidf_values = vectorizer.fit_transform(document_collection)
j1_usrname, j2_usrname = judges_combinations[pno]
j1 = Judge(j1_usrname, project.name, project.version)
j2 = Judge(j2_usrname, project.name, project.version)
j1_evals = j1.three_codes_evaluations
j2_evals = j2.three_codes_evaluations
project_stats = list()
for code in range(3):
j1_evals_code = j1_evals[code]
j2_evals_code = j2_evals[code]
method_ids = j1_evals_code.intersection(j2_evals_code)
cosine_sim_vals = list()
for mid in method_ids:
i = method_ids_map[mid]
cosine_sim_vals.append(tfidf_values[i].dot(tfidf_values[i+1].T)[0,0])
cosine_sim_vals = np.array(cosine_sim_vals)
project_stats.append(cosine_sim_vals)
for code in range(3):
vals = project_stats[code]
label = CODES_Labels[code]
if vals.size > 0:
stats_results[label].append('{proj} ({ver}) & {total} & {min:.3} & {max:.3} & {median:.3} & {mean:.3} & {variance:.3} & {devstd:.3} \\\\'.format(
proj = project.name.title(),
ver=project.version,
total=vals.size,
min=vals.min(),
max=vals.max(),
                median=np.median(vals),
                mean=vals.mean(),
                variance=np.var(vals),
                devstd=np.std(vals)))
else:
stats_results[label].append('{proj} ({ver}) & \multicolumn{{7}}{{c|}}{{N.A.}} \\\\'.format(proj = project.name.title(),
ver=project.version))
for label in stats_results:
print('\n{0}\n'.format(label))
for value in stats_results[label]:
print(value)
judges_combinations = (('leonardo.nole', 'rossella.linsalata'),
('leonardo.nole', 'antonio.petrone'),
('leonardo.nole', 'antonio.petrone'),
('leonardo.nole', 'rossella.linsalata'),)
CODES_Labels = ('NC', 'DK', 'CO')
from collections import defaultdict
stats_results_paths = defaultdict(list)
pwd_out = !pwd
current_dir = pwd_out[0]
folder_path = os.path.join(current_dir, 'distributions_per_rate_tfidf')
if not os.path.exists(folder_path):
os.makedirs(folder_path)
for pno, project in enumerate(projects):
# Get Methods
code_methods = project.code_methods.all()
# Populate the Doc Collection
document_collection = list()
method_ids_map = dict() # Map (dict) to store the association method.pk --> Row index in Tfidf Matrix
for mno, method in enumerate(code_methods):
clexicon_info = method.lexical_info
document_collection.append(clexicon_info.normalized_comment)
document_collection.append(clexicon_info.normalized_code)
method_ids_map[method.id] = mno*2
vectorizer = TfidfVectorizer(input='content', sublinear_tf=True, lowercase=False)
tfidf_values = vectorizer.fit_transform(document_collection)
j1_usrname, j2_usrname = judges_combinations[pno]
j1 = Judge(j1_usrname, project.name, project.version)
j2 = Judge(j2_usrname, project.name, project.version)
j1_evals = j1.three_codes_evaluations
j2_evals = j2.three_codes_evaluations
project_stats = list()
for code in range(3):
j1_evals_code = j1_evals[code]
j2_evals_code = j2_evals[code]
method_ids = j1_evals_code.intersection(j2_evals_code)
cosine_sim_vals = list()
for mid in method_ids:
i = method_ids_map[mid]
cosine_sim_vals.append(tfidf_values[i].dot(tfidf_values[i+1].T)[0,0])
cosine_sim_vals = np.array(cosine_sim_vals)
project_stats.append(cosine_sim_vals)
for code in range(3):
vals = project_stats[code]
label = CODES_Labels[code]
if vals.size > 0:
filename = '{label}_{proj}_({ver})_{total}.txt'.format(label=label,
proj=project.name.title(),
ver=project.version,
total=vals.size)
filepath = os.path.join(folder_path, filename)
np.savetxt(filepath, vals)
stats_results_paths[label].append(filepath)
for label in stats_results_paths:
print('\n{0}\n'.format(label))
for path in stats_results_paths[label]:
print('Saved Filepath:', path)
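# Optional sketch (not in the original notebook): reload one of the saved per-rate
# distributions and plot it, assuming matplotlib is available in this environment.
import matplotlib.pyplot as plt
label, paths = next((k, v) for k, v in stats_results_paths.items() if v)
vals = np.loadtxt(paths[0])
plt.hist(vals, bins=20)
plt.title('Comment/code cosine similarity ({0}, first saved project)'.format(label))
plt.show()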
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Flux Correction
Step7: 3. Key Properties --> Genealogy
Step8: 3.2. CMIP3 Parent
Step9: 3.3. CMIP5 Parent
Step10: 3.4. Previous Name
Step11: 4. Key Properties --> Software Properties
Step12: 4.2. Code Version
Step13: 4.3. Code Languages
Step14: 4.4. Components Structure
Step15: 4.5. Coupler
Step16: 5. Key Properties --> Coupling
Step17: 5.2. Atmosphere Double Flux
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Step19: 5.4. Atmosphere Relative Winds
Step20: 6. Key Properties --> Tuning Applied
Step21: 6.2. Global Mean Metrics Used
Step22: 6.3. Regional Metrics Used
Step23: 6.4. Trend Metrics Used
Step24: 6.5. Energy Balance
Step25: 6.6. Fresh Water Balance
Step26: 7. Key Properties --> Conservation --> Heat
Step27: 7.2. Atmos Ocean Interface
Step28: 7.3. Atmos Land Interface
Step29: 7.4. Atmos Sea-ice Interface
Step30: 7.5. Ocean Seaice Interface
Step31: 7.6. Land Ocean Interface
Step32: 8. Key Properties --> Conservation --> Fresh Water
Step33: 8.2. Atmos Ocean Interface
Step34: 8.3. Atmos Land Interface
Step35: 8.4. Atmos Sea-ice Interface
Step36: 8.5. Ocean Seaice Interface
Step37: 8.6. Runoff
Step38: 8.7. Iceberg Calving
Step39: 8.8. Endoreic Basins
Step40: 8.9. Snow Accumulation
Step41: 9. Key Properties --> Conservation --> Salt
Step42: 10. Key Properties --> Conservation --> Momentum
Step43: 11. Radiative Forcings
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Step45: 12.2. Additional Information
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Step47: 13.2. Additional Information
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Step49: 14.2. Additional Information
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Step51: 15.2. Additional Information
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Step53: 16.2. Additional Information
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Step55: 17.2. Equivalence Concentration
Step56: 17.3. Additional Information
Step57: 18. Radiative Forcings --> Aerosols --> SO4
Step58: 18.2. Additional Information
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Step60: 19.2. Additional Information
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Step62: 20.2. Additional Information
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Step64: 21.2. Additional Information
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Step66: 22.2. Aerosol Effect On Ice Clouds
Step67: 22.3. Additional Information
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Step69: 23.2. Aerosol Effect On Ice Clouds
Step70: 23.3. RFaci From Sulfate Only
Step71: 23.4. Additional Information
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Step73: 24.2. Additional Information
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Step77: 25.4. Additional Information
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Step81: 26.4. Additional Information
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Step83: 27.2. Additional Information
Step84: 28. Radiative Forcings --> Other --> Land Use
Step85: 28.2. Crop Change Only
Step86: 28.3. Additional Information
Step87: 29. Radiative Forcings --> Other --> Solar
Step88: 29.2. Additional Information
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-1', 'sandbox-2', 'toplevel')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dataset
Step3: With the data split, we should start building our model!
Step4: Last epoch results
Step5: Weight vector
Step6: Code Description
Step7: The weight vector of the SVM including the bias term after 100 epochs is $(-0.012, -0.0144, 0.024, -0.029)$.
|
<ASSISTANT_TASK:>
Python Code:
def __init__(self,lmbd,D):
self.lmbd = lmbd
self.D = D + 1
self.w = [0.] * self.D
def sign(self, x):
return -1. if x <= 0 else 1.
def hinge_loss(self,target,y):
return max(0, 1 - target*y)
def train(self,x,y,alpha):
if y*self.predict(x) < 1:
for i in xrange(len(x)):
self.w[i] = self.w[i] + alpha *( (y*x[i]) + (-2 * (self.lmbd)*self.w[i]) )
else:
for i in xrange(len(x)):
self.w[i] = self.w[i] + alpha * (-2 * (self.lmbd)*self.w[i])
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
%matplotlib inline
from random import shuffle
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/00229/Skin_NonSkin.txt"
data = pd.read_csv(url, header = None, sep='\t')
np.random.seed(1)
msk = np.random.rand(len(data)) < 0.8
data[msk].to_csv('svm-gradient-descent/train.csv', header=False, index=False)
data[~msk].to_csv('svm-gradient-descent/test.csv', header=False, index=False)
class SVM():
    """Implementation of SVM with SGD"""
def __init__(self,lmbd,D):
self.lmbd = lmbd
self.D = D + 1
self.w = [0.] * self.D
def sign(self, x):
return -1. if x <= 0 else 1.
def hinge_loss(self,target,y):
return max(0, 1 - target*y)
def data(self,test=False):
if test:
with open('svm-gradient-descent/test.csv','r') as f:
samples = f.readlines()
for t,row in enumerate(samples):
row = row.replace('\n','')
row = row.split(',')
target = -1.
if row[3] == '1':
target = 1.
del row[3]
x = [float(c) for c in row] + [1.] #inputs + bias
yield t, x,target
else:
with open('svm-gradient-descent/train.csv','r') as f:
samples = f.readlines()
shuffle(samples)
for t,row in enumerate(samples):
row = row.replace('\n','')
row = row.split(',')
target = -1.
if row[3] == '1':
target = 1.
del row[3]
x = [float(c) for c in row] + [1.] #inputs + bias
yield t, x,target
def train(self,x,y,alpha):
if y*self.predict(x) < 1:
for i in xrange(len(x)):
self.w[i] = self.w[i] + alpha *( (y*x[i]) + (-2 * (self.lmbd)*self.w[i]) )
else:
for i in xrange(len(x)):
self.w[i] = self.w[i] + alpha * (-2 * (self.lmbd)*self.w[i])
return self.w
def predict(self,x):
wTx = 0.
for i in xrange(len(x)):
wTx += self.w[i]*x[i]
return wTx
def fit(self):
test_count = 0.
tn = 0.
tp = 0.
total_positive = 0.
total_negative = 0.
accuracy = 0.
loss = 0.
last = 0
for t, x,target in self.data(test=False):
if target == last:
continue
alpha = 1./(self.lmbd*(t+1.))
w = self.train(x,target,alpha)
last = target
for t,x,target in self.data(test=True):
pred = self.predict(x)
loss += self.hinge_loss(target,pred)
pred = self.sign(pred)
if target == 1:
total_positive += 1.
else:
total_negative += 1.
if pred == target:
accuracy += 1.
if pred == 1:
tp += 1.
else:
tn += 1.
loss = loss / (total_positive+total_negative)
acc = accuracy/(total_positive+total_negative)
# print 'Loss', loss, '\nTrue Negatives', tn/total_negative * 100, '%', '\nTrue Positives', tp/total_positive * 100, '%','\nPrecision', accuracy/(total_positive+total_negative) * 100, '%', '\n'
return loss, acc, tp/total_positive,tn/total_negative, w
loss_list = []
acc_list = []
tp_list = []
tn_list = []
w_list = []
for i in range(100):
#print '\nSeed',i
np.random.seed(i)
svm = SVM(1,3)
l,acc,tp,tn,w = svm.fit()
loss_list.append(l)
acc_list.append(acc)
tp_list.append(tp)
tn_list.append(tn)
w_list.append(w)
print 'Loss', sum(loss_list)/len(loss_list)
print 'Accuracy', sum(acc_list)/len(acc_list) * 100, '%'
print 'True Positives', sum(tp_list)/len(tp_list) * 100, '%'
print 'True Negatives',sum(tn_list)/len(tn_list) * 100, '%'
print 'Loss', loss_list[-1]
print 'Accuracy', acc_list[-1] * 100, '%'
print 'True Positives', tp_list[-1] * 100, '%'
print 'True Negatives', tn_list[-1] * 100, '%'
w_list[-1]
w_list[-1]
w_list[-1]
with open('svm-gradient-descent/train.csv','r') as f:
samples = f.readlines()
shuffle(samples)
samples[0:10]
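# Optional sketch (not in the original notebook): score one made-up sample with the
# final weight vector. The three pixel values below are purely illustrative.
w = w_list[-1]
sample = [80., 120., 180., 1.]  # three colour-channel features plus the constant bias input
margin = sum(w_i * x_i for w_i, x_i in zip(w, sample))
print('Raw margin: {0}'.format(margin))
print('Predicted label: {0}'.format(1 if margin > 0 else -1))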
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plotting with parameters
Step2: Then use interact to create a user interface for exploring your function
Step3: In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument
Step4: Use interact to create a UI for plot_sine2.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
def plot_sine1(a, b):
f = plt.figure(figsize=(16,2))
x = np.linspace(0, 4*np.pi, 1000)
plt.plot(x, np.sin(a*x + b))
plt.xlim(right=4*np.pi)
plt.tick_params(direction='out')
plt.xticks([np.pi, 2*np.pi, 3*np.pi, 4*np.pi],['$\pi$','$2\pi$','$3\pi$','$4\pi$'])
plot_sine1(5, 3.4)
interact(plot_sine1, a=(0.0,5.0,0.1), b=(-5.0,5.0,0.1));
assert True # leave this for grading the plot_sine1 exercise
def plot_sine2(a, b, style):
f = plt.figure(figsize=(16,2))
x = np.linspace(0, 4*np.pi, 100)
plt.plot(x, np.sin(a*x + b), style)
plt.xlim(right=4*np.pi)
plt.tick_params(direction='out')
plt.xticks([np.pi, 2*np.pi, 3*np.pi, 4*np.pi],['$\pi$','$2\pi$','$3\pi$','$4\pi$'])
plot_sine2(4.0, -1.0, 'r--')
interact(plot_sine2, a=(0.0,5.0,0.1), b=(-5.0,5.0,0.1), style={'dotted blue line': 'b.', 'black circles': 'ko', 'red triangles': 'r^'});
assert True # leave this for grading the plot_sine2 exercise
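# Optional sketch: `interactive` (imported above alongside `interact`) returns a widget
# object that can be displayed explicitly with `display`; shown here with plot_sine2.
w = interactive(plot_sine2, a=(0.0, 5.0, 0.1), b=(-5.0, 5.0, 0.1),
                style={'dotted blue line': 'b.', 'black circles': 'ko', 'red triangles': 'r^'})
display(w)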
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, let's load the data. This week, we're going to load the Auto MPG data set, which is available online at the UC Irvine Machine Learning Repository. The dataset is in fixed width format, but fortunately this is supported out of the box by pandas' read_fwf function
Step2: Exploratory data analysis
Step3: As the car name is unique for each instance (according to the dataset documentation), it cannot be used to predict the MPG by itself so let's drop it as a feature and use it as the index instead
Step4: According to the documentation, the horsepower column contains a small number of missing values, each of which is denoted by the string '?'. Again, for simplicity, let's just drop these from the data set
Step5: Usually, pandas is smart enough to recognise that a column is numeric and will convert it to the appropriate data type automatically. However, in this case, because there were strings present initially, the value type of the horsepower column isn't numeric
Step6: We can correct this by converting the column values to numbers manually, using pandas' to_numeric function
Step7: As can be seen, the data type of the horsepower column is now float64, i.e. a 64 bit floating point value.
Step8: As can be seen, one hot encoding converts the origin column into separate binary columns, each representing the presence or absence of the given category. Because we're going to use a decision tree regression model, we don't need to worry about the effects of multicollinearity, and so there's no need to drop one of the encoded variable columns as we did in the case of linear regression.
Step9: Print a matrix of pairwise Pearson correlation values
Step10: Let's also create a scatter plot matrix
Step11: Based on the above information, we can conclude the following
Step12: You can find a more detailed description of each parameter in the scikit-learn documentation.
Step13: Our decision tree regression model predicts the MPG with an average error of approximately ±2.32 with a standard deviation of 3.16, which is similar to our final linear regression model from Lab 06. It's also worth noting that we were able to achieve this level of accuracy with very little feature engineering effort. This is because decision tree regression does not rely on the same set of assumptions (e.g. linearity) as linear regression, and so is able to learn from data with less manual tuning.
Step14: Random forest regression
Step15: As before, you can find a more detailed description of each parameter in the scikit-learn documentation.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import GridSearchCV, KFold, cross_val_predict
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
df = pd.read_fwf(url, header=None, names=['mpg', 'cylinders', 'displacement', 'horsepower', 'weight',
'acceleration', 'model year', 'origin', 'car name'])
df.head()
df = df.set_index('car name')
df.head()
df = df[df['horsepower'] != '?']
df.dtypes
df['horsepower'] = pd.to_numeric(df['horsepower'])
# Check the data types again
df.dtypes
df = pd.get_dummies(df, columns=['origin'])
df.head()
df.describe()
df.corr()
pd.plotting.scatter_matrix(df, s=50, hist_kwds={'bins': 10}, figsize=(16, 16));
DecisionTreeRegressor().get_params()
X = df.drop('mpg', axis='columns') # X = features
y = df['mpg'] # y = prediction target
algorithm = DecisionTreeRegressor(random_state=0)
# Build models for different values of min_samples_leaf and min_samples_split
parameters = {
'min_samples_leaf': [1, 10, 20],
'min_samples_split': [2, 10, 20] # Min value is 2
}
# Use inner CV to select the best model
inner_cv = KFold(n_splits=5, shuffle=True, random_state=0) # K = 5
clf = GridSearchCV(algorithm, parameters, cv=inner_cv, n_jobs=-1) # n_jobs=-1 uses all available CPUs = faster
clf.fit(X, y)
# Use outer CV to evaluate the error of the best model
outer_cv = KFold(n_splits=10, shuffle=True, random_state=0) # K = 10, doesn't have to be the same
y_pred = cross_val_predict(clf, X, y, cv=outer_cv)
# Print the results
print('Mean absolute error: %f' % mean_absolute_error(y, y_pred))
print('Standard deviation of the error: %f' % (y - y_pred).std())
ax = (y - y_pred).hist()
ax.set(
title='Distribution of errors for the decision tree regression model',
xlabel='Error'
);
clf.best_params_
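# Optional sketch (not part of the original lab): visualise the top of the tree that
# GridSearchCV refit on the full data set. `plot_tree` needs a reasonably recent
# scikit-learn release.
from sklearn.tree import plot_tree
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(20, 10))
plot_tree(clf.best_estimator_, feature_names=list(X.columns), max_depth=2, filled=True, ax=ax);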
RandomForestRegressor().get_params()
X = df.drop('mpg', axis='columns') # X = features
y = df['mpg'] # y = prediction target
algorithm = RandomForestRegressor(random_state=0)
# Build models for different values of n_estimators, min_samples_leaf and min_samples_split
parameters = {
'n_estimators': [2, 5, 10],
'min_samples_leaf': [1, 10, 20],
'min_samples_split': [2, 10, 20] # Min value is 2
}
# Use inner CV to select the best model
inner_cv = KFold(n_splits=5, shuffle=True, random_state=0) # K = 5
clf = GridSearchCV(algorithm, parameters, cv=inner_cv, n_jobs=-1) # n_jobs=-1 uses all available CPUs = faster
clf.fit(X, y)
# Use outer CV to evaluate the error of the best model
outer_cv = KFold(n_splits=10, shuffle=True, random_state=0) # K = 10, doesn't have to be the same
y_pred = cross_val_predict(clf, X, y, cv=outer_cv)
# Print the results
print('Mean absolute error: %f' % mean_absolute_error(y, y_pred))
print('Standard deviation of the error: %f' % (y - y_pred).std())
ax = (y - y_pred).hist()
ax.set(
title='Distribution of errors for the random forest regression model',
xlabel='Error'
);
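# Optional sketch: inspect which features the tuned random forest relies on most.
importances = pd.Series(clf.best_estimator_.feature_importances_, index=X.columns)
importances.sort_values().plot.barh(title='Random forest feature importances');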
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: And we'll add a single light curve dataset to expose all the passband-dependent options.
Step3: Relevant Parameters
Step4: Note that if you change the value of 'atm' to anything other than 'ck2004', the corresponding 'ld_func' will need to be changed to something other than 'interp' (warnings and errors will be raised to remind you of this).
Step5: A 'passband' parameter exists for each passband-dependent-dataset (i.e. not meshes or orbits, but light curves and radial velocities). This parameter dictates which passband should be used for the computation of all intensities.
Step6: The available choices will include both locally installed passbands as well as passbands currently available from the online PHOEBE repository. If you choose an online-passband, it will be downloaded and installed locally as soon as required by b.run_compute.
Step7: To see your current locally-installed passbands, call phoebe.list_installed_passbands().
Step8: These installed passbands can be in any of a number of directories, which can be accessed via phoebe.list_passband_directories().
Step9: To see the passbands available from the online repository, call phoebe.list_online_passbands().
Step10: Lastly, to manually download and install one of these online passbands, you can do so explicitly via phoebe.download_passband or by visiting tables.phoebe-project.org. See also the tutorial on updating passbands.
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
b['atm']
b['atm@primary']
b['atm@primary'].description
b['atm@primary'].choices
b['ld_func@primary']
b['atm@primary'] = 'blackbody'
print(b.run_checks())
b['ld_mode@primary'] = 'manual'
b['ld_func@primary'] = 'logarithmic'
print(b.run_checks())
b['passband']
print(b['passband'].choices)
print(phoebe.list_installed_passbands())
print(phoebe.list_passband_directories())
print(phoebe.list_online_passbands())
phoebe.download_passband('Cousins:R')
print(phoebe.list_installed_passbands())
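# Optional follow-up (not in the original tutorial): point the light curve dataset at the
# newly installed passband and confirm the change took effect.
b['passband'] = 'Cousins:R'
print(b['passband'])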
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Chi-Square Feature Selection
Step2: One common feature selection method that is used with text data is the Chi-Square feature selection. The $\chi^2$ test is used in statistics to test the independence of two events. More specifically in feature selection we use it to test whether the occurrence of a specific term and the occurrence of a specific class are independent. More formally, given a document $D$, we estimate the following quantity for each term and rank them by their score
Step3: e.g. the second row of the observed array refers to the total count of the terms that belong to class 1. Then we compute the expected frequencies of each term for each class.
Step4: We can confirm our result with the scikit-learn library using the chi2 function. The following code chunk computes chi-square value for each feature. For the returned tuple, the first element is the chi-square scores, the scores are better if greater. The second element is the p-values, they are better if smaller.
Step5: Scikit-learn provides a SelectKBest class that can be used with a suite of different statistical tests. It will rank the features with the statistical test that we've specified and select the top k performing ones (meaning that these terms are considered to be more relevant to the task at hand than the others), where k is also a number that we can tweak.
|
<ASSISTANT_TASK:>
Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', 'notebook_format'))
from formats import load_style
load_style(plot_style = False)
os.chdir(path)
import numpy as np
import pandas as pd
# 1. magic to print version
# 2. magic so that the notebook will reload external python modules
%load_ext watermark
%load_ext autoreload
%autoreload 2
from sklearn.preprocessing import LabelBinarizer
from sklearn.feature_selection import chi2, SelectKBest
from sklearn.feature_extraction.text import CountVectorizer
%watermark -a 'Ethen' -d -t -v -p numpy,pandas,sklearn
# suppose we have the following toy text data
X = np.array(['call you tonight', 'Call me a cab', 'please call me... PLEASE!', 'he will call me'])
y = [1, 1, 2, 0]
# we'll convert it to a dense document-term matrix,
# so we can print a more readable output
vect = CountVectorizer()
X_dtm = vect.fit_transform(X)
X_dtm = X_dtm.toarray()
pd.DataFrame(X_dtm, columns = vect.get_feature_names())
# binarize the output column,
# this makes computing the observed value a
# simple dot product
y_binarized = LabelBinarizer().fit_transform(y)
print(y_binarized)
print()
# our observed count for each class (the row)
# and each feature (the column)
observed = np.dot(y_binarized.T, X_dtm)
print(observed)
# compute the probability of each class and the feature count;
# keep both as a 2 dimension array using reshape
class_prob = y_binarized.mean(axis = 0).reshape(1, -1)
feature_count = X_dtm.sum(axis = 0).reshape(1, -1)
expected = np.dot(class_prob.T, feature_count)
print(expected)
chisq = (observed - expected) ** 2 / expected
chisq_score = chisq.sum(axis = 0)
print(chisq_score)
chi2score = chi2(X_dtm, y)
chi2score
kbest = SelectKBest(score_func = chi2, k = 4)
X_dtm_kbest = kbest.fit_transform(X_dtm, y)
X_dtm_kbest
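# Optional sketch: map the selected columns back to their feature names using the
# boolean support mask exposed by SelectKBest.
selected_terms = np.array(vect.get_feature_names())[kbest.get_support()]
print(selected_terms)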
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initialize TensorFlow and GPU devices, import modules
Step2: Download raw images and annotation locally
Step11: Create a TensorFlow Datasets Builder
Step13: Create an input pipeline
Step14: Define training, test and validation splits
Step15: Take a look at the dataset
Step17: Define preprocessing
Step18: Now taking a look at the preprocessed dataset
Step19: Define a convolutional model.
Step20: Do some training!
Step21: Looking at the training performance
Step22: Looking at the validation performance
|
<ASSISTANT_TASK:>
Python Code:
#@title License text
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License
import tensorflow as tf
print("Tensorflow version " + tf.__version__)
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
raise SystemError('GPU device not found')
print(f'Found GPU at: {device_name}')
!pip install opencv-python
from concurrent import futures
import functools  # used below for functools.partial
import io
import os
import re
import tarfile
import cv2
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as plt_colors
import pandas as pd
import tensorflow_datasets as tfds
from tensorflow import keras
from tensorflow.keras import layers
from typing import Callable, Dict, Optional, Tuple
Features = Dict[str, tf.Tensor]
# Download a small part of the public Spacenet-v2 dataset.
# The dataset structure is documented at https://spacenet.ai/khartoum/
# NOTE: This cell takes a long time to execute. If colab is disconnected from
# a runtime, all data is lost. Consider storing the unpacked gzip archive in
# some external directory you can access (e.g. Google Cloud Storage bucket).
DATASET_TAR = "/tmp/AOI_5_Khartoum_train.tar.gz"
# Using tf.io.gfile allows to access AWS and GCS buckets directly from a colab.
tf.io.gfile.copy("s3://spacenet-dataset/spacenet/SN2_buildings/tarballs/SN2_buildings_train_AOI_5_Khartoum.tar.gz",
DATASET_TAR)
tf.io.gfile.mkdir("/tmp/spacenet")
with tarfile.open(DATASET_TAR) as tar_f:
tar_f.extractall("/tmp/spacenet")
tf.io.gfile.listdir("/tmp/spacenet/AOI_5_Khartoum_Train")
_DESCRIPTION = "Spacenet (Khartoum only)"
# The directory were the raw data lives.
_ROOT_DIR = "/tmp/spacenet/AOI_5_Khartoum_Train"
# Min/Max RGB value ranges over data from Khartoum.
# Needed for Spacenet dataset to convert pixel values into [0, 255] range.
# This can be pre-calculated in advance given access to all images or might not
# be needed for your dataset at all.
_GLOBAL_MIN = np.array([1.0, 1.0, 23.0])
_GLOBAL_MAX = np.array([1933.0, 2047.0, 1610.0])
IMAGE_HEIGHT, IMAGE_WIDTH = 650, 650
class SpacenetConfig(tfds.core.BuilderConfig):
  """BuilderConfig for spacenet."""
def __init__(self, **kwargs):
    """Constructs a SpacenetConfig.

    Args:
      **kwargs: keyword arguments forwarded to super.
    """
# Version history:
super().__init__(version=tfds.core.Version("0.0.1"), **kwargs)
self.train_path = _ROOT_DIR
self.min_val = _GLOBAL_MIN
self.max_val = _GLOBAL_MAX
class Spacenet(tfds.core.GeneratorBasedBuilder):
  """Spacenet remote sensing dataset (Khartoum only)."""
BUILDER_CONFIGS = [
SpacenetConfig(name="Spacenet-Khartoum",
description=_DESCRIPTION)
]
def __init__(self, data_dir: Optional[str] = None, **kwargs):
# NOTE: use your GCS bucket path here to persist TFRecords across multiple
# runs.
data_dir = data_dir or "/tmp/spacenet/tensorflow_datasets"
super().__init__(data_dir=data_dir, **kwargs)
def _info(self) -> tfds.core.DatasetInfo:
return tfds.core.DatasetInfo(
builder=self,
description=_DESCRIPTION,
features=tfds.features.FeaturesDict({
"image":
tfds.features.Image(
shape=[IMAGE_HEIGHT, IMAGE_WIDTH, 3],
encoding_format="jpeg"),
"segmentation_mask":
tfds.features.Image(
shape=[IMAGE_HEIGHT, IMAGE_WIDTH, 1],
encoding_format="png"),
}))
def _split_generators(self, dl_manager):
    """Returns SplitGenerators."""
train_path = self.builder_config.train_path
return [
tfds.core.SplitGenerator(
name=tfds.Split.TRAIN,
gen_kwargs={"root_path": train_path},
),
]
def _generate_examples(self, root_path: str):
    """Yields examples from raw data."""
max_per_channel = self.builder_config.max_val
min_per_channel = self.builder_config.min_val
path = os.path.join(root_path, "RGB-PanSharpen")
buildings_path = os.path.join(root_path, "summaryData")
# Reading polygons coordinates and label them with respect to the img number
csv_files = tf.io.gfile.glob(buildings_path + "/*.csv")
with tf.io.gfile.GFile(csv_files[0], "r") as fid:
df = pd.read_csv(fid)
df["image"] = [x.split("_img")[-1] for x in df.ImageId]
files = tf.io.gfile.glob(path + "/*.tif")
for filename in files:
# Extract the image ID XXX from "RGB-PanSharpen_AOI_5_Khartoum_imgXXX.tif"
buildings_filename = filename.split("_")[-1].split(".")[0][3:]
yield filename, {
"image": _load_tif(filename, max_per_channel, min_per_channel),
"segmentation_mask": _load_mask(df, buildings_filename),
}
def get_poly_coordinate(poly: str) -> np.ndarray:
  """Returns polygons coordinates as numpy array."""
return np.array([
pp.split(" ") for pp in re.findall(r"[0-9.\-]+ [0-9.\-]+ [0-9.\-]+", poly)
],
dtype=np.float32)
def _load_mask(df: pd.core.series.Series,
buildings_filename: str) -> np.ndarray:
  """Returns a loaded segmentation mask image."""
mask = np.zeros((IMAGE_HEIGHT, IMAGE_WIDTH, 1), dtype=np.uint8)
buildings = df[df.image == buildings_filename]
for _, building in buildings.iterrows():
poly_coord = get_poly_coordinate(building.PolygonWKT_Pix)
if poly_coord.size > 0:
# Subindex polygon coordinate from [x, y, 0] to [x, y]
poly_coord = poly_coord[:, :2]
cv2.fillPoly(mask, [np.array(poly_coord, dtype=np.int32)], 1)
return mask.astype(np.uint8)
def _load_tif(filename: str,
max_per_channel: np.ndarray,
min_per_channel: np.ndarray) -> np.ndarray:
  """Loads TIF file and returns as an image array in [0, 1]."""
with tf.io.gfile.GFile(filename, "rb") as fid:
img = tfds.core.lazy_imports.skimage.external.tifffile.imread(
io.BytesIO(fid.read())).astype(np.float32)
img = (img - min_per_channel) / (max_per_channel - min_per_channel) * 255
img = np.clip(img, 0, 255).astype(np.uint8)
return img
# Convert raw data into TFRecord form and prepare for access.
tfds_builder = Spacenet()
tfds_builder.download_and_prepare()
AUTOTUNE = tf.data.experimental.AUTOTUNE
def create_dataset(dataset_builder,
split: str,
preprocess_fn: Callable[[Features], Features],
batch_size: int,
num_epochs: Optional[int] = None,
shuffle: bool = False,
shuffle_buffer_size: int = 1000) -> tf.data.Dataset:
  """Returns a dataset to be used with TensorFlow2.

  Args:
    dataset_builder: `tfds.DatasetBuilder` object.
    split: Name of the split to use. One of {'train', 'validation', 'test'}.
    preprocess_fn: Callable for preprocessing.
    batch_size: The batch size to use.
    num_epochs: Number of epochs. See `tf.data.Dataset.repeat()`.
    shuffle: Whether to shuffle examples in memory.
    shuffle_buffer_size: Number of examples in the shuffle buffer.

  Returns:
    A `tf.data.Dataset` with the processed and batched features.
  """
read_config = tfds.ReadConfig(options=tf.data.Options())
ds = dataset_builder.as_dataset(
read_config=read_config,
split=split,
shuffle_files=shuffle)
ds = ds.repeat(num_epochs)
if shuffle:
ds = ds.shuffle(shuffle_buffer_size)
ds = ds.map(preprocess_fn, num_parallel_calls=AUTOTUNE)
ds = ds.batch(batch_size, drop_remainder=True)
return ds.prefetch(AUTOTUNE)
TRAIN_SPLIT="train[:70%]"
VAL_SPLIT="train[70%:90%]"
TEST_SPLIT="train[90%:]"
BATCH_SIZE = 16
ds = create_dataset(Spacenet(),
split=TRAIN_SPLIT,
shuffle=False,
preprocess_fn = lambda x: x,
batch_size = BATCH_SIZE)
for batch in ds.take(1):
fig, axs = plt.subplots(nrows=BATCH_SIZE, ncols=2, figsize=(16, 8*BATCH_SIZE))
for i in range(BATCH_SIZE):
axs[i, 0].imshow(batch["image"][i])
axs[i, 1].imshow(batch["image"][i])
axs[i, 1].imshow(tf.squeeze(batch["segmentation_mask"][i]), cmap='gray', alpha=0.3)
def preprocess_fn(features: Dict[str, tf.Tensor], is_training: bool) -> Tuple[tf.Tensor, tf.Tensor]:
  """Runs preprocessing and converts examples into a Keras compatible format."""
image = features["image"]
mask = features["segmentation_mask"]
# Rescale the image to [0..1]
image = tf.cast(image, tf.float32) / 255.0
# Resize the image and mask to (448, 448).
# Round resize mask values to nearest integer.
image = tf.image.resize(image, (448, 448))
mask = tf.cast(tf.image.resize(mask, (448, 448)), tf.int32)
# If training, apply random brightness change.
if is_training:
image = tf.image.random_brightness(image, max_delta=0.2)
return image, mask
train_preprocess_fn = functools.partial(preprocess_fn, is_training=True)
validation_preprocess_fn = functools.partial(preprocess_fn, is_training=False)
test_preprocess_fn = functools.partial(preprocess_fn, is_training=False)
train_ds = create_dataset(Spacenet(),
split=TRAIN_SPLIT,
shuffle=True,
preprocess_fn=train_preprocess_fn,
batch_size=BATCH_SIZE)
for batch in train_ds.take(1):
fig, axs = plt.subplots(nrows=BATCH_SIZE, ncols=2, figsize=(16, 8*BATCH_SIZE))
for i in range(BATCH_SIZE):
axs[i, 0].imshow(batch[0][i])
axs[i, 1].imshow(tf.squeeze(batch[0][i]))
axs[i, 1].imshow(tf.squeeze(batch[1][i]), cmap='gray', alpha=0.3)
# Code adapted from: https://keras.io/examples/vision/oxford_pets_image_segmentation/
# (Apache 2.0 License: https://github.com/keras-team/keras-io/blob/master/LICENSE)
# A simple encoder-decoder model for semantic segmentation.
# More on residual networks: https://arxiv.org/abs/1512.03385.
def get_model(img_size, num_classes):
inputs = keras.Input(shape=img_size + (3,))
### === Feature extractor ====
# This can be separately trained with a classfication head for pre-training.
# Entry block
x = layers.Conv2D(32, 3, strides=2, padding="same")(inputs)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
previous_block_activation = x # Set aside residual
# Blocks 1, 2, 3 are identical apart from the feature depth.
for filters in [64, 128, 256]:
x = layers.Activation("relu")(x)
x = layers.SeparableConv2D(filters, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
x = layers.SeparableConv2D(filters, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
# Downscaling
x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
# Project residual
residual = layers.Conv2D(filters, 1, strides=2, padding="same")(
previous_block_activation
)
x = layers.add([x, residual]) # Add back residual
previous_block_activation = x # Set aside next residual
### === Segmentation decoder ====
# Takes the features generated by the feature extractor and produces
# segmentation outputs.
previous_block_activation = x # Set aside residual
for filters in [256, 128, 64, 32]:
x = layers.Activation("relu")(x)
x = layers.Conv2DTranspose(filters, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
x = layers.Conv2DTranspose(filters, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
# Upscaling
x = layers.UpSampling2D(2)(x)
# Project residual
residual = layers.UpSampling2D(2)(previous_block_activation)
residual = layers.Conv2D(filters, 1, padding="same")(residual)
x = layers.add([x, residual]) # Add back residual
previous_block_activation = x # Set aside next residual
# Add a per-pixel classification layer to assign segmentation classes.
outputs = layers.Conv2D(num_classes, 3, activation="softmax", padding="same")(x)
# Define the model
model = keras.Model(inputs, outputs)
return model
model = get_model((448, 448), 2)
model.summary()
val_ds = create_dataset(Spacenet(),
split=VAL_SPLIT,
shuffle=False,
preprocess_fn=validation_preprocess_fn,
batch_size=BATCH_SIZE)
with tf.device('/device:GPU:0'):
model = get_model((448, 448), 2)
model.compile(optimizer='rmsprop', loss="sparse_categorical_crossentropy")
model.fit(train_ds, epochs=10, steps_per_epoch=200, validation_data=val_ds, validation_steps=4)
for batch in train_ds.take(1):
predictions = model.predict(batch[0])
fig, axs = plt.subplots(nrows=BATCH_SIZE, ncols=4, figsize=(16, 4*BATCH_SIZE))
for i in range(BATCH_SIZE):
axs[i, 0].imshow(batch[0][i])
axs[i, 1].imshow(tf.squeeze(batch[1][i]))
axs[i, 2].imshow(tf.squeeze(predictions[i, :, :, 1] > 0.5))
axs[i, 3].imshow(tf.squeeze(predictions[i, :, :, 1]))
axs[0,0].set_title('Image')
axs[0,1].set_title('Ground truth')
axs[0,2].set_title('Segmentation @0.5')
axs[0,3].set_title('Segmentation score')
for batch in val_ds.take(1):
predictions = model.predict(batch[0])
fig, axs = plt.subplots(nrows=BATCH_SIZE, ncols=4, figsize=(16, 4*BATCH_SIZE))
for i in range(BATCH_SIZE):
axs[i, 0].imshow(batch[0][i])
axs[i, 1].imshow(tf.squeeze(batch[1][i]))
axs[i, 2].imshow(tf.squeeze(predictions[i, :, :, 1] > 0.5))
axs[i, 3].imshow(tf.squeeze(predictions[i, :, :, 1]))
axs[0,0].set_title('Image')
axs[0,1].set_title('Ground truth')
axs[0,2].set_title('Segmentation @0.5')
axs[0,3].set_title('Segmentation score')
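# Added sketch (not part of the original notebook): a quick quantitative check
# to go with the plots above - mean intersection-over-union of the thresholded
# predictions against the ground truth for this validation batch. Assumes the
# numpy-backed arrays produced by model.predict and the eager tensors from the
# dataset, as in the cells above.
pred_mask = (predictions[:, :, :, 1] > 0.5).astype('int32')
true_mask = tf.squeeze(batch[1], axis=-1).numpy()
intersection = ((pred_mask == 1) & (true_mask == 1)).sum()
union = ((pred_mask == 1) | (true_mask == 1)).sum()
print('Validation batch IoU @0.5: %.3f' % (intersection / max(union, 1)))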
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Comments, print and input
Step2: Data types
Step3: Actions
Step4: Variables, comparisons and variable assignment
Step5: if - else - (elif)
Step6: Lists
Step7: Dictionaries
Step8: Tuples
Step9: Simple functions - len and sort
Step10: For Loop
Step11: For loop with list of dictionaries
|
<ASSISTANT_TASK:>
Python Code:
#This line is a comment - Python ignores it completely.
#With a hashtag at the start of a line we can comment code; that is also very important.
#Always, really always, comment your own code. Especially in the beginning.
print("hello world")
#The print command simply prints everything out. Not really terribly exciting,
#but it becomes very useful later, above all when it comes to finding errors in your own code.
#With the input command you can interact with the user.
input('How old are you?')
#Strings
'Hello, how "are you" doing'
"12345"
124
str(124)
#Integer
type(567)
type(int('1234'))
#Floats
4.542323
float(12)
int(4.64)
#Dates, actually just strings
'15-11-2019'
print('Hello' + ' ' + 'how' + 'are' + 'you')
print('Hello', 'how', 'are', 'you')
#All the other usual ones:
#minus -
#times *
#divided by /
#Special: modulo %, i.e. division keeping only the remainder
22 % 5
#Greater than and less than:
#< >
#Equal == (important: a double equals sign),
#because a single = defines a variable
'Schweiz' == 'Schweiz'
Schweiz = 'reich'
Schweiz
Schweiz == 'reich'
reich = 'arm'
1 = 'reich' # SyntaxError: you cannot assign to a literal
"5schweiz"
1 = 6 # SyntaxError: same problem again
a = 34
a = b # NameError: b has not been defined yet
a = 'b'
a == 'b'
a
elem = int(input('How old are you?'))
elem
if elem < 0:
print('That is impossible')
else:
print('You are quite old')
elem = int(input('How old are you?'))
if elem < 0:
print('That is impossible')
elif elem < 25:
print('You are still young')
else:
print('You are quite old')
#Square brackets
[1,"hello",3,4,5.23,6,7]
lst = [1,2,3,4,5,6,7]
lst
#Single elements
lst[0]
#Whole slices
lst[:4]
#More complex slices
lst[::3]
lst
#Append, Pop, etc.
saved_item = lst.pop()
lst
lst.append(saved_item)
list
#Careful with the command list: it turns things into a list. Even strings:
list('hello how are')
range(0,10)
#The most elegant way to write a list. And very important:
#the computer always starts counting at 0.
list(range(10))
list(range(9,-1,-1))
#Strange curly braces
{'Tier': 'Hund', 'Grösse': 124, 'Alter': 10}
dct = {'Tier': 'Hund', 'Grösse': 124, 'Alter': 10}
dct
dct['Grösse']
#List of dictionaries
dct_lst = [{'Tier': 'Hund', 'Grösse': 124, 'Alter': 10}, {'Tier': 'Katze', 'Grösse': 130, 'Alter': 8}]
type(dct_lst)
dct_lst[1]
dct_lst[0]['Alter']
neue_list = []
for d in dct_lst:
neue_list.append(d['Alter'])
neue_list
lst
tuple(lst)
lst
lst = tuple(lst)
lst
#Immutable - so a good format for storing things safely.
#But mentioned here really just for completeness.
#len with strings
len('hello how are you')
#len with lists
len([1,2,3,4,4,5])
#len with dictionaries
len({'Tier': 'Hund', 'Alter': 345})
#len with tuples
len((1,1,1,2,2,1))
#sorted returns a sorted copy; the original stays unchanged
sorted('hello how are you')
a = 'hello how are you'
sorted(a)
a
#The sort method, however, only works with lists
lst = [1, 5, 9, 10, 34, 12, 12, 14]
lst.sort()
lst
dic = {'Tier': 'Hund', 'Alter': 345}
dic.sort() # AttributeError: dictionaries have no sort method
lst
for x in lst:
print(x)
dic2 = {'Tier': 'Hund', 'Alter': 345}
for key, value in dic2.items():
print(key, value)
#for loop to make new lists
lst
#Suppose we only want the even numbers from the list
new_lst = []
for elem in lst:
if elem % 2 == 0:
new_lst.append(elem)
# else:
# continue
new_lst
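# A small added aside (not in the original notebook): the same filter written
# as a list comprehension, the more idiomatic one-liner for this pattern.
[elem for elem in lst if elem % 2 == 0]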
dic_lst = [{'Animal': 'Dog', 'Size': 45},
{'Animal': 'Cat', 'Size': 23},
{'Animal': 'Bird', 'Size': 121212}]
for dic in dic_lst:
print(dic)
for dic in dic_lst:
print(dic['Animal'])
for dic in dic_lst:
print(dic['Animal'] + ': ' + str(dic['Size']))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in the data and check out the first few rows to get acquainted with the features.
Step2: As normal, split the data into training and test sets.
Step3: 1. Train the classifier
Step4: 2. Make Predictions
Step5: 3. Interpret the results
Step6: Looking at the confusion matrix, do you think the model is doing well at classifying pulsars from radio wave signals? Is the model misclassifying noise as pulsars or missing pulsars in the data?
Step7: Thinking about unbalanced classes
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
from sklearn.model_selection import train_test_split
import sklearn.metrics as metrics
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.machine_learning.ex8 import *
print("Setup complete")
pulsar_data = pd.read_csv('../input/predicting-a-pulsar-star/pulsar_stars.csv')
pulsar_data.head()
y = pulsar_data['target_class']
X = pulsar_data.drop('target_class', axis=1)
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1, test_size=.2)
from sklearn.ensemble import RandomForestClassifier
# Define the model. Set random_state to 1
model = ____
# Fit your model
____
step_1.check()
# The lines below will show you a hint or the solution.
#step_1.hint()
#step_1.solution()
#%%RM_IF(PROD)%%
from sklearn.ensemble import RandomForestClassifier
# Define the model. Set random_state to 1
model = RandomForestClassifier(random_state=1)
# fit your model on the training data
model.fit(train_X, train_y)
step_1.assert_check_passed()
# Get predictions from the trained model using the validation features
pred_y = ____
# Calculate the accuracy of the trained model with the validation targets and predicted targets
accuracy = ____
print("Accuracy: ", accuracy)
# Check your answer
step_2.check()
# The lines below will show you a hint or the solution.
#step_2.hint()
#step_2.solution()
#%%RM_IF(PROD)%%
# Get predictions from the trained model using the validation features
pred_y = model.predict(val_X)
# Calculate the accuracy of the trained model with the validation targets and predicted targets
accuracy = metrics.accuracy_score(val_y, pred_y)
print("Accuracy: ", accuracy)
step_2.assert_check_passed()
(val_y==0).mean()
confusion = metrics.confusion_matrix(val_y, pred_y)
print(f"Confusion matrix:\n{confusion}")
# Normalizing by the true label counts to get rates
print(f"\nNormalized confusion matrix:")
for row in confusion:
print(row / row.sum())
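# Added sketch (not part of the original exercise): per-class precision and
# recall make the class imbalance easier to reason about than raw accuracy.
# Assumes, as in this dataset, that class 1 marks real pulsars.
print(metrics.classification_report(val_y, pred_y, target_names=['noise', 'pulsar']))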
#step_3.solution()
#step_4.solution()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: An example with two discrete random variables (VADs)
Step2: To compute the joint probability function $F\left(\mathbf{W}\right)$ of the two-dimensional VAD $\mathbf{W}=(X_1=x, X_2=y)$ we can do,
Step3: That is,
Step4: Therefore, considering a new variable $Z = X_1 + X_2$, we have the sample space $S_Z=\left\{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,17,18,20\right\}$ and the associated probability function, in the same order, is given by
Step5: Method 2
Step6: General cases
Step7: Example 1 - Sum of two VADs
Step8: When using the somaVADs_conv method, the probability mass function must be supplied with all values defined on a regularly spaced grid with spacing $\delta$ over $\left[\min\left(S_{X_i}\right), \max\left(S_{X_i}\right)\right]$. We reuse the variables already defined above when method 2 was introduced.
Step9: Example 2 - Difference of two VADs
Step10: Example 3 - Sum of several VADs (with somaVADs_conv)
Step11: Example 4 (with somaVADs_conv)
Step12: What was done uses the individual probability mass functions to obtain the PMF of the sum of the VADs, but if we have access to the samples (as we do in this example) we can also sum them directly (with random pairing) to reach a set of samples of the sum VAD, and then obtain its histogram (and PMF).
Step13: Obtaining the PMF this way is much simpler and more direct, but it depends on the sampling technique and the random seed used.
Step14: Optimized scripts
Step15: Example 5
Step16: Example 6 - Execution-time tests for each routine
Step17: Test 2
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import time
plt.style.use('ggplot')
np.random.seed(1)
# Random variable X1
S_X1 = np.array([0., 1., 3., 4., 8., 11.]) # Sample space of X1
fr_X1 = np.array([15., 28., 48., 14., 3., 7.]) # Absolute frequencies of X1
P_X1 = fr_X1 / fr_X1.sum() # Relative frequencies of X1
# Random variable X2
S_X2 = np.array([1., 3., 7., 9.]) # Sample space of X2
fr_X2 = np.array([12., 8., 28., 34.]) # Absolute frequencies of X2
P_X2 = fr_X2 / fr_X2.sum() # Relative frequencies
print("P_X1 = ", P_X1)
print("P_X2 = ", P_X2)
F_W = np.multiply.outer(P_X1, P_X2)
print("F_W = ", F_W, "\n")
print("F_X1(y) = P_X1 = ", F_W.sum(axis=1))
print("F_X2(x) = P_X2 = ", F_W.sum(axis=0))
print("F_(X1=3, X2=9) = ", F_W[2,3])
somas = np.add.outer(S_X1, S_X2)
print(somas)
# PMF of Z - sample space in the keys, probabilities in the values
P_Z = {}
for soma, prob in zip(somas.ravel(), F_W.ravel()):
if soma in P_Z:
P_Z[soma] += prob
else:
P_Z[soma] = prob
S_Z = list(P_Z.keys())
P_Z = list(P_Z.values())
print("S_Z = ", S_Z, "\n")
print("P_Z = ", P_Z)
# Sample spaces of X1, X2, and Z = X1 + X2
S_X1b = np.array([0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11.])
S_X2b = np.array([1., 2., 3., 4., 5., 6., 7., 8., 9.])
S_Zb = np.arange(S_X1b.min() + S_X2b.min(), S_X1b.max() + S_X2b.max() + 1)
# Probability mass functions
P_X1b = np.array([0.13043478, 0.24347826, 0., 0.4173913, 0.12173913, 0., 0., 0., 0.02608696, 0., 0., 0.06086957])
P_X2b = np.array([0.14634146, 0., 0.09756098, 0., 0., 0., 0.34146341, 0., 0.41463415])
P_Zb = np.convolve(P_X1b, P_X2b, mode="full")
# Column 0: value of Z; column 1: corresponding probability
par_Z_PZ = np.vstack([S_Zb, P_Zb]).T
print(par_Z_PZ)
from fmp_soma_vadis import somaVADs_biv, somaVADs_conv
# Build the (X_i, PMF_i) elements to use in the input list
VAD1 = np.vstack([S_X1, P_X1]).T
VAD2 = np.vstack([S_X2, P_X2]).T
print("Sum VAD:\n", somaVADs_biv([VAD1,VAD2]))
# Build the (X_i, PMF_i) elements to use in the input list
VAD1b = np.array([S_X1b, P_X1b]).T
VAD2b = np.array([S_X2b, P_X2b]).T
print("Sum VAD:\n", somaVADs_conv([VAD1b, VAD2b], delta=1))
# Build the (X_i, PMF_i) elements to use in the input list of *somaVADs_biv*
VAD1c = np.vstack([S_X1, P_X1]).T
VAD2c = np.vstack([-S_X2, P_X2]).T # Note the sign
# Build the (X_i, PMF_i) elements to use in the input list of *somaVADs_conv*
VAD1d = VAD1b
VAD2d = np.flipud(VAD2b.copy()) # Vertical flip
VAD2d[:,0] = - VAD2d[:,0] # Negate the values in column 0
Y_res1 = somaVADs_biv([VAD1c, VAD2c])
Y_res2 = somaVADs_conv([VAD1d, VAD2d], delta=1)
# Step unnecessary in general - used here only to drop from the array the rows
# where the PMF is zero, leaving it in the form that *somaVADs_biv* returns,
# so that the results of the two methods can be compared.
Y_res2 = Y_res2[Y_res2[:,1] != 0.]
print("Same result with both methods? ", np.allclose(Y_res1, Y_res2))
print("\nSoma de três VADs:\n", somaVADs_conv([VAD1b, VAD2b, VAD1b]))
print("\nComutatividade:\n", np.allclose(somaVADs_conv([VAD1b, VAD2b, VAD1b]), somaVADs_conv([VAD1b, VAD1b, VAD2b])))
namostras = 100000
bins_edges = np.arange(-60, 60, 1)
# VAD 10
mu_X10, sigma_X10 = 0, 13 # mean and standard deviation
X10 = np.random.normal(mu_X10, sigma_X10, namostras)
hist_X10, edges_X10 = np.histogram(X10, bins=bins_edges, density=True)
S_X10 = (edges_X10[:-1] + edges_X10[1:])/2. # use the centre of each bin as the abscissa
# VAD 11
mu_X11, sigma_X11 = 10, 11 # mean and standard deviation
X11 = np.random.normal(mu_X11, sigma_X11, namostras)
hist_X11, edges_X11 = np.histogram(X11, bins=bins_edges, density=True)
S_X11 = (edges_X11[:-1] + edges_X11[1:])/2.
# VAD 12
mu_X12, sigma_X12 = -5, 9 # mean and standard deviation
X12 = np.random.normal(mu_X12, sigma_X12, namostras)
hist_X12, edges_X12 = np.histogram(X12, bins=bins_edges, density=True)
S_X12 = (edges_X12[:-1] + edges_X12[1:])/2.
# Build the (X_i, PMF_i) elements to use in the input list
VAD10 = np.array([S_X10, hist_X10]).T
VAD11 = np.array([S_X11, hist_X11]).T
VAD12 = np.array([S_X12, hist_X12]).T
t1 = time.time()
Z_PZ = somaVADs_conv([VAD10, VAD11, VAD12]) # Sum of the VADs
t2 = time.time()
print("Sum execution time: %.5f s\n" % (t2-t1))
# Plot
fig1 = plt.figure(figsize=(14,7))
ax1 = fig1.add_subplot(111)
ax1.plot(VAD10[:,0], VAD10[:,1], label='PMF-X10')
ax1.plot(VAD11[:,0], VAD11[:,1], label='PMF-X11')
ax1.plot(VAD12[:,0], VAD12[:,1], label='PMF-X12')
ax1.plot(Z_PZ[:,0], Z_PZ[:,1], '--', label='PMF-Z by convolution')
ax1.legend()
plt.show()
# Each of the variables X10, X11, X12 is a vector of 100000 samples drawn at
# random from the respective probability functions, so we obtain random samples
# of Z by summing the elements of these vectors (in the same order).
Z_amostras = X10 + X11 + X12
print("Mean(X10)+Mean(X11)+Mean(X12) = ", X10.mean()+X11.mean()+X12.mean())
print("Mean(Z_amostras) = ", Z_amostras.mean())
print("\nVar(X10)+Var(X11)+Var(X12) = ", X10.var()+X11.var()+X12.var())
print("Var(Z_amostras) = ", Z_amostras.var())
# Histogram of the Z_amostras samples
hist_Z_amostras, edges_Z_amostras = np.histogram(Z_amostras, bins=bins_edges, density=True)
S_Z_amostras = (edges_Z_amostras[:-1] + edges_Z_amostras[1:])/2.
# Plot
fig2 = plt.figure(figsize=(14,7))
ax2 = fig2.add_subplot(111)
ax2.plot(S_Z_amostras, hist_Z_amostras, label='PMF by summing the samples')
ax2.plot(Z_PZ[:,0], Z_PZ[:,1], '--', label='PMF-Z by convolution')
ax2.legend()
plt.show()
# Sum of the random-sample vectors after sorting them
Z_amostras_b = np.sort(X10) + np.sort(X11) + np.sort(X12)
# Histogram of the Z_amostras_b samples
hist_Z_amostras_b, edges_Z_amostras_b = np.histogram(Z_amostras_b, bins=bins_edges, density=True)
S_Z_amostras_b = (edges_Z_amostras_b[:-1] + edges_Z_amostras_b[1:])/2.
# Plot
fig3 = plt.figure(figsize=(14,7))
ax3 = fig3.add_subplot(111)
ax3.plot(S_Z_amostras_b, hist_Z_amostras_b, label='PMF-Z by summing the SORTED samples')
ax3.plot(Z_PZ[:,0], Z_PZ[:,1], '--', label='PMF-Z by convolution')
ax3.legend()
plt.show()
from fmp_soma_vadis import somaVADs_FFT, somaVADs_hib
# FMPs do Exemplo 4
res_ex2_biv = somaVADs_biv([VAD10, VAD11, VAD12])
res_ex2_conv = somaVADs_conv([VAD10, VAD11, VAD12])
res_ex2_FFT = somaVADs_FFT([(VAD10,1), (VAD11,1), (VAD12,1)])
res_ex2_hib = somaVADs_hib([(VAD10,1), (VAD11,1), (VAD12,1)])
eq1 = np.allclose(res_ex2_biv, res_ex2_conv)
eq2 = np.allclose(res_ex2_biv, res_ex2_FFT)
eq3 = np.allclose(res_ex2_biv, res_ex2_hib)
print("Todos iguais ao obtido no Ex.4? ", all([eq1, eq2, eq3]))
# Use the VADs defined in Example 4: VAD10, VAD11 and VAD12
t1 = time.time()
resultado_1 = somaVADs_biv([VAD10]*10 + [VAD11]*20 + [VAD12]*30)
t2 = time.time()
resultado_2 = somaVADs_conv([VAD10]*10 + [VAD11]*20 + [VAD12]*30, delta=1)
t3 = time.time()
resultado_3 = somaVADs_FFT([(VAD10,10), (VAD11,20), (VAD12,30)], delta=1)
t4 = time.time()
resultado_4 = somaVADs_hib([(VAD10,10), (VAD11,20), (VAD12,30)], delta=1)
t5 = time.time()
eq1 = np.allclose(resultado_1, resultado_2)
eq2 = np.allclose(resultado_1, resultado_3)
eq3 = np.allclose(resultado_1, resultado_4)
print("Todos os resultados iguais? ", all([eq1, eq2, eq3]))
print("Tempo somaVADs_biv : ", t2-t1)
print("Tempo somaVADs_conv : ", t3-t2)
print("Tempo somaVADs_FFT : ", t4-t3)
print("Tempo somaVADs_hib : ", t5-t4)
# Use the VADs defined in Example 1 (VAD1b, VAD2b) and in Example 4 (VAD10, VAD11, VAD12)
t1 = time.time()
resultado_1 = somaVADs_FFT([(VAD10,330),(VAD11,190),(VAD12,260),(VAD1b,244),(VAD2b,133)], delta=1)
t2 = time.time()
resultado_2 = somaVADs_hib([(VAD10,330),(VAD11,190),(VAD12,260),(VAD1b,244),(VAD2b,133)], delta=1)
t3 = time.time()
#resultado_2 = somaVADs_biv([VAD10]*330 + [VAD11]*190 + [VAD12]*260 + [VAD1b]*244 + [VAD2b]*133)
t4 = time.time()
print("Todos os resultados iguais? ", np.allclose(resultado_1, resultado_2))
print("Tempo somaVADs_FFT : ", t2-t1)
print("Tempo somaVADs_hib : ", t3-t2)
#print("Tempo somaVADs_biv : ", t4-t3)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Networks
Step2: Defining Networks
Step3: Let's create a RandomPyEnvironment to generate structured observations and validate our implementation.
Step4: Since we've defined the observations to be a dict we need to create preprocessing layers to handle these.
Step5: Now that we have the actor network we can process observations from the environment.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install tf-agents
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import abc
import tensorflow as tf
import numpy as np
from tf_agents.environments import random_py_environment
from tf_agents.environments import tf_py_environment
from tf_agents.networks import encoding_network
from tf_agents.networks import network
from tf_agents.networks import utils
from tf_agents.specs import array_spec
from tf_agents.utils import common as common_utils
from tf_agents.utils import nest_utils
class ActorNetwork(network.Network):
def __init__(self,
observation_spec,
action_spec,
preprocessing_layers=None,
preprocessing_combiner=None,
conv_layer_params=None,
fc_layer_params=(75, 40),
dropout_layer_params=None,
activation_fn=tf.keras.activations.relu,
enable_last_layer_zero_initializer=False,
name='ActorNetwork'):
super(ActorNetwork, self).__init__(
input_tensor_spec=observation_spec, state_spec=(), name=name)
# For simplicity we will only support a single action float output.
self._action_spec = action_spec
flat_action_spec = tf.nest.flatten(action_spec)
if len(flat_action_spec) > 1:
raise ValueError('Only a single action is supported by this network')
self._single_action_spec = flat_action_spec[0]
if self._single_action_spec.dtype not in [tf.float32, tf.float64]:
raise ValueError('Only float actions are supported by this network.')
kernel_initializer = tf.keras.initializers.VarianceScaling(
scale=1. / 3., mode='fan_in', distribution='uniform')
self._encoder = encoding_network.EncodingNetwork(
observation_spec,
preprocessing_layers=preprocessing_layers,
preprocessing_combiner=preprocessing_combiner,
conv_layer_params=conv_layer_params,
fc_layer_params=fc_layer_params,
dropout_layer_params=dropout_layer_params,
activation_fn=activation_fn,
kernel_initializer=kernel_initializer,
batch_squash=False)
initializer = tf.keras.initializers.RandomUniform(
minval=-0.003, maxval=0.003)
self._action_projection_layer = tf.keras.layers.Dense(
flat_action_spec[0].shape.num_elements(),
activation=tf.keras.activations.tanh,
kernel_initializer=initializer,
name='action')
def call(self, observations, step_type=(), network_state=()):
outer_rank = nest_utils.get_outer_rank(observations, self.input_tensor_spec)
# We use batch_squash here in case the observations have a time sequence
# component.
batch_squash = utils.BatchSquash(outer_rank)
observations = tf.nest.map_structure(batch_squash.flatten, observations)
state, network_state = self._encoder(
observations, step_type=step_type, network_state=network_state)
actions = self._action_projection_layer(state)
actions = common_utils.scale_to_spec(actions, self._single_action_spec)
actions = batch_squash.unflatten(actions)
return tf.nest.pack_sequence_as(self._action_spec, [actions]), network_state
action_spec = array_spec.BoundedArraySpec((3,), np.float32, minimum=0, maximum=10)
observation_spec = {
'image': array_spec.BoundedArraySpec((16, 16, 3), np.float32, minimum=0,
maximum=255),
'vector': array_spec.BoundedArraySpec((5,), np.float32, minimum=-100,
maximum=100)}
random_env = random_py_environment.RandomPyEnvironment(observation_spec, action_spec=action_spec)
# Convert the environment to a TFEnv to generate tensors.
tf_env = tf_py_environment.TFPyEnvironment(random_env)
preprocessing_layers = {
'image': tf.keras.models.Sequential([tf.keras.layers.Conv2D(8, 4),
tf.keras.layers.Flatten()]),
'vector': tf.keras.layers.Dense(5)
}
preprocessing_combiner = tf.keras.layers.Concatenate(axis=-1)
actor = ActorNetwork(tf_env.observation_spec(),
tf_env.action_spec(),
preprocessing_layers=preprocessing_layers,
preprocessing_combiner=preprocessing_combiner)
time_step = tf_env.reset()
actor(time_step.observation, time_step.step_type)
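# Added note (a small sketch, not in the original tutorial): the call above
# returns (actions, network_state). The tanh projection plus scale_to_spec
# should keep every action inside the [0, 10] bounds declared in action_spec.
actions, _ = actor(time_step.observation, time_step.step_type)
print(actions.shape)  # expected (1, 3): a batch of one, three action dimensions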
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First let's define some conditional probability tables.
Step2: Then let's convert them into distribution objects.
Step3: Next we can convert these distributions into states.
Step4: Now that we have our states created, we can finally start making our bayesian network.
Step5: Then we define the edges.
Step6: We finish by baking the network to finalize its structure.
Step7: Now we can check on the structure of our bayesian network.
|
<ASSISTANT_TASK:>
Python Code:
from pomegranate import *
import math
c_table = [[0, 0, 0, 0.6],
[0, 0, 1, 0.4],
[0, 1, 0, 0.7],
[0, 1, 1, 0.3],
[1, 0, 0, 0.2],
[1, 0, 1, 0.8],
[1, 1, 0, 0.9],
[1, 1, 1, 0.1]]
d_table = [[ 0, 0, 0.5 ],
[ 0, 1, 0.5 ],
[ 1, 0, 0.3 ],
[ 1, 1, 0.7 ]]
f_table = [[ 0, 0, 0, 0.8 ],
[ 0, 0, 1, 0.2 ],
[ 0, 1, 0, 0.3 ],
[ 0, 1, 1, 0.7 ],
[ 1, 0, 0, 0.6 ],
[ 1, 0, 1, 0.4 ],
[ 1, 1, 0, 0.9 ],
[ 1, 1, 1, 0.1 ]]
e_table = [[ 0, 0, 0.7 ],
[ 0, 1, 0.3 ],
[ 1, 0, 0.2 ],
[ 1, 1, 0.8 ]]
g_table = [[ 0, 0, 0, 0.34 ],
[ 0, 0, 1, 0.66 ],
[ 0, 1, 0, 0.83 ],
[ 0, 1, 1, 0.17 ],
[ 1, 0, 0, 0.77 ],
[ 1, 0, 1, 0.23 ],
[ 1, 1, 0, 0.12 ],
[ 1, 1, 1, 0.88 ]]
a = DiscreteDistribution({ 0: 0.5, 1: 0.5 })
b = DiscreteDistribution({ 0: 0.7, 1: 0.3 })
e = ConditionalProbabilityTable( e_table, [b] )
c = ConditionalProbabilityTable( c_table, [a,b] )
d = ConditionalProbabilityTable( d_table, [c] )
f = ConditionalProbabilityTable( f_table, [c,e] )
g = ConditionalProbabilityTable( g_table, [c,e] )
a_s = State( a, "a" )
b_s = State( b, "b" )
c_s = State( c, "c" )
d_s = State( d, "d" )
e_s = State( e, "e" )
f_s = State( f, "f" )
g_s = State( g, "g" )
model = BayesianNetwork( "derp" )
model.add_nodes( a_s, b_s, c_s, d_s, e_s, f_s, g_s )
model.add_edge( a_s, c_s )
model.add_edge( b_s, c_s )
model.add_edge( c_s, d_s )
model.add_edge( c_s, f_s )
model.add_edge( b_s, e_s )
model.add_edge( e_s, f_s )
model.add_edge( c_s, g_s )
model.add_edge( e_s, g_s )
model.bake()
print("\n".join( "{:10.10} : {}".format( state.name, belief.parameters[0] ) for state, belief in zip( model.states, model.predict_proba({}, max_iterations=100) ) ))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model Inputs
Step2: Generator network
Step3: Discriminator
Step4: Hyperparameters
Step5: Build network
Step6: Discriminator and Generator Losses
Step7: Optimizers
Step8: Training
Step9: Training loss
Step10: Generator samples from training
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name="input_real")
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name="input_z")
return inputs_real, inputs_z
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out: the tanh output of the generator
'''
with tf.variable_scope('generator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(h1, h1*alpha)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope("discriminator", reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(h1, h1*alpha)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size, g_hidden_size)
# g_model is the generator output
# Disriminator network here
d_model_real, d_logits_real = discriminator(input_real, d_hidden_size)
d_model_fake, d_logits_fake = discriminator(g_model, d_hidden_size, reuse=True)
# Calculate losses
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
_ = view_samples(-1, samples)
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
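# Added sketch (not part of the original notebook): interpolate between two
# latent vectors with the restored generator and watch the digits morph. Uses
# only the placeholders, saver and helpers defined above.
z0 = np.random.uniform(-1, 1, size=z_size)
z1 = np.random.uniform(-1, 1, size=z_size)
z_interp = np.array([(1 - a) * z0 + a * z1 for a in np.linspace(0, 1, 16)])
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    interp_samples = sess.run(
        generator(input_z, input_size, reuse=True),
        feed_dict={input_z: z_interp})
view_samples(0, [interp_samples])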
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First reload the data we generated in 1_notmnist.ipynb.
Step2: Reformat into a shape that's more adapted to the models we're going to train
Step3: We're first going to train a multinomial logistic regression using simple gradient descent.
Step4: Let's run this computation and iterate
Step5: Let's now switch to stochastic gradient descent training instead, which is much faster.
Step6: Let's run it
Step7: Problem
|
<ASSISTANT_TASK:>
Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random values following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
num_steps = 801
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.initialize_all_variables().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
batch_size = 128
# Parameters
learning_rate = 0.001 # note: not actually used below; the optimizer is built with a fixed rate of 0.5
training_epochs = 15
display_step = 1
# Network Parameters
n_hidden_1 = 1024 # 1st layer number of features
def multilayer_perceptron(x, weights, biases):
# Hidden layer with RELU activation
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = tf.nn.relu(layer_1)
# output layer with softmax
logits = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
prediction = tf.nn.softmax(logits)
return prediction, logits
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
# tf Graph input
x = tf.placeholder("float", [None, image_size * image_size])
y = tf.placeholder("float", [None, num_labels])
#tf_train_dataset = tf.placeholder(tf.float32,shape=(batch_size, image_size * image_size))
#tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
weights = {
'h1': tf.Variable(tf.random_normal([image_size * image_size, n_hidden_1])),
'h2': tf.Variable(tf.random_normal([n_hidden_1, num_labels]))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1])),
'b2': tf.Variable(tf.random_normal([num_labels]))
}
#define model
train_prediction, train_logits = multilayer_perceptron(x, weights, biases)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(train_logits, y))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# validation, and test data.
valid_prediction,valid_logits = multilayer_perceptron(tf_valid_dataset, weights, biases)
test_prediction, test_logits = multilayer_perceptron(tf_test_dataset, weights, biases)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {x : batch_data, y : batch_labels}
_, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Functions
Step2: Get Data
Step3: Modifying variables
Step4: For CABA
Step5: Model 1b (educHeadYjobs)
Step6: Model 1c (educHeadYjobsYrooms)
Step7: Model 2a (jobsAndSchool)
Step8: Model 2b (jobsAndSchoolYrooms)
Step9: Model with all the variables (ignore)
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import os
import sys
import simpledbf
%pylab inline
import matplotlib.pyplot as plt
import statsmodels.api as sm
from sklearn.model_selection import train_test_split
from sklearn import linear_model
def runModel(dataset, income, varForModel):
'''
This function takes a data set, runs a model according to specifications,
and returns the model, printing the summary
'''
y = dataset[income].values
X = dataset.loc[:,varForModel].values
X = sm.add_constant(X)
w = dataset.PONDERA
lm = sm.WLS(y, X, weights=1. / w, missing = 'drop', hasconst=True).fit()
print lm.summary()
for i in range(1,len(varForModel)+1):
print 'x%d: %s' % (i,varForModel[i-1])
#testing within sample
R_IS=[]
R_OS=[]
#R_prime = []
n=500
for i in range(n):
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state = 200)
X_train = sm.add_constant(X_train)
X_test = sm.add_constant(X_test)
lm = linear_model.LinearRegression(fit_intercept=True)
lm.fit(X_train, y_train, sample_weight = 1. / w[:len(X_train)])
y_hat_IS = lm.predict(X_train)
err_IS = y_hat_IS - y_train
R2_IS = 1 - (np.var(err_IS) / np.var(y_train))
y_hat_OS = lm.predict(X_test)
err_OS = y_hat_OS - y_test
R2_OS = 1 - (np.var(err_OS) / np.var(y_test))
R_IS.append(R2_IS)
R_OS.append(R2_OS)
print("IS R-squared for {} times is {}".format(n,np.mean(R_IS)))
print("OS R-squared for {} times is {}".format(n,np.mean(R_OS)))
#data = pd.read_csv('/resources/data/Data Projects/ADS Project/dataFinalParaModelo.csv')
data = pd.read_csv('data/dataFinalParaModelo.csv')
data.head()
data['hasSpouse'] = np.where(np.isnan(data.spouseJob.values),0,1)
data['spouseJob'] = np.where(np.isnan(data.spouseJob.values),0,data.spouseJob.values)
data['TotalFamilyIncome'].replace(to_replace=[0], value=[1] , inplace=True, axis=None)
data = data[data.TotalFamilyIncomeDecReg != 0]
data['income_log'] = np.log(data.TotalFamilyIncome)
(data['TotalFamilyIncome']==0).sum()
data['FloorMaterial'] = np.where(np.isnan(data.FloorMaterial.values),5,data.FloorMaterial.values)
data['sumPredicted'] = np.where(np.isnan(data.sumPredicted.values),0,data.sumPredicted.values)
data['Sewer'] = np.where(np.isnan(data.Sewer.values),5,data.Sewer.values)
data['ToiletType'] = np.where(np.isnan(data.ToiletType.values),4,data.ToiletType.values)
data['Water'] = np.where(np.isnan(data.Water.values),4,data.Water.values)
data['RoofCoat'] = np.where(np.isnan(data.RoofCoat.values),2,data.RoofCoat.values)
data['TotalFamilyIncome'].replace(to_replace=[0], value=[1] , inplace=True, axis=None)
data['income_logPer'] = np.log(data.PerCapInc)
data.corr()['TotalFamilyIncome'][data.corr()['TotalFamilyIncome'] > 0.10].sort_values()
data['haciBool'] = (data.Hacinamiento > 3).astype(int)
#data solo para la ciudad de buenos aires
dataCaba = data.loc[data.AGLO1 == 32,:]
varForModel = [
'headEduc',
#'job',
#'SleepingRooms',
#'schoolAndJob',
#'Hacinamiento'
]
runModel(dataCaba, 'TotalFamilyIncome', varForModel)
varForModel = [
'headEduc',
'job',
#'SleepingRooms',
#'schoolAndJob',
#'Hacinamiento'
]
runModel(dataCaba, 'TotalFamilyIncome', varForModel)
varForModel = [
'headEduc',
'job',
'SleepingRooms',
#'schoolAndJob',
#'Hacinamiento'
]
runModel(dataCaba, 'TotalFamilyIncome', varForModel)
varForModel = [
#'headEduc',
#'job',
#'SleepingRooms',
'schoolAndJob',
#'Hacinamiento'
]
runModel(dataCaba, 'TotalFamilyIncome', varForModel)
varForModel = [
#'headEduc',
#'job',
#'SleepingRooms',
'schoolAndJob',
'Hacinamiento'
]
runModel(dataCaba, 'TotalFamilyIncome', varForModel)
income1 = 'income_log'
income2 = 'TotalFamilyIncome'
income = 'TotalFamilyIncomeDecReg'
income = 'income_log'
varForModel = [
'SleepingRooms','HouseMembers','WaterRec',
'CookingCombustible',
# 'SleepingRooms', 'UsableTotalRooms', 'RoomsNumber', 'HouseMembers', 'WaterRec', # positive effect
# 'CookingCombustible', 'FloorMaterial', 'Sewer', 'ToiletType','Sink', 'RoofCoat', 'Water', # negative effect
# 'headReading', 'OwnershipRec','WaterType', # suspicious
'headAge', 'headEduc','headJob', 'spouseJob' # baseline
]
runModel(data, income, varForModel)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Indefinite integrals
Step2: Integral 1
Step3: Integral 2
Step4: Integral 3
Step5: Integral 4
Step6: Integral 5
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy import integrate
def X(x):
return x**2
I,e=integrate.quad(X,0,3)
I
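# quad returns both the integral value and an estimate of the absolute error;
# here the exact answer is 3**3/3 = 9.
print(I, e)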
def integrand(x, a):
return 1.0/(x**2 + a**2)
def integral_approx(a):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
return I
def integral_exact(a):
return 0.5*np.pi/a
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
def integrand(x,a):
return (a**2-x**2)**(1/2)
def integral_approx(a):
I1,e1=integrate.quad(integrand,0,a,args=(a,))
return I1
def integral_exact(a):
return (np.pi*a**2)/4
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
def integrand(x):
return np.sin(x)**2
def integral_approx():
I2,e2=integrate.quad(integrand,0,np.pi/2)
return I2
def integral_exact():
return np.pi/4
print("Numerical: ", integral_approx())
print("Exact : ", integral_exact())
assert True # leave this cell to grade the above integral
def integrand(x,a,b):
return np.exp(-a*x)*np.cos(b*x)
def integral_approx(a,b):
I3,e3=integrate.quad(integrand,0,np.inf,args=(a,b,))
return I3
def integral_exact(a,b):
return a/(a**2+b**2)
print("Numerical: ", integral_approx(1.0,1.0))
print("Exact : ", integral_exact(1.0,1.0))
assert True # leave this cell to grade the above integral
def integrand(x,a,b):
return np.exp(-a*x**2-(b/x**2))
def integral_approx(a,b):
I4,e4=integrate.quad(integrand,0,np.inf,args=(a,b,))
return I4
def integral_exact(a,b):
return (1/2)*(np.pi/a)**(1/2)*np.exp(-2*(a*b)**(1/2))
print("Numerical: ", integral_approx(1.0,1.0))
print("Exact : ", integral_exact(1.0,1.0))
assert True # leave this cell to grade the above integral
def integrand(x,a):
return x/(np.sinh(a*x))
def integral_approx(a):
I5,e5=integrate.quad(integrand,0,np.inf,args=(a,))
return I5
def integral_exact(a):
return np.pi**2/(4*a**2)
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Local Normalised Cross-correlation
Step2: Can we get a faster result using Numba?
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy.signal as sig
import matplotlib.pyplot as plt
%pylab inline
import test_signals as tst
def make_signals(nsamp,delay ):
ref = np.random.rand(nsamp+abs(delay))*2-1
wav = sig.ricker(80,5)
filtered = np.convolve(ref, wav,'same')
if delay < 0 :
return filtered[0:nsamp], filtered[-delay:nsamp-delay]
else:
return filtered[delay:nsamp+delay], filtered[0:nsamp]
#res, shifted = make_signals(1000,10)
res, shifted = tst.make_delayed_signal_pair(1000,10)
fig = plt.figure(figsize=(12,1))
plt.plot(res)
plt.plot(shifted)
shifted.shape
#
# Rolling summ of squares
def rollingSSQ(inp, winlen):
inpsq = np.square(inp)
kernel = np.ones(winlen)
return np.convolve(inpsq, kernel, 'same')
def localCorr_numpy(reference, match, winlen, nlag,lag,qual):
hwin = winlen//2
lags = 2*nlag+1
ns = reference.shape[0]
hxw = hwin-nlag
cor = np.zeros(lags)
refSSQ = np.sqrt(rollingSSQ(reference,2*hxw+1))
matSSQ = np.sqrt(rollingSSQ(match,2*hxw+1))
for ir in range(hwin,ns-hwin):
rbeg = ir - hxw
rend = ir + hxw + 1
mbeg = rbeg - nlag
mend = rend + nlag
cor = np.divide(np.correlate(match[mbeg:mend],reference[rbeg:rend],'valid'),matSSQ[ir-nlag:ir+nlag+1]*refSSQ[ir])
pos = np.argmax(cor)
if pos>0 and pos<lags-1:
cp = (cor[pos-1]-cor[pos+1])/(2.*cor[pos-1]-4.*cor[pos]+2.*cor[pos+1])
lag[ir] = pos-nlag+cp
qual[ir] = cor[pos]
else:
lag[ir]=0.0
qual[ir]=0.0
lag = np.zeros(res.shape)
qual = np.zeros(res.shape)
localCorr_numpy(res,shifted,51,15,lag,qual)
fig = plt.figure(figsize=(12,2))
plt.plot(lag)
plt.plot(qual)
%timeit -o localCorr_numpy(res,shifted,51,15,lag,qual)
import sys,os
from numba import jit
sys.path.insert(0, os.path.join(sys.path[0], '..'))
import extnumba as xn
@jit(nopython=True)
def localCorr_numba(reference, match, winlen, nlag,lag,qual):
hwin = winlen//2
lags = 2*nlag+1
ns = reference.shape[0]
hxw = hwin-nlag
cor = np.zeros(lags)
refSSQ = np.zeros(ns)
matSSQ = np.zeros(ns)
xn.winSSQ(reference,2*hxw+1,refSSQ)
xn.winSSQ(match,2*hxw+1,matSSQ)
for ir in range(hwin,ns-hwin):
rbeg = ir - hxw
rend = ir + hxw + 1
mbeg = rbeg - nlag
mend = rend + nlag
for il in range(lags):
lbeg = rbeg + il - nlag
lend = lbeg + 2 * hxw + 1
sum = 0.0
for iref,imat in zip(range(rbeg,rend),range(lbeg,lend)):
sum += reference[iref]*match[imat]
den = refSSQ[ir]*matSSQ[lbeg+hxw]
if den== 0.0:
cor[il] = 0.0
else:
cor[il] = sum/den
pos = np.argmax(cor)
if pos>0 and pos<lags-1:
cp = (cor[pos-1]-cor[pos+1])/(2.*cor[pos-1]-4.*cor[pos]+2.*cor[pos+1])
lag[ir] = pos-nlag+cp
qual[ir] = cor[pos]
else:
lag[ir]=0.0
qual[ir]=0.0
localCorr_numba(res,shifted,51,15,lag,qual)
fig = plt.figure(figsize=(12,2))
plt.plot(lag)
plt.plot(qual)
%timeit -o localCorr_numba(res,shifted,51,15,lag,qual)
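# Added sketch (re-runs the timings above): capture the TimeitResult objects
# that %timeit -o returns and report the Numba speed-up directly.
t_np = %timeit -o localCorr_numpy(res, shifted, 51, 15, lag, qual)
t_nb = %timeit -o localCorr_numba(res, shifted, 51, 15, lag, qual)
print("Numba speed-up: %.1fx" % (t_np.best / t_nb.best))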
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get statistics of binned quantities
Step2: Create a sample of sn with z, mu, mu_err
Step3: These show
|
<ASSISTANT_TASK:>
Python Code:
mockDataFile = os.path.join(isotropy.example_data_dir, 'snFits.p.gz')
sampleData, totalSN = isotropy.read_mockDataPickle(mockDataFile)
sampleData.head()
# Total number of SN in the simulation (before we threw away bad points)
totalSN
sampleData['mu_err'] = sampleData.mu_var.apply(np.sqrt)
sampleData.head()
# mu_err > 5.0 is pretty useless, throw them
sampleData = sampleData.query('mu_err < 5.0')
df = isotropy.binnedDescStat(sampleData)
# removes the nans
df = df.dropna()
df
# Now create samples of size numSN = 50000
numSN = 50000
dfmu = isotropy.drawSamples(df, numSN, totalSN, np.random.RandomState(0))
dfmu.head()
fig, ax = plt.subplots(1,3)
_ = ax[0].hist(dfmu.z, bins=np.arange(0., 1.4, 0.1), histtype='step')
_ = ax[1].hist(dfmu.mu_err, bins=20, histtype='step')
_ = ax[2].errorbar(dfmu.z, dfmu.mu, yerr=dfmu.mu_err.values, fmt='o')
_ = ax[2].plot(dfmu.z, cosmo.distmod(dfmu.z), 'k-', lw=2.)
ax[0].set_xlabel('x')
ax[0].set_ylabel('Num')
ax[1].set_xlabel('mu_err')
ax[2].set_xlabel('z')
ax[2].set_ylabel('mu')
from utils import plotutils as pl
fig, ax0, ax1 = pl.settwopanel(setdifflimits=None)
ax0.errorbar(dfmu.z, dfmu.mu, yerr=dfmu.mu_err.values, fmt='.')
ax0.plot(dfmu.z, cosmo.distmod(dfmu.z), 'k-', lw=2.)
ax0.set_xscale('log')
ax1.errorbar(dfmu.z, dfmu.mu - cosmo.distmod(dfmu.z).value, yerr=dfmu.mu_err.values, fmt='o')
ax1.set_xscale('log')
ax0.set_xlim(0., 1.5)
ax1.set_xlim(0., 1.5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Charge calculations
Step2: Now we need to count the number of residues. The problem is that if we count the number of GLU, ASP,... occurrences, we will be counting atoms, not residues. And as each residue has a different number of atoms, the translation from atoms to residues is not trivial. But residues are numbered in a PDB. They correspond to columns 23-26.
Step3: Good! So now that we know how to detect a change in residue, we need to count how many of these residues are in the charged set.
|
<ASSISTANT_TASK:>
Python Code:
filein = open('../data/protein.pdb', 'r')
fileout = open('../data/protein_hie.pdb', 'w')
#Finish...
filein.close()
fileout.close()
positive = set(['ARG', 'LYS']) #We consider histidines (HIS) neutral and epsilon protonated (thus the HIE name)
negative = #Finish
charged = #Finish using union method
total_res = 0
filein = open('data/protein.pdb', 'r')
# Finish...
print("Total number of residues: ", total_res)
total_charged = 0
charge = 0
filein = open('data/protein.pdb', 'r')
# Finish
print("Total number of charged residues: ", total_charged)
print("Net charge of the protein: ", charge)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting a dataset
Step2: This function is useful for pre-processing the data into an image and input vector.
Step3: Load all the images from root folder
Step4: Randomize the data order.
Step5: create training / validation / test split (70%, 15%, 15%)
Step6: Separate data for labels.
Step7: Pre-process the data as before by making sure it's float32 and normalized between 0 and 1.
Step8: Let's get a summary of what we have.
Step9: If everything worked properly, you should have loaded a bunch of images, and split them into three sets
Step10: First training a neural net from scratch
Step11: We've created a medium-sized network with ~1.2 million weights and biases (the parameters). Most of them are leading into the one pre-softmax fully-connected layer "dense_5".
Step12: Let's plot the validation loss and validation accuracy over time.
Step13: Notice that the validation loss begins to actually rise after around 16 epochs, even though validation accuracy remains roughly between 40% and 50%. This suggests our model begins overfitting around then, and best performance would have been achieved if we had stopped early around then. Nevertheless, our accuracy would not have likely been above 50%, and probably lower down.
Step14: Finally, we see that we have achieved a (top-1) accuracy of around 49%. That's not too bad for 6000 images, considering that if we were to use a naive strategy of taking random guesses, we would have only gotten around 1% accuracy.
Step15: Notice that VGG16 is much bigger than the network we constructed earlier. It contains 13 convolutional layers and two fully connected layers at the end, and has over 138 million parameters, around 100 times as many parameters than the network we made above. Like our first network, the majority of the parameters are stored in the connections leading into the first fully-connected layer.
Step16: We are going to retrain this network, model_new on the new dataset and labels. But first, we need to freeze the weights and biases in all the layers in the network, except our new one at the end, with the expectation that the features that were learned in VGG should still be fairly relevant to the new image classification task. Not optimal, but most likely better than what we can train to in our limited dataset.
Step17: Looking at the summary, we see the network is identical to the VGG model we instantiated earlier, except the last layer, formerly a 1000-neuron softmax, has been replaced by a new 97-neuron softmax. Additionally, we still have roughly 134 million weights, but now the vast majority of them are "non-trainable params" because we froze the layers they are contained in. We now only have 397,000 trainable parameters, which is actually only a quarter of the number of parameters needed to train the first model.
Step18: Our validation accuracy hovers close to 80% towards the end, which is more than 30% improvement on the original network trained from scratch (meaning that we make the wrong prediction on 20% of samples, rather than 50%).
Step19: Notice that whereas the original model began overfitting around epoch 16, the new model continued to slowly decrease its loss over time, and likely would have improved its accuracy slightly with more iterations. The new model made it to roughly 80% top-1 accuracy (in the validation set) and continued to improve slowly through 100 epochs.
Step20: To predict a new image, simply run the following code to get the probabilities for each class.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import os
#to select a specific Keras backend, set it before importing keras, e.g.:
#os.environ["KERAS_BACKEND"] = "tensorflow"
import random
import numpy as np
import keras
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
from keras.preprocessing import image
from keras.applications.imagenet_utils import preprocess_input
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Activation
from keras.layers import Conv2D, MaxPooling2D
from keras.models import Model
!echo "Downloading 101_Object_Categories for image notebooks"
!curl -L -o 101_ObjectCategories.tar.gz --progress-bar http://www.vision.caltech.edu/Image_Datasets/Caltech101/101_ObjectCategories.tar.gz
!tar -xzf 101_ObjectCategories.tar.gz
!rm 101_ObjectCategories.tar.gz
!ls
root = '101_ObjectCategories'
exclude = ['BACKGROUND_Google', 'Motorbikes', 'airplanes', 'Faces_easy', 'Faces']
train_split, val_split = 0.7, 0.15
categories = [x[0] for x in os.walk(root) if x[0]][1:]
categories = [c for c in categories if c not in [os.path.join(root, e) for e in exclude]]
print(categories)
# helper function to load image and return it and input vector
def get_image(path):
img = image.load_img(path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
return img, x
data = []
for c, category in enumerate(categories):
images = [os.path.join(dp, f) for dp, dn, filenames
in os.walk(category) for f in filenames
if os.path.splitext(f)[1].lower() in ['.jpg','.png','.jpeg']]
for img_path in images:
img, x = get_image(img_path)
data.append({'x':np.array(x[0]), 'y':c})
# count the number of classes
num_classes = len(categories)
random.shuffle(data)
idx_val = int(train_split * len(data))
idx_test = int((train_split + val_split) * len(data))
train = data[:idx_val]
val = data[idx_val:idx_test]
test = data[idx_test:]
x_train, y_train = np.array([t["x"] for t in train]), [t["y"] for t in train]
x_val, y_val = np.array([t["x"] for t in val]), [t["y"] for t in val]
x_test, y_test = np.array([t["x"] for t in test]), [t["y"] for t in test]
print(y_test)
# normalize data
x_train = x_train.astype('float32') / 255.
x_val = x_val.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
# convert labels to one-hot vectors
y_train = keras.utils.to_categorical(y_train, num_classes)
y_val = keras.utils.to_categorical(y_val, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print(y_test.shape)
# summary
print("finished loading %d images from %d categories"%(len(data), num_classes))
print("train / validation / test split: %d, %d, %d"%(len(x_train), len(x_val), len(x_test)))
print("training data shape: ", x_train.shape)
print("training labels shape: ", y_train.shape)
images = [os.path.join(dp, f) for dp, dn, filenames in os.walk(root) for f in filenames if os.path.splitext(f)[1].lower() in ['.jpg','.png','.jpeg']]
idx = [int(len(images) * random.random()) for i in range(8)]
imgs = [image.load_img(images[i], target_size=(224, 224)) for i in idx]
concat_image = np.concatenate([np.asarray(img) for img in imgs], axis=1)
plt.figure(figsize=(16,4))
plt.imshow(concat_image)
# build the network
model = Sequential()
print("Input dimensions: ",x_train.shape[1:])
model.add(Conv2D(32, (3, 3), input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
model.summary()
# compile the model to use categorical cross-entropy loss function and adadelta optimizer
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
history = model.fit(x_train, y_train,
batch_size=128,
epochs=10,
validation_data=(x_val, y_val))
fig = plt.figure(figsize=(16,4))
ax = fig.add_subplot(121)
ax.plot(history.history["val_loss"])
ax.set_title("validation loss")
ax.set_xlabel("epochs")
ax2 = fig.add_subplot(122)
ax2.plot(history.history["val_acc"])
ax2.set_title("validation accuracy")
ax2.set_xlabel("epochs")
ax2.set_ylim(0, 1)
plt.show()
loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', loss)
print('Test accuracy:', accuracy)
vgg = keras.applications.VGG16(weights='imagenet', include_top=True)
vgg.summary()
# make a reference to VGG's input layer
inp = vgg.input
# make a new softmax layer with num_classes neurons
new_classification_layer = Dense(num_classes, activation='softmax')
# connect our new layer to the second to last layer in VGG, and make a reference to it
out = new_classification_layer(vgg.layers[-2].output)
# create a new network between inp and out
model_new = Model(inp, out)
# make all layers untrainable by freezing weights (except for last layer)
for l, layer in enumerate(model_new.layers[:-1]):
layer.trainable = False
# ensure the last layer is trainable/not frozen
for l, layer in enumerate(model_new.layers[-1:]):
layer.trainable = True
model_new.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model_new.summary()
history2 = model_new.fit(x_train, y_train,
batch_size=128,
epochs=10,
validation_data=(x_val, y_val))
fig = plt.figure(figsize=(16,4))
ax = fig.add_subplot(121)
ax.plot(history.history["val_loss"])
ax.plot(history2.history["val_loss"])
ax.set_title("validation loss")
ax.set_xlabel("epochs")
ax2 = fig.add_subplot(122)
ax2.plot(history.history["val_acc"])
ax2.plot(history2.history["val_acc"])
ax2.set_title("validation accuracy")
ax2.set_xlabel("epochs")
ax2.set_ylim(0, 1)
plt.show()
loss, accuracy = model_new.evaluate(x_test, y_test, verbose=0)
print('Test loss:', loss)
print('Test accuracy:', accuracy)
img, x = get_image('101_ObjectCategories/airplanes/image_0003.jpg')
probabilities = model_new.predict([x])
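# Added example: turning the raw softmax output into a readable prediction.
# `categories` holds folder paths, so we take the folder name as the class label.
top_k = 5
class_names = [os.path.basename(c) for c in categories]
top_idx = np.argsort(probabilities[0])[::-1][:top_k]
for i in top_idx:
    print("%s: %.3f" % (class_names[i], probabilities[0][i]))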
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Live plotting with Bokeh
Step2: Parameter tuning with grid search
|
<ASSISTANT_TASK:>
Python Code:
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.preprocessing import StandardScaler
import numpy as np
import pandas as pd
import time
# 1. Number of times pregnant
# 2. Plasma glucose concentration a 2 hours in an oral glucose tolerance test
# 3. Diastolic blood pressure (mm Hg)
# 4. Triceps skin fold thickness (mm)
# 5. 2-Hour serum insulin (mu U/ml)
# 6. Body mass index (weight in kg/(height in m)^2)
# 7. Diabetes pedigree function
# 8. Age (years)
# 9. Class variable (0 or 1)
names = ["#of preg", "gluc_conc", "blood_pressure", "skin_thickness", "insulin_conc",
"BMI", "DPF", "age", "class"]
df = pd.read_csv('data/pima-indians-diabetes.csv', names=names)
df.head()
X = df[df.columns[:-1].values]
y = df["class"].values
scaler = StandardScaler()
X_scale = scaler.fit_transform(X)
def create_model():
model = Sequential()
model.add(Dense(12, input_dim=8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(8, kernel_initializer='uniform', activation="relu"))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
np.random.seed(7)
start = time.time()
model = KerasClassifier(build_fn=create_model, epochs=150, batch_size=10, verbose=0)
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=7)
results = cross_val_score(model, X_scale, y, cv=kfold)
print(results.mean())
print(time.time() - start)
from keras.callbacks import Callback
from bokeh.io import push_notebook, output_notebook
from bokeh.layouts import row, widgetbox, column
from bokeh.models import ColumnDataSource
from bokeh.models.widgets import PreText
from bokeh.plotting import figure, show
output_notebook()
source = ColumnDataSource(data=dict(x=[], y=[], z=[]))
plot = figure(plot_height=250, plot_width=700)
plot.circle('x', 'y', source=source)
plot.line('x', 'y', source=source, color='red')
plot.circle('x', 'z', source=source)
plot.line('x', 'z', source=source, color='green')
show(plot, notebook_handle=True)
new_data = {
'x' : [],
'y' : [],
'z' : []
}
class TrainingHistory(Callback):
def on_train_begin(self, logs={}):
self.losses = []
self.i = 1
def on_epoch_end(self, batch, logs={}):
self.losses.append(logs.get('loss'))
new_data['x'] = [self.i]
new_data['y'] = [logs.get('loss')]
new_data['z'] = [logs.get('acc')]
source.stream(new_data, rollover=30)
# text_input.text = "Progress: " + str(self.i/50.0 * 100)
push_notebook()
self.i += 1
history = TrainingHistory()
def create_model():
model = Sequential()
model.add(Dense(12, input_dim=8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(8, kernel_initializer='uniform', activation="relu"))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
model = create_model()
model.fit(X_scale, y, epochs=250, batch_size=5, verbose=0, callbacks=[history])
from sklearn.model_selection import GridSearchCV
def create_model(optimizer="rmsprop", init="glorot_uniform"):
model = Sequential()
model.add(Dense(12, input_dim=X.shape[1], kernel_initializer=init, activation='relu'))
model.add(Dense(8, kernel_initializer=init, activation='relu'))
model.add(Dense(1, kernel_initializer=init, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
return model
model = KerasClassifier(build_fn=create_model, verbose=0)
optimizers = ['rmsprop', 'adam']
init = ['glorot_uniform', 'uniform', 'normal']
epochs = np.arange(50, 300, 50)
batches = np.array([5, 10, 25, 32])
param_grid = dict(optimizer=optimizers, epochs=epochs, init=init, batch_size=batches)
grid = GridSearchCV(estimator=model, param_grid=param_grid)
grid_results = grid.fit(X_scale, y)
print("IT TOOK: {} minutes".format((time.time() - start)/60))
print("Best: %f using %s" % (grid_results.best_score_, grid_results.best_params_))
means = grid_results.cv_results_['mean_test_score']
stds = grid_results.cv_results_['std_test_score']
params = grid_results.cv_results_['params']
for mean, std, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, std, param))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To reduce memory consumption and running time, some of the steps are precomputed; set use_precomputed to False to run everything from scratch.
Step2: The data was collected with a CTF 275 system at 2400 Hz and low-pass filtered at 600 Hz.
Step3: In the memory saving mode we use preload=False and use the memory-efficient IO, which loads the data on demand.
Step4: Data channel array consisted of 274 MEG axial gradiometers, 26 MEG reference sensors and 2 EEG electrodes (Cz and Pz).
Step5: For noise reduction, a set of bad segments has been identified and stored in csv files.
Step6: Here we compute the saccade and EOG projectors for magnetometers and add them to the raw data.
Step7: Visually inspect the effects of projections. Click on the 'proj' button at the bottom right corner of the plot window to toggle the projectors on and off.
Step8: Typical preprocessing step is the removal of power line artifact (50 Hz or 60 Hz). Here we notch filter the data at 60, 120 and 180 Hz to remove the artifact and its harmonics.
Step9: We also lowpass filter the data at 100 Hz to remove the hf components.
Step10: Epoching and averaging.
Step11: The event timing is adjusted by comparing the trigger times on the stimulus channel to the sound onsets detected on the analog audio channel.
Step12: We mark a set of bad channels that seem noisier than others. This can also be done interactively by marking channels in the raw data plot.
Step13: The epochs (trials) are created for MEG channels. First we find the picks for MEG and EOG channels, then the epochs are constructed using these picks.
Step14: We only use the first 40 good epochs from each run. Since we first drop the bad epochs, the indices of the remaining epochs no longer match the original trials; the first epoch of the second run corresponds to index 182.
Step15: The averages for each condition are computed.
Step16: Typical preprocessing step is the removal of power line artifact (50 Hz or 60 Hz); here the evoked responses are instead low-pass filtered at 40 Hz, which also removes the line artifacts and other high-frequency components.
Step17: Here we plot the ERF of standard and deviant conditions. In both conditions we can see the P50 and N100 responses.
Step18: Show activations as topography figures.
Step19: We can see the MMN effect more clearly by looking at the difference between the two conditions.
Step20: Source estimation.
Step21: The transformation is read from a file. More information about coregistering the data can be found in the MNE coregistration documentation.
Step22: To save time and memory, the forward solution is read from a file. Set use_precomputed=False at the top of the script to build it from scratch.
Step23: The sources are computed using the dSPM method and plotted on an inflated brain surface.
Step24: Deviant condition.
Step25: Difference.
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Mainak Jas <mainak.jas@telecom-paristech.fr>
# Eric Larson <larson.eric.d@gmail.com>
# Jaakko Leppakangas <jaeilepp@student.jyu.fi>
#
# License: BSD (3-clause)
import os.path as op
import pandas as pd
import numpy as np
import mne
from mne import combine_evoked
from mne.minimum_norm import apply_inverse
from mne.datasets.brainstorm import bst_auditory
from mne.io import read_raw_ctf
print(__doc__)
use_precomputed = True
data_path = bst_auditory.data_path()
subject = 'bst_auditory'
subjects_dir = op.join(data_path, 'subjects')
raw_fname1 = op.join(data_path, 'MEG', 'bst_auditory',
'S01_AEF_20131218_01.ds')
raw_fname2 = op.join(data_path, 'MEG', 'bst_auditory',
'S01_AEF_20131218_02.ds')
erm_fname = op.join(data_path, 'MEG', 'bst_auditory',
'S01_Noise_20131218_01.ds')
preload = not use_precomputed
raw = read_raw_ctf(raw_fname1, preload=preload)
n_times_run1 = raw.n_times
mne.io.concatenate_raws([raw, read_raw_ctf(raw_fname2, preload=preload)])
raw_erm = read_raw_ctf(erm_fname, preload=preload)
raw.set_channel_types({'HEOG': 'eog', 'VEOG': 'eog', 'ECG': 'ecg'})
if not use_precomputed:
# Leave out the two EEG channels for easier computation of forward.
raw.pick_types(meg=True, eeg=False, stim=True, misc=True, eog=True,
ecg=True)
annotations_df = pd.DataFrame()
offset = n_times_run1
for idx in [1, 2]:
csv_fname = op.join(data_path, 'MEG', 'bst_auditory',
'events_bad_0%s.csv' % idx)
df = pd.read_csv(csv_fname, header=None,
names=['onset', 'duration', 'id', 'label'])
print('Events from run {0}:'.format(idx))
print(df)
df['onset'] += offset * (idx - 1)
annotations_df = pd.concat([annotations_df, df], axis=0)
saccades_events = df[df['label'] == 'saccade'].values[:, :3].astype(int)
# Conversion from samples to times:
onsets = annotations_df['onset'].values / raw.info['sfreq']
durations = annotations_df['duration'].values / raw.info['sfreq']
descriptions = annotations_df['label'].values
annotations = mne.Annotations(onsets, durations, descriptions)
raw.annotations = annotations
del onsets, durations, descriptions
saccade_epochs = mne.Epochs(raw, saccades_events, 1, 0., 0.5, preload=True,
reject_by_annotation=False)
projs_saccade = mne.compute_proj_epochs(saccade_epochs, n_mag=1, n_eeg=0,
desc_prefix='saccade')
if use_precomputed:
proj_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-eog-proj.fif')
projs_eog = mne.read_proj(proj_fname)[0]
else:
projs_eog, _ = mne.preprocessing.compute_proj_eog(raw.load_data(),
n_mag=1, n_eeg=0)
raw.add_proj(projs_saccade)
raw.add_proj(projs_eog)
del saccade_epochs, saccades_events, projs_eog, projs_saccade # To save memory
raw.plot(block=True)
if not use_precomputed:
meg_picks = mne.pick_types(raw.info, meg=True, eeg=False)
raw.plot_psd(tmax=np.inf, picks=meg_picks)
notches = np.arange(60, 181, 60)
raw.notch_filter(notches, phase='zero-double', fir_design='firwin2')
raw.plot_psd(tmax=np.inf, picks=meg_picks)
if not use_precomputed:
raw.filter(None, 100., h_trans_bandwidth=0.5, filter_length='10s',
phase='zero-double', fir_design='firwin2')
tmin, tmax = -0.1, 0.5
event_id = dict(standard=1, deviant=2)
reject = dict(mag=4e-12, eog=250e-6)
# find events
events = mne.find_events(raw, stim_channel='UPPT001')
sound_data = raw[raw.ch_names.index('UADC001-4408')][0][0]
onsets = np.where(np.abs(sound_data) > 2. * np.std(sound_data))[0]
min_diff = int(0.5 * raw.info['sfreq'])
diffs = np.concatenate([[min_diff + 1], np.diff(onsets)])
onsets = onsets[diffs > min_diff]
assert len(onsets) == len(events)
diffs = 1000. * (events[:, 0] - onsets) / raw.info['sfreq']
print('Trigger delay removed (μ ± σ): %0.1f ± %0.1f ms'
% (np.mean(diffs), np.std(diffs)))
events[:, 0] = onsets
del sound_data, diffs
raw.info['bads'] = ['MLO52-4408', 'MRT51-4408', 'MLO42-4408', 'MLO43-4408']
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
exclude='bads')
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=False,
proj=True)
epochs.drop_bad()
epochs_standard = mne.concatenate_epochs([epochs['standard'][range(40)],
epochs['standard'][182:222]])
epochs_standard.load_data() # Resampling to save memory.
epochs_standard.resample(600, npad='auto')
epochs_deviant = epochs['deviant'].load_data()
epochs_deviant.resample(600, npad='auto')
del epochs, picks
evoked_std = epochs_standard.average()
evoked_dev = epochs_deviant.average()
del epochs_standard, epochs_deviant
for evoked in (evoked_std, evoked_dev):
evoked.filter(l_freq=None, h_freq=40., fir_design='firwin')
evoked_std.plot(window_title='Standard', gfp=True, time_unit='s')
evoked_dev.plot(window_title='Deviant', gfp=True, time_unit='s')
times = np.arange(0.05, 0.301, 0.025)
evoked_std.plot_topomap(times=times, title='Standard', time_unit='s')
evoked_dev.plot_topomap(times=times, title='Deviant', time_unit='s')
evoked_difference = combine_evoked([evoked_dev, -evoked_std], weights='equal')
evoked_difference.plot(window_title='Difference', gfp=True, time_unit='s')
reject = dict(mag=4e-12)
cov = mne.compute_raw_covariance(raw_erm, reject=reject)
cov.plot(raw_erm.info)
del raw_erm
trans_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-trans.fif')
trans = mne.read_trans(trans_fname)
if use_precomputed:
fwd_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-meg-oct-6-fwd.fif')
fwd = mne.read_forward_solution(fwd_fname)
else:
src = mne.setup_source_space(subject, spacing='ico4',
subjects_dir=subjects_dir, overwrite=True)
model = mne.make_bem_model(subject=subject, ico=4, conductivity=[0.3],
subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)
fwd = mne.make_forward_solution(evoked_std.info, trans=trans, src=src,
bem=bem)
inv = mne.minimum_norm.make_inverse_operator(evoked_std.info, fwd, cov)
snr = 3.0
lambda2 = 1.0 / snr ** 2
del fwd
stc_standard = mne.minimum_norm.apply_inverse(evoked_std, inv, lambda2, 'dSPM')
brain = stc_standard.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_standard, brain
stc_deviant = mne.minimum_norm.apply_inverse(evoked_dev, inv, lambda2, 'dSPM')
brain = stc_deviant.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_deviant, brain
stc_difference = apply_inverse(evoked_difference, inv, lambda2, 'dSPM')
brain = stc_difference.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.15, time_unit='s')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Visualizing our Data
Step4: Useful Functions
Step5: Neural Networks 1D Input
Step6: Here, the input is now 1D. Because it has 64 elements per array, we'll need 64 input neurons.
Step8: Create training and testing inputs
Step12: Create training solution
Step13: Creating the Training Class
Step14: Our input array is "training_input" our solution array is "training_solution". I chose 40 hidden nodes and 90 iterations with a learning rate of 0.7.
Step15: To generate the weights, use the commented out line below. Because it is a slow process, I will be loading weights from a previous training.
Step17: Testing Class
Step18: Calculate and Visualize Accuray
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from sklearn.datasets import load_digits
digits = load_digits()
from IPython.html.widgets import interact
%matplotlib inline
import matplotlib.pyplot as plt
import NNpix as npx
def interact_fun(i):
plt.matshow(digits.images[i])
plt.show()
print("True Number: %d" % digits.target[i])
interact(interact_fun, i=(0,1796));
# The activation function.
def sigmoid(x):
return 1/(1+np.exp(-x))
assert sigmoid(np.log(2)) == 2/3
# The derivative of the activation function
def sigmoid_prime(x):
return sigmoid(x)*(1-sigmoid(x))
print(digits.images[0])
print(digits.images[0].flatten())
perm = np.random.permutation(1797)
# Divide by 100 to get inputs into decimals
training_input = np.array([digits.images[perm[i]].flatten() for i in range(1000)])/100
test_input = np.array([digits.images[perm[i]].flatten() for i in range(1000,1797)])/100
assert len(training_input[0]) == 64
assert len(test_input[0]) == 64
def create_soln(training_numbers):
    # Creates 2D array for training solutions
a = np.repeat(0,10,None)
a = np.repeat([a], len(training_numbers), 0)
for i in range(len(training_numbers)):
a[i][training_numbers[i]] = 1
return a
# True number solutions used to calculate accuracy
number_solution = np.array([digits.target[perm[i]] for i in range(1000,1797)])
# Creates the array of solutions to be entered into the neural network
training_solution = create_soln([digits.target[perm[i]] for i in range(1000)])
assert len(training_solution[0]) == 10
class NN_training(object):
def __init__(self, input_array, soln, hidnum, iters, lr):
self.input_array = input_array
self.soln = soln
#Number of hidden nodes
self.hidnum = hidnum
#Number of iterations through the training set
self.iters = iters
#Initalize WIJ weights (input to hidden)
self.wij = np.random.uniform(-.5,0.5,(hidnum,65))
#Initalize WJK weights (hidden to output)
self.wjk = np.random.uniform(-0.5,0.5,(10,hidnum+1))
#Set a learning rate
self.lr = lr
def train(self):
iters = self.iters
for n in range(iters):
for i in range(len(self.input_array)):
soln = self.soln[i]
hidnum = self.hidnum
input_array = np.append(self.input_array[i],[1])
#Find sum of weights x input array values for each hidden
self.hidden_sums = (sum((input_array * self.wij).T)).T
#Find outputs of hidden neurons; include bias
self.hidden_out = np.append(sigmoid(self.hidden_sums),[1])
#Find sums of weights x hidden outs for each neuron in output layer
self.output_sums = (sum((self.hidden_out * self.wjk).T)).T
#Find output of the outputs
self.output_out = sigmoid(self.output_sums)
self.E = self.output_out - soln
#Find delta values for each output
self.output_deltas = self.E * sigmoid_prime(self.output_sums)
#Find delta values for each hidden
self.hidden_deltas = sigmoid_prime(np.delete(self.hidden_out,[hidnum],None)) * sum((self.output_deltas * (np.delete(self.wjk, [hidnum], 1)).T).T)
#Change weights
self.wij = -self.lr * (self.hidden_deltas*(np.repeat([input_array],hidnum,0)).T).T + self.wij
self.wjk = -self.lr * (self.output_deltas*(np.repeat([self.hidden_out],10,0)).T).T + self.wjk
return (self.wij, self.wjk)
my_net = NN_training(training_input, training_solution, 40, 90, 0.7)
# x, y = my_net.train()
f = np.load("NNweights.npz")
list(f)
x = f['arr_0']
y = f['arr_1']
assert len(x) == 40
assert len(y) == 10
class NN_ask (object):
    # Feed forward using final weights from training backpropagation
def __init__(self, input_array, wij, wjk):
self.input_array = input_array
self.wij = wij
self.wjk = wjk
def get_ans(self):
wij = self.wij
wjk = self.wjk
soln = []
for i in range(len(self.input_array)):
input_array = np.append(self.input_array[i],[1])
self.hidden_sums = (sum((input_array * wij).T)).T
self.hidden_out = np.append(sigmoid(self.hidden_sums),[1])
self.output_sums = (sum((self.hidden_out * wjk).T)).T
self.output_out = sigmoid(self.output_sums)
for i in range(len(self.output_out)):
if self.output_out[i] == max(self.output_out):
a = i
soln.append(a)
return soln
test_net = NN_ask(test_input, x, y)
comp_vals = test_net.get_ans()
print(((sum((comp_vals - number_solution == 0).astype(int)) / (1797-1000)) * 100), "%")
def interacting(i):
plt.matshow(digits.images[perm[i+1000]])
plt.show()
print("Neural Network's Value: %d" %comp_vals[i])
print("True Known Solution %d" %number_solution[i])
interact(interacting, i=(0,796));
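# Added example: a simple confusion matrix gives a per-digit view of where the
# network makes mistakes, complementing the single accuracy number above.
confusion = np.zeros((10, 10), dtype=int)
for predicted, true in zip(comp_vals, number_solution):
    confusion[true][predicted] += 1
plt.matshow(confusion)
plt.xlabel("Predicted digit")
plt.ylabel("True digit")
plt.colorbar()
plt.show()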
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set the current COM port
Step2: List files
Step3: Delete all files
Step4: Functions for copying files
Step5: Copy files onto the ESP8266 / NodeMCU (all needed files will be put in the same folder)
Step6: Single file upload (in case needed)
Step7: List files
Step8: Check file content
Step9: Network config and test
|
<ASSISTANT_TASK:>
Python Code:
import os
com_port = 'COM12'
# com_port = 'COM13'
com_port = 'COM15'
# com_port = 'COM16'
# Existing files on the board
files = !ampy --port {com_port} ls
files
# Delete everything
for file in files:
print('Deleting {0}'.format(file))
!ampy --port {com_port} rm {file}
def copy_one_file(folder, file):
print('Copying {0}'.format(file))
!ampy --port {com_port} put {os.path.join(folder, file)}
def copy_all_files(folders, main_filename = 'main.py'):
files = !ampy --port {com_port} ls
if main_filename in files:
print('Deleting {0}'.format(main_filename))
!ampy --port {com_port} rm {main_filename}
for folder in folders:
for file in os.listdir(folder):
if not file.startswith('_') and not file.startswith(main_filename):
print('Copying {0}'.format(file))
!ampy --port {com_port} put {os.path.join(folder, file)}
folders = ['..\\codes\\micropython', '..\\codes\\node', '..\\codes\\shared']
main_filename = 'main.py'
copy_all_files(folders, main_filename)
copy_one_file('..\\codes\\micropython', main_filename)
copy_one_file('..\\..\\dmz', 'config.py')
copy_one_file('..\\codes\\shared', 'config.py')
copy_one_file('..\\codes\\node', 'node.py')
copy_one_file('..\\codes\\micropython', 'u_python.py')
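# Added example (optional): a quick check that everything we expect has landed on the
# board, by comparing the remote file list against the local source folders.
# This assumes ampy's `ls` output is one filename per line (possibly '/'-prefixed).
remote_files = !ampy --port {com_port} ls
remote_files = set(f.strip('/') for f in remote_files)
expected_files = set()
for folder in folders:
    for file in os.listdir(folder):
        if not file.startswith('_'):
            expected_files.add(file)
print('Missing on board:', expected_files - remote_files)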
# !ampy --port {com_port} ls
# !ampy --port {com_port} get boot.py
# !ampy --port {com_port} get main.py
# Connect to the network
# import network; nic=network.WLAN(network.STA_IF); nic.active(True); nic.connect('SSID','password');nic.ifconfig()
# import network; nic=network.WLAN(network.STA_IF); nic.active(True); nic.connect('Kingnet-70M-$370', '');nic.ifconfig()
# import network; nic=network.WLAN(network.STA_IF);nic.ifconfig();nic.config('mac');nic.ifconfig((['mac',])
# Send an HTTP request
# import socket;addr=socket.getaddrinfo('micropython.org',80)[0][-1]
# s = socket.socket();s.connect(addr);s.send(b'GET / HTTP/1.1\r\nHost: micropython.org\r\n\r\n');data = s.recv(1000);s.close()
# Delete all files
# import u_python;u_python.del_all_files();import os;os.listdir()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: How JAX primitives work
Step9: In order to understand how JAX is internally using the primitives,
Step10: Instead of using jax.lax primitives directly, we can use other functions
Step13: Notice that in the process of computing grad, JAX invokes square_add_numpy and
Step14: If we try to call the newly defined functions we get an error, because
Step16: Primal evaluation rules
Step17: JIT
Step19: Abstract evaluation rules
Step20: If we re-attempt to JIT, we see how the abstract evaluation proceeds, but
Step22: XLA Compilation rules
Step23: Now we succeed to JIT. Notice below that JAX first evaluates the function
Step24: Below is another use of jit where we compile only
Step26: Forward differentiation
Step27: TO EXPLAIN
Step28: Notice that first we evaluate multiply_add_value_and_jvp abstractly, which in turn
Step30: The above error is because there is a missing piece for JAX to be able
Step31: Now we can complete the run of the grad
Step32: Notice the two calls to multiply_add_transpose. They correspond to the two
Step33: Batching
Step35: We need to tell JAX how to evaluate the batched version of the primitive. In this particular case, the multiply_add_prim already operates pointwise for any dimension of input vectors. So the batched version can use the same multiply_add_prim implementation.
Step36: JIT of batching
|
<ASSISTANT_TASK:>
Python Code:
from jax import lax
from jax._src import api
def multiply_add_lax(x, y, z):
  # Implementation of multiply-add using the jax.lax primitives.
return lax.add(lax.mul(x, y), z)
def square_add_lax(a, b):
  # A square-add function using the newly defined multiply-add.
return multiply_add_lax(a, a, b)
print("square_add_lax = ", square_add_lax(2., 10.))
# Differentiate w.r.t. the first argument
print("grad(square_add_lax) = ", api.grad(square_add_lax, argnums=0)(2.0, 10.))
#@title Helper functions (execute this cell)
import functools
import traceback
_indentation = 0
def _trace(msg=None):
  # Print a message at current indentation.
if msg is not None:
print(" " * _indentation + msg)
def _trace_indent(msg=None):
  # Print a message and then indent the rest.
global _indentation
_trace(msg)
_indentation = 1 + _indentation
def _trace_unindent(msg=None):
  # Unindent then print a message.
global _indentation
_indentation = _indentation - 1
_trace(msg)
def trace(name):
  # A decorator for functions to trace arguments and results.
def trace_func(func): # pylint: disable=missing-docstring
def pp(v):
      # Print certain values more succinctly
vtype = str(type(v))
if "jax._src.lib.xla_bridge._JaxComputationBuilder" in vtype:
return "<JaxComputationBuilder>"
elif "jaxlib.xla_extension.XlaOp" in vtype:
return "<XlaOp at 0x{:x}>".format(id(v))
elif ("partial_eval.JaxprTracer" in vtype or
"batching.BatchTracer" in vtype or
"ad.JVPTracer" in vtype):
return "Traced<{}>".format(v.aval)
elif isinstance(v, tuple):
return "({})".format(pp_values(v))
else:
return str(v)
def pp_values(args):
return ", ".join([pp(arg) for arg in args])
@functools.wraps(func)
def func_wrapper(*args):
_trace_indent("call {}({})".format(name, pp_values(args)))
res = func(*args)
_trace_unindent("|<- {} = {}".format(name, pp(res)))
return res
return func_wrapper
return trace_func
class expectNotImplementedError(object):
  # Context manager to check for NotImplementedError.
def __enter__(self): pass
def __exit__(self, type, value, tb):
global _indentation
_indentation = 0
if type is NotImplementedError:
print("\nFound expected exception:")
traceback.print_exc(limit=3)
return True
elif type is None: # No exception
assert False, "Expected NotImplementedError"
else:
return False
import jax.numpy as jnp
import numpy as np
@trace("multiply_add_numpy")
def multiply_add_numpy(x, y, z):
return jnp.add(jnp.multiply(x, y), z)
@trace("square_add_numpy")
def square_add_numpy(a, b):
return multiply_add_numpy(a, a, b)
print("\nNormal evaluation:")
print("square_add_numpy = ", square_add_numpy(2., 10.))
print("\nGradient evaluation:")
print("grad(square_add_numpy) = ", api.grad(square_add_numpy)(2.0, 10.))
from jax import core
multiply_add_p = core.Primitive("multiply_add") # Create the primitive
@trace("multiply_add_prim")
def multiply_add_prim(x, y, z):
  # The JAX-traceable way to use the JAX primitive.
  # Note that the traced arguments must be passed as positional arguments
  # to `bind`.
return multiply_add_p.bind(x, y, z)
@trace("square_add_prim")
def square_add_prim(a, b):
  # A square-add function implemented using the new JAX-primitive.
return multiply_add_prim(a, a, b)
with expectNotImplementedError():
square_add_prim(2., 10.)
@trace("multiply_add_impl")
def multiply_add_impl(x, y, z):
  # Concrete implementation of the primitive.
  # This function does not need to be JAX traceable.
  # Args:
  #   x, y, z: the concrete arguments of the primitive. Will only be called with
  #     concrete values.
  # Returns:
  #   the concrete result of the primitive.
# Note that we can use the original numpy, which is not JAX traceable
return np.add(np.multiply(x, y), z)
# Now we register the primal implementation with JAX
multiply_add_p.def_impl(multiply_add_impl)
assert square_add_prim(2., 10.) == 14.
with expectNotImplementedError():
api.jit(square_add_prim)(2., 10.)
from jax._src import abstract_arrays
@trace("multiply_add_abstract_eval")
def multiply_add_abstract_eval(xs, ys, zs):
  # Abstract evaluation of the primitive.
  # This function does not need to be JAX traceable. It will be invoked with
  # abstractions of the actual arguments.
  # Args:
  #   xs, ys, zs: abstractions of the arguments.
  # Result:
  #   a ShapedArray for the result of the primitive.
assert xs.shape == ys.shape
assert xs.shape == zs.shape
return abstract_arrays.ShapedArray(xs.shape, xs.dtype)
# Now we register the abstract evaluation with JAX
multiply_add_p.def_abstract_eval(multiply_add_abstract_eval)
with expectNotImplementedError():
api.jit(square_add_prim)(2., 10.)
from jax._src.lib import xla_client
@trace("multiply_add_xla_translation")
def multiply_add_xla_translation(ctx, avals_in, avals_out, xc, yc, zc):
  # The compilation to XLA of the primitive.
  # Given an XlaBuilder and XlaOps for each argument, return the XlaOp for the
  # result of the function.
  # Does not need to be a JAX-traceable function.
return [xla_client.ops.Add(xla_client.ops.Mul(xc, yc), zc)]
# Now we register the XLA compilation rule with JAX
# TODO: for GPU? and TPU?
from jax.interpreters import xla
xla.register_translation(multiply_add_p, multiply_add_xla_translation, platform='cpu')
assert api.jit(lambda x, y: square_add_prim(x, y))(2., 10.) == 14.
assert api.jit(lambda x, y: square_add_prim(x, y),
static_argnums=1)(2., 10.) == 14.
# The second argument `(2., 10.)` are the argument values
# where we evaluate the Jacobian, and the third `(1., 1.)`
# are the values of the tangents for the arguments.
with expectNotImplementedError():
api.jvp(square_add_prim, (2., 10.), (1., 1.))
from jax.interpreters import ad
@trace("multiply_add_value_and_jvp")
def multiply_add_value_and_jvp(arg_values, arg_tangents):
  # Evaluates the primal output and the tangents (Jacobian-vector product).
  # Given values of the arguments and perturbation of the arguments (tangents),
  # compute the output of the primitive and the perturbation of the output.
  # This method must be JAX-traceable. JAX may invoke it with abstract values
  # for the arguments and tangents.
  # Args:
  #   arg_values: a tuple of arguments
  #   arg_tangents: a tuple with the tangents of the arguments. The tuple has
  #     the same length as the arg_values. Some of the tangents may also be the
  #     special value ad.Zero to specify a zero tangent.
  # Returns:
  #   a pair of the primal output and the tangent.
x, y, z = arg_values
xt, yt, zt = arg_tangents
_trace("Primal evaluation:")
# Now we have a JAX-traceable computation of the output.
# Normally, we can use the ma primtive itself to compute the primal output.
primal_out = multiply_add_prim(x, y, z)
_trace("Tangent evaluation:")
# We must use a JAX-traceable way to compute the tangent. It turns out that
# the output tangent can be computed as (xt * y + x * yt + zt),
# which we can implement in a JAX-traceable way using the same "multiply_add_prim" primitive.
# We do need to deal specially with Zero. Here we just turn it into a
# proper tensor of 0s (of the same shape as 'x').
# An alternative would be to check for Zero and perform algebraic
# simplification of the output tangent computation.
def make_zero(tan):
return lax.zeros_like_array(x) if type(tan) is ad.Zero else tan
output_tangent = multiply_add_prim(make_zero(xt), y, multiply_add_prim(x, make_zero(yt), make_zero(zt)))
return (primal_out, output_tangent)
# Register the forward differentiation rule with JAX
ad.primitive_jvps[multiply_add_p] = multiply_add_value_and_jvp
# Tangent is: xt*y + x*yt + zt = 1.*2. + 2.*1. + 1. = 5.
assert api.jvp(square_add_prim, (2., 10.), (1., 1.)) == (14., 5.)
assert api.jit(lambda arg_values, arg_tangents:
api.jvp(square_add_prim, arg_values, arg_tangents))(
(2., 10.), (1., 1.)) == (14., 5.)
# This is reverse differentiation w.r.t. the first argument of square_add_prim
with expectNotImplementedError():
api.grad(square_add_prim)(2., 10.)
@trace("multiply_add_transpose")
def multiply_add_transpose(ct, x, y, z):
  # Evaluates the transpose of a linear primitive.
  # This method is only used when computing the backward gradient following
  # value_and_jvp, and is only needed for primitives that are used in the JVP
  # calculation for some other primitive. We need transposition for multiply_add_prim,
  # because we have used multiply_add_prim in the computation of the output_tangent in
  # multiply_add_value_and_jvp.
  # In our case, multiply_add is not a linear primitive. However, it is used linearly
  # w.r.t. tangents in multiply_add_value_and_jvp:
  #   output_tangent(xt, yt, zt) = multiply_add_prim(xt, y, multiply_add_prim(x, yt, zt))
  # Always one of the first two multiplicative arguments is a constant.
  # Args:
  #   ct: the cotangent of the output of the primitive.
  #   x, y, z: values of the arguments. The arguments that are used linearly
  #     get an ad.UndefinedPrimal value. The other arguments get a constant
  #     value.
  # Returns:
  #   a tuple with the cotangent of the inputs, with the value None
  #   corresponding to the constant arguments.
if not ad.is_undefined_primal(x):
# This use of multiply_add is with a constant "x"
assert ad.is_undefined_primal(y)
ct_y = ad.Zero(y.aval) if type(ct) is ad.Zero else multiply_add_prim(x, ct, lax.zeros_like_array(x))
res = None, ct_y, ct
else:
# This use of multiply_add is with a constant "y"
assert ad.is_undefined_primal(x)
ct_x = ad.Zero(x.aval) if type(ct) is ad.Zero else multiply_add_prim(ct, y, lax.zeros_like_array(y))
res = ct_x, None, ct
return res
ad.primitive_transposes[multiply_add_p] = multiply_add_transpose
assert api.grad(square_add_prim)(2., 10.) == 4.
assert api.jit(api.grad(square_add_prim))(2., 10.) == 4.
# The arguments are two vectors instead of two scalars
with expectNotImplementedError():
api.vmap(square_add_prim, in_axes=0, out_axes=0)(np.array([2., 3.]),
np.array([10., 20.]))
from jax.interpreters import batching
@trace("multiply_add_batch")
def multiply_add_batch(vector_arg_values, batch_axes):
  # Computes the batched version of the primitive.
  # This must be a JAX-traceable function.
  # Since the multiply_add primitive already operates pointwise on arbitrary
  # dimension tensors, to batch it we can use the primitive itself. This works as
  # long as both the inputs have the same dimensions and are batched along the
  # same axes. The result is batched along the axis that the inputs are batched.
  # Args:
  #   vector_arg_values: a tuple of two arguments, each being a tensor of matching
  #     shape.
  #   batch_axes: the axes that are being batched. See vmap documentation.
  # Returns:
  #   a tuple of the result, and the result axis that was batched.
assert batch_axes[0] == batch_axes[1]
assert batch_axes[0] == batch_axes[2]
_trace("Using multiply_add to compute the batch:")
res = multiply_add_prim(*vector_arg_values)
return res, batch_axes[0]
batching.primitive_batchers[multiply_add_p] = multiply_add_batch
assert np.allclose(api.vmap(square_add_prim, in_axes=0, out_axes=0)(
np.array([2., 3.]),
np.array([10., 20.])),
[14., 29.])
assert np.allclose(api.jit(api.vmap(square_add_prim, in_axes=0, out_axes=0))
(np.array([2., 3.]),
np.array([10., 20.])),
[14., 29.])
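# Added example (not part of the original text): once the JVP, transpose and batching
# rules are all registered, the transformations compose. For instance we can take the
# gradient of square_add_prim with respect to its first argument and vectorize it in one go.
per_example_grads = api.vmap(api.grad(square_add_prim), in_axes=0)(
    np.array([2., 3.]), np.array([10., 20.]))
print(per_example_grads)  # expected: [4., 6.], i.e. d/da (a*a + b) = 2a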
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, import all the dependencies we'll use in this exercise, which include Fairness Indicators, TensorFlow Model Analysis (tfma), and the What-If tool (WIT)
Step2: Run the following code to download and import the training and validation datasets. By default, the following code will load the preprocessed data (see Fairness Exercise 1
Step3: Next, train the original model from Fairness Exercise 1
Step4: In the next section, we'll apply bias-remediation techniques on our data and then train a revised model on the updated data.
Step5: Retrain the model
Step6: Recompute fairness metrics
Step7: Next, run the fairness evaluation using TFMA
Step8: Load evaluation results
Step9: In Fairness Indicators, we'll display the remediated model's evaluation results on the validation set.
|
<ASSISTANT_TASK:>
Python Code:
!pip install fairness-indicators \
"absl-py==0.8.0" \
"pyarrow==0.15.1" \
"apache-beam==2.17.0" \
"avro-python3==1.9.1" \
"tfx-bsl==0.21.4" \
"tensorflow-data-validation==0.21.5"
%tensorflow_version 2.x
import os
import tempfile
import apache_beam as beam
import numpy as np
import pandas as pd
from datetime import datetime
import tensorflow_hub as hub
import tensorflow as tf
import tensorflow_model_analysis as tfma
from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators
from tensorflow_model_analysis.addons.fairness.view import widget_view
from witwidget.notebook.visualization import WitConfigBuilder
from witwidget.notebook.visualization import WitWidget
download_original_data = False #@param {type:"boolean"}
# Note: the branch below uses a `util` helper that is not imported in this snippet;
# it is assumed to come from the Fairness Indicators examples, e.g.
# `from fairness_indicators.examples import util`.
if download_original_data:
train_tf_file = tf.keras.utils.get_file('train_tf.tfrecord',
'https://storage.googleapis.com/civil_comments_dataset/train_tf.tfrecord')
validate_tf_file = tf.keras.utils.get_file('validate_tf.tfrecord',
'https://storage.googleapis.com/civil_comments_dataset/validate_tf.tfrecord')
# The identity terms list will be grouped together by their categories
# (see 'IDENTITY_COLUMNS') on threshould 0.5. Only the identity term column,
# text column and label column will be kept after processing.
train_tf_file = util.convert_comments_data(train_tf_file)
validate_tf_file = util.convert_comments_data(validate_tf_file)
else:
train_tf_file = tf.keras.utils.get_file('train_tf_processed.tfrecord',
'https://storage.googleapis.com/civil_comments_dataset/train_tf_processed.tfrecord')
validate_tf_file = tf.keras.utils.get_file('validate_tf_processed.tfrecord',
'https://storage.googleapis.com/civil_comments_dataset/validate_tf_processed.tfrecord')
#@title Run this cell to train the baseline model from Exercise 1
TEXT_FEATURE = 'comment_text'
LABEL = 'toxicity'
FEATURE_MAP = {
# Label:
LABEL: tf.io.FixedLenFeature([], tf.float32),
# Text:
TEXT_FEATURE: tf.io.FixedLenFeature([], tf.string),
# Identities:
'sexual_orientation':tf.io.VarLenFeature(tf.string),
'gender':tf.io.VarLenFeature(tf.string),
'religion':tf.io.VarLenFeature(tf.string),
'race':tf.io.VarLenFeature(tf.string),
'disability':tf.io.VarLenFeature(tf.string),
}
def train_input_fn():
def parse_function(serialized):
parsed_example = tf.io.parse_single_example(
serialized=serialized, features=FEATURE_MAP)
# Adds a weight column to deal with unbalanced classes.
parsed_example['weight'] = tf.add(parsed_example[LABEL], 0.1)
return (parsed_example,
parsed_example[LABEL])
train_dataset = tf.data.TFRecordDataset(
filenames=[train_tf_file]).map(parse_function).batch(512)
return train_dataset
BASE_DIR = tempfile.gettempdir()
model_dir = os.path.join(BASE_DIR, 'train', datetime.now().strftime(
"%Y%m%d-%H%M%S"))
embedded_text_feature_column = hub.text_embedding_column(
key=TEXT_FEATURE,
module_spec='https://tfhub.dev/google/nnlm-en-dim128/1')
classifier = tf.estimator.DNNClassifier(
hidden_units=[500, 100],
weight_column='weight',
feature_columns=[embedded_text_feature_column],
optimizer=tf.optimizers.Adagrad(learning_rate=0.003),
loss_reduction=tf.losses.Reduction.SUM,
n_classes=2,
model_dir=model_dir)
classifier.train(input_fn=train_input_fn, steps=1000)
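# Added example (assumes the standard tf.estimator API): a quick overall evaluation of the
# baseline classifier on the validation split, before looking at sliced fairness metrics.
def validate_input_fn():
  def parse_function(serialized):
    parsed_example = tf.io.parse_single_example(
        serialized=serialized, features=FEATURE_MAP)
    parsed_example['weight'] = tf.ones_like(parsed_example[LABEL])
    return parsed_example, parsed_example[LABEL]
  return tf.data.TFRecordDataset(
      filenames=[validate_tf_file]).map(parse_function).batch(512)

baseline_metrics = classifier.evaluate(input_fn=validate_input_fn, steps=100)
print(baseline_metrics)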
def train_input_fn_with_remediation():
def parse_function(serialized):
parsed_example = tf.io.parse_single_example(
serialized=serialized, features=FEATURE_MAP)
# Adds a weight column to deal with unbalanced classes.
parsed_example['weight'] = tf.add(parsed_example[LABEL], 0.1)
# BEGIN UPDATES FOR UPWEIGHTING
# Up-weighting non-toxic examples to balance toxic and non-toxic examples
# for gender slice.
#
values = parsed_example['gender'].values
# 'toxicity' label zero represents the example is non-toxic.
if tf.equal(parsed_example[LABEL], 0):
# We tuned the upweighting hyperparameters, and found we got good
# results by setting `weight`s of 0.4 for `transgender`,
# 0.5 for `female`, and 0.7 for `male`.
# NOTE: `other_gender` is not upweighted separately, because all examples
# tagged with `other_gender` were also tagged with one of the other
# values below
if tf.greater(tf.math.count_nonzero(tf.equal(values, 'transgender')), 0):
parsed_example['weight'] = tf.constant(0.4)
if tf.greater(tf.math.count_nonzero(tf.equal(values, 'female')), 0):
parsed_example['weight'] = tf.constant(0.5)
if tf.greater(tf.math.count_nonzero(tf.equal(values, 'male')), 0):
parsed_example['weight'] = tf.constant(0.7)
return (parsed_example,
parsed_example[LABEL])
# END UPDATES FOR UPWEIGHTING
train_dataset = tf.data.TFRecordDataset(
filenames=[train_tf_file]).map(parse_function).batch(512)
return train_dataset
BASE_DIR = tempfile.gettempdir()
model_dir_with_remediation = os.path.join(BASE_DIR, 'train', datetime.now().strftime(
"%Y%m%d-%H%M%S"))
embedded_text_feature_column = hub.text_embedding_column(
key=TEXT_FEATURE,
module_spec='https://tfhub.dev/google/nnlm-en-dim128/1')
classifier_with_remediation = tf.estimator.DNNClassifier(
hidden_units=[500, 100],
weight_column='weight',
feature_columns=[embedded_text_feature_column],
n_classes=2,
optimizer=tf.optimizers.Adagrad(learning_rate=0.003),
loss_reduction=tf.losses.Reduction.SUM,
model_dir=model_dir_with_remediation)
classifier_with_remediation.train(input_fn=train_input_fn_with_remediation, steps=1000)
def eval_input_receiver_fn():
serialized_tf_example = tf.compat.v1.placeholder(
dtype=tf.string, shape=[None], name='input_example_placeholder')
receiver_tensors = {'examples': serialized_tf_example}
features = tf.io.parse_example(serialized_tf_example, FEATURE_MAP)
features['weight'] = tf.ones_like(features[LABEL])
return tfma.export.EvalInputReceiver(
features=features,
receiver_tensors=receiver_tensors,
labels=features[LABEL])
tfma_export_dir_with_remediation = tfma.export.export_eval_savedmodel(
estimator=classifier_with_remediation,
export_dir_base=os.path.join(BASE_DIR, 'tfma_eval_model_with_remediation'),
eval_input_receiver_fn=eval_input_receiver_fn)
tfma_eval_result_path_with_remediation = os.path.join(BASE_DIR, 'tfma_eval_result_with_remediation')
slice_selection = 'gender'
compute_confidence_intervals = False
# Define slices that you want the evaluation to run on.
slice_spec = [
tfma.slicer.SingleSliceSpec(), # Overall slice
tfma.slicer.SingleSliceSpec(columns=['gender']),
]
# Add the fairness metrics.
add_metrics_callbacks = [
tfma.post_export_metrics.fairness_indicators(
thresholds=[0.1, 0.3, 0.5, 0.7, 0.9],
labels_key=LABEL
)
]
eval_shared_model_with_remediation = tfma.default_eval_shared_model(
eval_saved_model_path=tfma_export_dir_with_remediation,
add_metrics_callbacks=add_metrics_callbacks)
validate_dataset = tf.data.TFRecordDataset(filenames=[validate_tf_file])
# Run the fairness evaluation.
with beam.Pipeline() as pipeline:
_ = (
pipeline
| 'ReadData' >> beam.io.ReadFromTFRecord(validate_tf_file)
| 'ExtractEvaluateAndWriteResults' >>
tfma.ExtractEvaluateAndWriteResults(
eval_shared_model=eval_shared_model_with_remediation,
slice_spec=slice_spec,
compute_confidence_intervals=compute_confidence_intervals,
output_path=tfma_eval_result_path_with_remediation)
)
eval_result_with_remediation = tfma.load_eval_result(output_path=tfma_eval_result_path_with_remediation)
DEFAULT_MAX_EXAMPLES = 1000
# Load 100000 examples in memory. When first rendered, What-If Tool only
# displays 1000 of these examples to ensure data loads successfully for most
# browser/machine configurations.
def wit_dataset(file, num_examples=100000):
dataset = tf.data.TFRecordDataset(
filenames=[train_tf_file]).take(num_examples)
return [tf.train.Example.FromString(d.numpy()) for d in dataset]
wit_data = wit_dataset(train_tf_file)
# Configure WIT with 1000 examples, the FEATURE_MAP we defined above, and
# a label of 1 for positive (toxic) examples and 0 for negative (nontoxic)
# examples
config_builder = WitConfigBuilder(wit_data[:DEFAULT_MAX_EXAMPLES]).set_estimator_and_feature_spec(
classifier, FEATURE_MAP).set_compare_estimator_and_feature_spec(
classifier_with_remediation, FEATURE_MAP).set_label_vocab(['0', '1']).set_target_feature(LABEL)
wit = WitWidget(config_builder)
# Link Fairness Indicators widget with WIT widget above,
# so that clicking a slice in FI below will load its data in WIT above.
event_handlers={'slice-selected':
wit.create_selection_callback(wit_data, DEFAULT_MAX_EXAMPLES)}
widget_view.render_fairness_indicator(eval_result=eval_result_with_remediation,
slicing_column=slice_selection,
event_handlers=event_handlers)
#@title Alternative: Run this cell only if you intend to skip the What-If tool exercises (see Warning above)
# Link Fairness Indicators widget with WIT widget above,
# so that clicking a slice in FI below will load its data in WIT above.
widget_view.render_fairness_indicator(eval_result=eval_result_with_remediation,
slicing_column=slice_selection)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now that we have this function, we can use the samples to compute the expected price at time $T$.
Step2: Instead of computing the expected price for the first 5 points, i.e. for $T=1,\ldots,5,$ let us instead do it for the first 100. This shows how the expected price drops after a while.
|
<ASSISTANT_TASK:>
Python Code:
# Imports assumed from an earlier cell of the original notebook:
import numpy as np
from numpy.random import randn
import pylab as pl
import seaborn as sns

def pricingSamples(T, number_of_samples,S0=25, SIGMA=1,
R=0.01, K=25):
'''
    Produces samples of the discounted payoff at expiry t=T of a European
    put option (strike K) on the asset, given the parameters above.
'''
z = randn(number_of_samples)
s_T = S0*np.exp((R-0.5*SIGMA**2)*T+SIGMA*np.sqrt(T)*z)
c = np.exp(-R*T)*(K-s_T)*((K-s_T)>0)
return c
def PriceOption(T, num_of_samples=10000,S0=25, SIGMA=1,
R=0.01, K=25):
'''
Computes the expected price at time T given
samples.
'''
samples = pricingSamples(T, num_of_samples)
aver_price = np.mean(samples)
return aver_price
prices = np.zeros(100);
for i in range(1,100):
prices[i] = PriceOption(i);
pale_red = sns.xkcd_rgb['pale red']
pl.plot(prices,color=pale_red)
pl.xlabel('Time',fontsize=20)
pl.ylabel('Expected Price',fontsize=20)
pl.title('Using 10000 samples.', fontsize=20)
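# Added example: a rough Monte Carlo error bar for a single expiry, using the sample
# standard deviation of the simulated discounted payoffs.
samples = pricingSamples(50, 10000)
mc_mean = np.mean(samples)
mc_stderr = np.std(samples) / np.sqrt(len(samples))
print("T=50: price = %.4f +/- %.4f (1 sigma MC error)" % (mc_mean, mc_stderr))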
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we set some parameters for our simulation. By keeping these variables grouped
Step2: At what stage these constants will be used will become clear as we work
Step3: Model specification
Step4: Then we create our molecular species (only one in this simple model we will call A) and our volume system vsys, much as in previous chapters.
Step5: After that we can create our diffusion rule. In STEPS this means creating a steps.model.Diff
Step6: So our complete function is
Step7: Geometry specification
Step8: Our tetrahedral mesh geometry object is very different from our well-mixed geometry
Step9: Note that we do not (and indeed can not) set the volume of the compartment because
Step10: Next we wish to make sure that we include data from around the central tetrahedron,
Step11: Then we fill the rest of our tetidxs array with tetrahedrons chosen at random.
Step12: This example is intended to demonstrate that there is a lot of functionality
Step13: Finally, we return the steps.geom.Tetmesh object required for simulation object construction.
Step14: Simulation with Tetexact
Step15: Now we can create our reaction-diffusion steps.solver.Tetexact solver object, which requires a steps.geom.Tetmesh object to it’s initializing function (if we try to present it with simple well-mixed geometry an error message will appear)
Step16: This solver builds on the functionality of the well-mixed solvers, with methods for manipulating certain regions in the mesh. We will see some examples in the following snippets of code, and a full list of available methods is available in steps.solver.Tetexact. Similarly to our well-mixed simulations we must create the data structures for saving our results. We create the “time points“ array (based on parameters we set at the beginning of our script) and the “results“ array, which in this case will store data for all the tetrahedrons we are sampling
Step17: We are now ready to run a simulation. This will look quite similar to our previous code for running a well-mixed simulation, but this time we are injecting molecules into and recording data from individual tetrahedrons, not the whole compartment (though this is also possible). We first need to find the central tetrahedron index again (as we did not pass this information on from the gen_geom() function, though this is of course an option). We then use solver method steps.solver.Tetexact.setTetCount to set the number of molecules in the central tetrahedron at time t = 0 to the number stored in variable NINJECT (default number in all tetrahedrons is zero set by steps.solver.Tetexact.reset). We will then run our simulation and collect the data in a few lines of code in nested for loops
Step18: That is all the code we require to run our simple diffusion simulation.
Step19: Plotting simulation output
Step20: You may have noticed that we call a function that we have not defined yet, plotanlyt. This function plots the concentration profile given by the analytical solution to the diffusion equation. The function for plotting the analytical solution is provided here, but we will not go through this code in detail. Here we can see why the diffusion constant was stored in variable DCST at the top of our script.
Step21: And that is everything we need to set up and run our simple diffusion simulation and plot the data, alongside the analytical solution for comparison. With this structure, it is intended that the plotting function is called interactively, giving us the chance to visualise a number of different time plots, then save whichever plots we choose. It often makes sense to output the data to a file, then write plotting functions in separate modules that can load the saved data from these files and plot. This requires some knowledge of writing and reading files in Python, but like most operations in Python, can usually be picked up quite quickly.
Step22: Simulation with TetOpSplit
Step23: The model can be run with the exact same set of Python commands as for the previous simulation with Tetexact, using the reference to the TetOpSplit object sim_tos, writing over the data recorded to NumPy array res and recreating res_mean for use by the plotting function
Step24: We can now plot the results obtained with TetOpSplit
|
<ASSISTANT_TASK:>
Python Code:
import math
import numpy
import pylab
import random
import time
import steps.model as smodel
import steps.solver as solvmod
import steps.geom as stetmesh
import steps.rng as srng
# The number of iterations to run
NITER = 10
# The data collection time increment (s)
DT = 0.001
# The simulation endtime (s)
INT = 0.101
# The number of molecules to be injected into the centre
NINJECT = 10000
# The number of tetrahedral elements to sample data from.
SAMPLE = 2000
# The diffusion constant for our diffusing species (m^2/s)
DCST = 20.0e-12
# Array to hold tetrahedron indices (integers)
tetidxs = numpy.zeros(SAMPLE, dtype='int')
# Array to hold tetrahedron radial distances (floats)
tetrads = numpy.zeros(SAMPLE)
mdl = smodel.Model()
A = smodel.Spec('A', mdl)
vsys = smodel.Volsys('cytosolv', mdl)
diff_A = smodel.Diff('diff_A', vsys, A)
diff_A.setDcst(DCST)
def gen_model():
mdl = smodel.Model()
A = smodel.Spec('A', mdl)
vsys = smodel.Volsys('cytosolv', mdl)
diff_A = smodel.Diff('diff_A', vsys, A)
diff_A.setDcst(DCST)
return mdl
import steps.utilities.meshio as smeshio
mesh = smeshio.loadMesh('meshes/sphere_rad10_11Ktets')[0]
# Find the total number of tetrahedrons in the mesh
ntets = mesh.countTets()
# Create a compartment containing all tetrahedrons
comp = stetmesh.TmComp('cyto', mesh, range(ntets))
comp.addVolsys('cytosolv')
# Fetch the central tetrahedron index and store:
ctetidx = mesh.findTetByPoint([0.0, 0.0, 0.0])
tetidxs[0] = ctetidx
# Find the central tetrahedron's four neighbours:
neighbidcs = mesh.getTetTetNeighb(ctetidx)
tetidxs[1],tetidxs[2],tetidxs[3],tetidxs[4] = neighbidcs
# Keep track how many tet indices we have stored so far
stored = 5
# Find the maximum and minimum coordinates of the mesh:
max = mesh.getBoundMax()
min = mesh.getBoundMin()
# Run a loop until we have stored all tet indices we require
while (stored < SAMPLE):
# Fetch 3 random numbers between 0 and 1:
rnx = random.random()
rny = random.random()
rnz = random.random()
# Find the related coordinates in the mesh:
xcrd = min[0] + (max[0]-min[0])*rnx
ycrd = min[1] + (max[1]-min[1])*rny
zcrd = min[2] + (max[2]-min[2])*rnz
# Find the tetrahedron that encompasses this point:
tidx = mesh.findTetByPoint([xcrd, ycrd, zcrd])
# -1 was returned if point is outside the mesh
if (tidx == stetmesh.UNKNOWN_TET): continue
if (tidx not in tetidxs):
tetidxs[stored] = tidx
stored += 1
# Find the barycenter of the central tetrahedron
cbaryc = mesh.getTetBarycenter(ctetidx)
for i in range(SAMPLE):
# Fetch the barycenter of the tetrahedron:
baryc = mesh.getTetBarycenter(tetidxs[i])
# Find the radial distance of this tetrahedron to mesh centre:
r = math.sqrt(math.pow((baryc[0]-cbaryc[0]),2) \
+ math.pow((baryc[1]-cbaryc[1]),2) \
+ math.pow((baryc[2]-cbaryc[2]),2))
# Store the radial distance (in microns):
tetrads[i] = r*1.0e6
import steps.utilities.meshio as smeshio
def gen_geom():
print("Loading mesh...")
mesh = smeshio.loadMesh('meshes/sphere_rad10_11Ktets')[0]
print("Mesh Loaded")
# Find the total number of tetrahedrons in the mesh
ntets = mesh.countTets()
# Create a compartment containing all tetrahedrons
comp = stetmesh.TmComp('cyto', mesh, range(ntets))
comp.addVolsys('cytosolv')
print("Finding tetrahedron samples...")
# Fetch the central tetrahedron index and store:
ctetidx = mesh.findTetByPoint([0.0, 0.0, 0.0])
tetidxs[0] = ctetidx
# Find the central tetrahedron's four neighbours:
neighbidcs = mesh.getTetTetNeighb(ctetidx)
tetidxs[1:5] = neighbidcs
# Keep track how many tet indices we have stored so far
stored = 5
# Find the maximum and minimum coordinates of the mesh
max = mesh.getBoundMax()
min = mesh.getBoundMin()
# Run a loop until we have stored all tet indices we require
while stored < SAMPLE:
# Fetch 3 random numbers between 0 and 1
rnx = random.random()
rny = random.random()
rnz = random.random()
# Find the coordinates in the mesh that these numbers relate to
xcrd = min[0] + (max[0] - min[0]) * rnx
ycrd = min[1] + (max[1] - min[1]) * rny
zcrd = min[2] + (max[2] - min[2]) * rnz
# Find the tetrahedron that encompasses this point.
tidx = mesh.findTetByPoint([xcrd, ycrd, zcrd])
# -1 was returned if point is outside the mesh:
if tidx == stetmesh.UNKNOWN_TET: continue
if tidx not in tetidxs:
tetidxs[stored] = tidx
stored += 1
# Find the barycenter of the central tetrahedron
cbaryc = mesh.getTetBarycenter(ctetidx)
for i in range(SAMPLE):
# Fetch the barycenter of the tetrahedron:
baryc = mesh.getTetBarycenter(int(tetidxs[i]))
# Find the radial distance of this tetrahedron to mesh center:
r = math.sqrt(math.pow((baryc[0] - cbaryc[0]), 2) + \
math.pow((baryc[1] - cbaryc[1]), 2) + \
math.pow((baryc[2] - cbaryc[2]), 2))
# Store the radial distance (in microns):
tetrads[i] = r * 1.0e6
print("Tetrahedron samples found")
return mesh
model = gen_model()
tmgeom = gen_geom()
rng = srng.create('mt19937', 512)
rng.initialize(2903)
sim = solvmod.Tetexact(model, tmgeom, rng)
tpnts = numpy.arange(0.0, INT, DT)
# Find how many "time points" we have
ntpnts = tpnts.shape[0]
# Create the data structure: iterations x time points x tet samples
res = numpy.zeros((NITER, ntpnts, SAMPLE))
# Fetch the index of the tetrahedron at the centre of the mesh
ctetidx = tmgeom.findTetByPoint([0.0, 0.0, 0.0])
# Run NITER number of iterations:
for i in range(NITER):
sim.reset()
# Inject all molecules into the central tet:
sim.setTetCount(ctetidx, 'A', NINJECT)
for j in range(ntpnts):
sim.run(tpnts[j])
# Loop over the tetrahedrons we are saving data for
for k in range(SAMPLE):
# Save the concentration in the tetrahedron, in uM
res[i, j, k] = sim.getTetConc(int(tetidxs[k]), 'A') * 1.0e6
res_mean = numpy.mean(res, axis=0)
%matplotlib inline
import matplotlib.pyplot as plt
def plotres(res_mean, tidx):
if tidx >= INT / DT:
print("Time index is out of range.")
return
plt.scatter(tetrads, res_mean[tidx], s=2)
    plt.xlabel(r'Radial distance of tetrahedron ($\mu$m)')
    plt.ylabel(r'Concentration in tetrahedron ($\mu$M)')
t = tpnts[tidx]
plt.title('Unbounded diffusion. Time: ' + str(t) + 's')
plotanlyt(t)
plt.xlim(0.0, 10.0)
plt.ylim(0.0)
def plotanlyt(t):
segs = 100
anlytconc = numpy.zeros(segs)
radialds = numpy.zeros(segs)
maxrad = 0.0
for i in tetrads:
if i > maxrad: maxrad = i
maxrad *= 1e-6
intervals = maxrad / segs
rad = 0.0
for i in range(segs):
# Find the conc from analytical solution, and convert to mol/L
anlytconc[i] = 1.0e3 * (1 / 6.022e23) * \
((NINJECT / (math.pow((4 * math.pi * DCST * t), 1.5))) * \
(math.exp((-1.0 * (rad * rad)) / (4 * DCST * t))))
radialds[i] = rad * 1e6
rad += intervals
plt.plot(radialds, anlytconc, color='red')
plt.figure(figsize=(10,6))
plotres(res_mean, 100)
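# Optional (sketch): persist the sampled radii, time points and averaged
# concentrations so that separate plotting scripts can reload them later, as
# suggested in the text. The file name 'diffusion_results.npz' is arbitrary.
numpy.savez('diffusion_results.npz', tetrads=tetrads, tpnts=tpnts, res_mean=res_mean)
# Reloading elsewhere would then look like:
# data = numpy.load('diffusion_results.npz'); res_mean = data['res_mean']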
import steps.mpi.solver as mpisolvmod
sim_tos = mpisolvmod.TetOpSplit(model, tmgeom, rng, False, \
[0] * tmgeom.ntets)
# Run NITER number of iterations:
for i in range(NITER):
sim_tos.reset()
# Inject all molecules into the central tet:
sim_tos.setTetCount(ctetidx, 'A', NINJECT)
for j in range(ntpnts):
sim_tos.run(tpnts[j])
# Loop over the tetrahedrons we are saving data for
for k in range(SAMPLE):
# Save the concentration in the tetrahedron, in uM
res[i, j, k] = sim_tos.getTetConc(int(tetidxs[k]), 'A') * 1.0e6
res_mean = numpy.mean(res, axis=0)
plt.figure(figsize=(10, 6))
plotres(res_mean, 100)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will treat the MNIST image $\in \mathcal{R}^{28 \times 28}$ as $28$ sequences of a vector $\mathbf{x} \in \mathcal{R}^{28}$.
Step2: Our network looks like this
Step3: Run!
Step4: What we have done so far is to feed 28 sequences of vectors $ \mathbf{x} \in \mathcal{R}^{28}$.
Step5: What's going on inside the RNN?
Step6: Reshaped inputs
Step7: Feeds
Step8: Each individual input to the LSTM
Step9: Each individual intermediate state
Step10: Actual input to the LSTM (List)
Step11: Output from the LSTM (List)
Step12: Final prediction
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import tensorflow.examples.tutorials.mnist.input_data as input_data
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
print ("Packages imported")
mnist = input_data.read_data_sets("data/", one_hot=True)
trainimgs, trainlabels, testimgs, testlabels \
= mnist.train.images, mnist.train.labels, mnist.test.images, mnist.test.labels
ntrain, ntest, dim, nclasses \
= trainimgs.shape[0], testimgs.shape[0], trainimgs.shape[1], trainlabels.shape[1]
print ("MNIST loaded")
diminput = 28
dimhidden = 128
dimoutput = nclasses
nsteps = 28
weights = {
'hidden': tf.Variable(tf.random_normal([diminput, dimhidden])),
'out': tf.Variable(tf.random_normal([dimhidden, dimoutput]))
}
biases = {
'hidden': tf.Variable(tf.random_normal([dimhidden])),
'out': tf.Variable(tf.random_normal([dimoutput]))
}
def _RNN(_X, _istate, _W, _b, _nsteps, _name):
# 1. Permute input from [batchsize, nsteps, diminput]
# => [nsteps, batchsize, diminput]
_X = tf.transpose(_X, [1, 0, 2])
# 2. Reshape input to [nsteps*batchsize, diminput]
_X = tf.reshape(_X, [-1, diminput])
# 3. Input layer => Hidden layer
_H = tf.matmul(_X, _W['hidden']) + _b['hidden']
    # 4. Split the data into 'nsteps' chunks; the i-th chunk holds the i-th time step for every batch element
_Hsplit = tf.split(0, _nsteps, _H)
# 5. Get LSTM's final output (_LSTM_O) and state (_LSTM_S)
# Both _LSTM_O and _LSTM_S consist of 'batchsize' elements
# Only _LSTM_O will be used to predict the output.
with tf.variable_scope(_name):
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(dimhidden, forget_bias=1.0)
_LSTM_O, _LSTM_S = tf.nn.rnn(lstm_cell, _Hsplit, initial_state=_istate)
# 6. Output
_O = tf.matmul(_LSTM_O[-1], _W['out']) + _b['out']
# Return!
return {
'X': _X, 'H': _H, 'Hsplit': _Hsplit,
'LSTM_O': _LSTM_O, 'LSTM_S': _LSTM_S, 'O': _O
}
print ("Network ready")
learning_rate = 0.001
x = tf.placeholder("float", [None, nsteps, diminput])
istate = tf.placeholder("float", [None, 2*dimhidden])
# state & cell => 2x n_hidden
y = tf.placeholder("float", [None, dimoutput])
myrnn = _RNN(x, istate, weights, biases, nsteps, 'basic')
pred = myrnn['O']
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optm = tf.train.AdamOptimizer(learning_rate).minimize(cost) # Adam Optimizer
accr = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(pred,1), tf.argmax(y,1)), tf.float32))
init = tf.initialize_all_variables()
print ("Network Ready!")
training_epochs = 5
batch_size = 128
display_step = 1
sess = tf.Session()
sess.run(init)
summary_writer = tf.train.SummaryWriter('/tmp/tensorflow_logs', graph=sess.graph)
print ("Start optimization")
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(mnist.train.num_examples/batch_size)
# Loop over all batches
for i in range(total_batch):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
batch_xs = batch_xs.reshape((batch_size, nsteps, diminput))
# Fit training using batch data
feeds = {x: batch_xs, y: batch_ys, istate: np.zeros((batch_size, 2*dimhidden))}
sess.run(optm, feed_dict=feeds)
# Compute average loss
avg_cost += sess.run(cost, feed_dict=feeds)/total_batch
# Display logs per epoch step
if epoch % display_step == 0:
print ("Epoch: %03d/%03d cost: %.9f" % (epoch, training_epochs, avg_cost))
feeds = {x: batch_xs, y: batch_ys, istate: np.zeros((batch_size, 2*dimhidden))}
train_acc = sess.run(accr, feed_dict=feeds)
print (" Training accuracy: %.3f" % (train_acc))
testimgs = testimgs.reshape((ntest, nsteps, diminput))
feeds = {x: testimgs, y: testlabels, istate: np.zeros((ntest, 2*dimhidden))}
test_acc = sess.run(accr, feed_dict=feeds)
print (" Test accuracy: %.3f" % (test_acc))
print ("Optimization Finished.")
# How many sequences will we use?
nsteps2 = 25
# Test with truncated inputs
testimgs = testimgs.reshape((ntest, nsteps, diminput))
testimgs_trucated = np.zeros(testimgs.shape)
testimgs_trucated[:, 28-nsteps2:] = testimgs[:, :nsteps2, :]
feeds = {x: testimgs_trucated, y: testlabels, istate: np.zeros((ntest, 2*dimhidden))}
test_acc = sess.run(accr, feed_dict=feeds)
print (" If we use %d seqs, test accuracy becomes %.3f" % (nsteps2, test_acc))
batch_size = 5
xtest, _ = mnist.test.next_batch(batch_size)
print ("Shape of 'xtest' is %s" % (xtest.shape,))
# Reshape (this will go into the network)
xtest1 = xtest.reshape((batch_size, nsteps, diminput))
print ("Shape of 'xtest1' is %s" % (xtest1.shape,))
feeds = {x: xtest1, istate: np.zeros((batch_size, 2*dimhidden))}
rnnout_X = sess.run(myrnn['X'], feed_dict=feeds)
print ("Shape of 'rnnout_X' is %s" % (rnnout_X.shape,))
rnnout_H = sess.run(myrnn['H'], feed_dict=feeds)
print ("Shape of 'rnnout_H' is %s" % (rnnout_H.shape,))
rnnout_Hsplit = sess.run(myrnn['Hsplit'], feed_dict=feeds)
print ("Type of 'rnnout_Hsplit' is %s" % (type(rnnout_Hsplit)))
print ("Length of 'rnnout_Hsplit' is %s and the shape of each item is %s"
% (len(rnnout_Hsplit), rnnout_Hsplit[0].shape))
rnnout_LSTM_O = sess.run(myrnn['LSTM_O'], feed_dict=feeds)
print ("Type of 'rnnout_LSTM_O' is %s" % (type(rnnout_LSTM_O)))
print ("Length of 'rnnout_LSTM_O' is %s and the shape of each item is %s"
% (len(rnnout_LSTM_O), rnnout_LSTM_O[0].shape))
rnnout_O = sess.run(myrnn['O'], feed_dict=feeds)
print ("Shape of 'rnnout_O' is %s" % (rnnout_O.shape,))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Context-Free Grammars (CFG)
Step3: Notice how we have defined our grammar
Step4: With the grammar1 object already created, we create the parser with the nltk.ChartParser method.
Step5: Once our parser has been created, we can use it. We have the .parse method at our disposal to syntactically analyze any sentence specified as a string of words. Our grammar is quite limited, but we can use it to parse the sentence I shot an elephant in my pajamas. If we print the result of the method, we obtain the parse tree.
Step6: In case you hadn't noticed, the sentence I shot an elephant in my pajamas is ambiguous in English
Step7: Remember that to print the parse tree you have to iterate (with a for loop, for example) over the object returned by the parse() method and use the print function.
Step9: Next I slightly modify my grammar g1 to include a new grammatical category PRO and add some new vocabulary. Compare both examples
Step10: IMPORTANT NOTE about errors and the behavior of parse()
Step11: However, when the parser does not recognize all of the vocabulary (because we use a word that is not defined in the lexicon), the parse() method fails and shows an error message of type ValueError like the following. Look only at the last line
Step13: Keep this in mind when debugging your code.
Step15: Let's test whether it is able to parse different sentences in Spanish. To make it more fun, we will store several sentences separated by a newline (represented by the metacharacter \n) in a list of strings called oraciones. We iterate over those sentences, print them, then break them into lists of words (with the .split() method) and print the result of analyzing them with our parser.
Step18: Let's increase the coverage of our grammar so that it is able to recognize and parse coordinated sentences. To do so, we modify the rule in which we define the sentence, adding a recursive definition that defines a sentence as the sequence of a sentence (O) followed by a conjunction (Conj) and another sentence (O). Finally, we also add some new vocabulary
Step21: Remember that a grammar is not a program
Step24: Can we extend g4 so that it recognizes subordinate clauses introduced by verbs of speech or thought? I mean sentences of the type
|
<ASSISTANT_TASK:>
Python Code:
import nltk
g1 = """
S -> NP VP
NP -> Det N | Det N PP | 'I'
VP -> V NP | VP PP
PP -> P NP
Det -> 'an' | 'my'
N -> 'elephant' | 'pajamas'
V -> 'shot'
P -> 'in'
"""
grammar1 = nltk.CFG.fromstring(g1)
analyzer = nltk.ChartParser(grammar1)
oracion = "I shot an elephant in my pajamas".split()
# store all the possible parse trees in trees
trees = analyzer.parse(oracion)
for tree in trees:
print(tree)
print(analyzer.parse_one(oracion))
print(analyzer.parse(oracion))
g1v2 = """
S -> NP VP
NP -> Det N | Det N PP | PRO
VP -> V NP | VP PP
PP -> P NP
Det -> 'an' | 'my'
PRO -> 'I' | 'you'
N -> 'elephant' | 'pajamas'
V -> 'shot'
P -> 'in'
"""
grammar1v2 = nltk.CFG.fromstring(g1v2)
analyzer1v2 = nltk.ChartParser(grammar1v2)
# iterate over the structure returned by parse()
for tree in analyzer1v2.parse(oracion):
print(tree)
print("\n", "-------------------------------", "\n")
for tree in analyzer1v2.parse("you shot my elephant".split()):
print(tree)
for tree in analyzer.parse("shot an pajamas elephant my I".split()):
print("El análisis sintáctico es el siguiente")
print(tree)
for tree in analyzer.parse("our time is running out".split()):
print("El análisis sintáctico es el siguiente")
print(tree)
g2 = """
O -> SN SV
SN -> Det N | Det N Adj | Det Adj N | NProp | SN SP
SV -> V | V SN | V SP | V SN SP
SP -> Prep SN
Det -> 'el' | 'la' | 'un' | 'una'
N -> 'niño' | 'niña' | 'manzana' | 'pera' | 'cuchillo'
NProp -> 'Juan' | 'Ana' | 'Perico'
Adj -> 'bonito' | 'pequeña' | 'verde'
V -> 'come' | 'salta' | 'pela' | 'persigue'
Prep -> 'de' | 'con' | 'desde' | 'a'
"""
grammar2 = nltk.CFG.fromstring(g2)
analizador2 = nltk.ChartParser(grammar2)
oraciones = """Ana salta
la niña pela una manzana verde con el cuchillo
Juan come un cuchillo bonito desde el niño
un manzana bonito salta el cuchillo desde el niño verde
el cuchillo verde persigue a la pequeña manzana de Ana
el cuchillo verde persigue a Ana""".split("\n")
for oracion in oraciones:
print(oracion)
for tree in analizador2.parse(oracion.split()):
print(tree, "\n")
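# Quick extra (sketch): count how many distinct parses each sentence receives,
# which makes any structural ambiguity in the grammar explicit. Out-of-vocabulary
# words raise a ValueError, so we catch it and report it instead.
for oracion in oraciones:
    try:
        n = len(list(analizador2.parse(oracion.split())))
        print(oracion, "->", n, "parse(s)")
    except ValueError as e:
        print(oracion, "-> out-of-vocabulary word:", e)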
g3 = """
O -> SN SV | O Conj O
SN -> Det N | Det N Adj | Det Adj N | NProp | SN SP
SV -> V | V SN | V SP | V SN SP
SP -> Prep SN
Det -> 'el' | 'la' | 'un' | 'una'
N -> 'niño' | 'niña' | 'manzana' | 'pera' | 'cuchillo'
NProp -> 'Juan' | 'Ana' | 'Perico'
Adj -> 'bonito' | 'pequeña' | 'verde'
V -> 'come' | 'salta' | 'pela' | 'persigue'
Prep -> 'de' | 'con' | 'desde' | 'a'
Conj -> 'y' | 'pero'
"""
# Now notice how we create the parser in a single step
# compare it with the previous examples
analizador3 = nltk.ChartParser(nltk.CFG.fromstring(g3))
for tree in analizador3.parse("""la manzana salta y el niño come pero el cuchillo
    verde persigue a la pequeña manzana de Ana""".split()):
print(tree)
# careful: they are simple, but they contain impersonal sentences, copulative verbs, elided subjects
oraciones = """mañana es viernes
hoy es jueves
tenéis sueño
hace frío
Pepe hace sueño""".split("\n")
# write your grammar in this cell
g4 = """
"""
analyzer4 = nltk.ChartParser(nltk.CFG.fromstring(g4))
# how well does it work?
for oracion in oraciones:
print(oracion)
for tree in analyzer4.parse(oracion.split()):
print(tree, "\n")
oraciones = """Pepe cree que mañana es viernes
María dice que Pepe cree que mañana es viernes""".split("\n")
# write the extension of your grammar in this cell
g5 = """
"""
analyzer5 = nltk.ChartParser(nltk.CFG.fromstring(g5))
# how well does it work?
for oracion in oraciones:
print(oracion)
for tree in analyzer5.parse(oracion.split()):
print(tree, "\n")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction
Step2: Wrapping Castep with f90wrap - CasPyTep
Step3: Single point calculation
Step4: Interactive introspection
Step5: Postprocessing and Visualisation
Step6: Updating data inside a running Castep instance
Step7: Example 2 - testing new algorithms
Step8: Example 3 - convergence testing
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
#Customize default plotting style
%matplotlib inline
import seaborn as sns
sns.set_context('talk')
plt.rcParams["figure.figsize"] = (10, 8)
import os
from ase.build import bulk
from gpaw import GPAW, restart
if not os.path.exists('si-vac.gpw'):
si = bulk('Si', cubic=True)
del si[0] # create a vacancy
gpaw = GPAW(h=0.15)
si.set_calculator(gpaw)
si.get_potential_energy()
gpaw.write('si-vac.gpw')
si, gpaw = restart('si-vac.gpw')
rho = gpaw.get_pseudo_density()
plt.plot(si.positions[:, 0], si.positions[:, 1], 'r.', ms=50)
plt.imshow(rho[:,:,0], extent=[0, si.cell[0,0], 0, si.cell[1,1]]);
import bader
bdr = bader.bader(si, rho)
print('ionchg:', bdr.ionchg)
## TAB-complete to introspect inside `bdr` Fortran type
# collect Bader volumes associated with atom #5
atom = 4
rho3 = np.zeros_like(rho)
for v in (bdr.nnion == atom+1).nonzero()[0]:
rho3[bdr.volnum == v+1] = rho[bdr.volnum == v+1]
# write a CUBE file to allow visualisation of density (FIXME: can this be avoided?)
from ase.io.cube import write_cube
with open('rho3.cube', 'w') as f:
write_cube(f, si, rho3)
import nglview
v = nglview.show_ase(si, gui=True)
v.add_representation('unitcell')
v.add_component('rho3.cube')
v.component_1.update_surface()
v
import numpy as np
import caspytep
## put the cursor after the dot in the line below, uncomment it, and press TAB for autocompletion
#caspytep.
#caspytep.cell.unit_cell.
## append a ? to access documentation strings
caspytep.model.model_wave_read?
#caspytep.cell.cell_read?
from ase.build import bulk
atoms = bulk('Si', cubic=True)
calc = caspytep.calculator.CasPyTep(atoms=atoms)
atoms.set_calculator(calc)
e = atoms.get_potential_energy()
f = atoms.get_forces()
print(f'energy: {e:.3f} eV')
print(f'forces: {f}')
#calc.model.eigenvalues
#calc.model.cell.ionic_positions
#calc.model.cell.ionic_positions[...,0]
#calc.model.wvfn.beta_phi
#calc.model.cell.ionic_velocities.T
from ase.units import Hartree
calc.parameters.cut_off_energy
from ase.units import Bohr
p = calc.model.cell.ionic_positions.copy()
p = p[:, :, 0] # first species only
p = calc.model.cell.real_lattice.T @ p
xi, yi, zi = p * Bohr
plt.scatter(xi, yi, s=200, c='r')
plt.axis([0, atoms.cell[0,0], 0, atoms.cell[1,1]])
plt.axis("scaled")
# overlay the charge density
plt.scatter(xi, yi, s=200, c='r')
den = calc.model.den.real_charge.copy()
basis = caspytep.basis.get_current_basis()
den3 = (den.reshape((basis.ngx, basis.ngy,
basis.ngz), order='F') /
basis.total_grid_points)
plt.imshow(den3[:, :, basis.ngz//2],
extent=[0, atoms.cell[0,0], 0, atoms.cell[1,1]]);
from ase.build import bulk
from ase.optimize import LBFGS
from caspytep.calculator import CasPyTep
atoms = bulk("Si", cubic=True)
calc = CasPyTep(atoms=atoms)
atoms.set_calculator(calc)
atoms.rattle(0.01)
a0 = atoms.copy()
opt = LBFGS(atoms)
opt.run(fmax=0.1)
print(atoms.get_potential_energy())
from ase.optimize.precon import PreconLBFGS
from caspytep.calculator import CasPyTep
atoms = a0.copy() # restart from same randomised positions as above
atoms.set_calculator(CasPyTep(atoms=atoms))
opt = PreconLBFGS(atoms, precon='Exp')
opt.run(fmax=0.05)
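# Quick comparison (sketch, assumes the standard ASE Dynamics API): report how
# many geometry steps the preconditioned optimizer needed, to compare against
# the plain LBFGS run above.
print(f'PreconLBFGS converged in {opt.get_number_of_steps()} steps')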
from ase.build import bulk
from caspytep.calculator import CasPyTep
calc = CasPyTep(atoms=bulk("Si")) # 2-atom Si system
energy_tol = 1e-4
current_params = caspytep.parameters.get_current_params()
current_params.cut_off_energy = 7.0
cutoffs = []
energy = []
while True:
caspytep.basis.basis_initialise(current_params.cut_off_energy)
current_params.fine_gmax = (current_params.fine_grid_scale *
np.sqrt(2.0*current_params.cut_off_energy))
caspytep.ion.ion_real_initialise()
model = caspytep.model.model_state()
model.converged = caspytep.electronic.electronic_minimisation(model)
current_params.cut_off_energy *= 1.1
print('cutoff %.2f energy %.5f' % (current_params.cut_off_energy,
model.total_energy))
cutoffs.append(current_params.cut_off_energy)
energy.append(model.total_energy)
if len(energy) > 2 and abs(energy[-1] - energy[-2]) < energy_tol:
print('converged at cutoff', cutoffs[-1])
break
from ase.units import Hartree
ecut = np.array(cutoffs) * Hartree
ediff = np.array(energy) * Hartree
ediff -= ediff[-1]
plt.plot(ecut, abs(ediff) * 1e3 / caspytep.cell.current_cell.num_ions, 'o-')
plt.xlabel('Cutoff / eV')
plt.ylabel('Energy Error / meV')
plt.axhline(1, linestyle='--');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Principal Component Analysis
Step3: (1a) Interpreting PCA
Step4: (1b) Covariance Matrix
Step5: (1c) Covariance Function
Step7: (1d) Eigenvalues and Eigenvectors
Step8: (1e) PCA scores
Step10: Part 2
Step11: (2b) PCA on dataRandom
Step13: Visualization 2
Step15: (2c) Variance Explained
Step19: Part 3
Step21: (3b) PCA of the LabeledPoint
Step22: (3c) Determining the value of k
Step23: (3d) Linear Regression
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
def preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999',
gridWidth=1.0):
    """Template for generating the plot layout."""
plt.close()
fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white')
ax.axes.tick_params(labelcolor='#999999', labelsize='10')
for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]:
axis.set_ticks_position('none')
axis.set_ticks(ticks)
axis.label.set_color('#999999')
if hideLabels: axis.set_ticklabels([])
plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-')
map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right'])
return fig, ax
def create2DGaussian(mn, sigma, cov, n):
    """Randomly sample points from a two-dimensional Gaussian distribution"""
np.random.seed(142)
return np.random.multivariate_normal(np.array([mn, mn]), np.array([[sigma, cov], [cov, sigma]]), n)
dataRandom = create2DGaussian(mn=50, sigma=1, cov=0, n=100)
# generate layout and plot data
fig, ax = preparePlot(np.arange(46, 55, 2), np.arange(46, 55, 2))
ax.set_xlabel(r'Simulated $x_1$ values'), ax.set_ylabel(r'Simulated $x_2$ values')
ax.set_xlim(45, 54.5), ax.set_ylim(45, 54.5)
plt.scatter(dataRandom[:,0], dataRandom[:,1], s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75)
pass
dataCorrelated = create2DGaussian(mn=50, sigma=1, cov=.9, n=100)
# generate layout and plot data
fig, ax = preparePlot(np.arange(46, 55, 2), np.arange(46, 55, 2))
ax.set_xlabel(r'Simulated $x_1$ values'), ax.set_ylabel(r'Simulated $x_2$ values')
ax.set_xlim(45.5, 54.5), ax.set_ylim(45.5, 54.5)
plt.scatter(dataCorrelated[:,0], dataCorrelated[:,1], s=14**2, c='#d6ebf2',
edgecolors='#8cbfd0', alpha=0.75)
pass
# EXERCISE
correlatedData = sc.parallelize(dataCorrelated)
meanCorrelated = correlatedData.<COMPLETAR>
correlatedDataZeroMean = correlatedData.<COMPLETAR>
print meanCorrelated
print correlatedData.take(1)
print correlatedDataZeroMean.take(1)
# TEST Interpreting PCA (1a)
from test_helper import Test
Test.assertTrue(np.allclose(meanCorrelated, [49.95739037, 49.97180477]),
'incorrect value for meanCorrelated')
Test.assertTrue(np.allclose(correlatedDataZeroMean.take(1)[0], [-0.28561917, 0.10351492]),
'incorrect value for correlatedDataZeroMean')
# EXERCISE
# Compute the covariance matrix using outer products and correlatedDataZeroMean
correlatedCov = (correlatedDataZeroMean
.<COMPLETAR>
.<COMPLETAR>
)/correlatedDataZeroMean.count()
print correlatedCov
# TEST Sample covariance matrix (1b)
covResult = [[ 0.99558386, 0.90148989], [0.90148989, 1.08607497]]
Test.assertTrue(np.allclose(covResult, correlatedCov), 'incorrect value for correlatedCov')
# EXERCISE
def estimateCovariance(data):
    """Compute the covariance matrix for a given rdd.

    Note:
        The multi-dimensional covariance array should be calculated using outer products. Don't
        forget to normalize the data by first subtracting the mean.

    Args:
        data (RDD of np.ndarray): An `RDD` consisting of NumPy arrays.

    Returns:
        np.ndarray: A multi-dimensional array where the number of rows and columns both equal the
            length of the arrays in the input `RDD`.
    """
meanVal = data.<COMPLETAR>
return (data
.<COMPLETAR>
.<COMPLETAR>
.<COMPLETAR>
)/data.count()
correlatedCovAuto= estimateCovariance(correlatedData)
print correlatedCovAuto
# TEST Covariance function (1c)
correctCov = [[ 0.99558386, 0.90148989], [0.90148989, 1.08607497]]
Test.assertTrue(np.allclose(correctCov, correlatedCovAuto),
'incorrect value for correlatedCovAuto')
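# Sanity check (sketch, assumes the <COMPLETAR> gaps above have been filled in):
# the RDD-based estimate should match NumPy's population covariance (bias=True
# normalizes by N, like our function) computed on the collected data.
localData = np.array(correlatedData.collect())
print(np.allclose(correlatedCovAuto, np.cov(localData.T, bias=True)))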
# EXERCISE
from numpy.linalg import eigh
# Calculate the eigenvalues and eigenvectors from correlatedCovAuto
eigVals, eigVecs = <COMPLETAR>
print 'eigenvalues: {0}'.format(eigVals)
print '\neigenvectors: \n{0}'.format(eigVecs)
# Use np.argsort to find the top eigenvector based on the largest eigenvalue
inds = <COMPLETAR>
topComponent = <COMPLETAR>
print '\ntop principal component: {0}'.format(topComponent)
# TEST Eigendecomposition (1d)
def checkBasis(vectors, correct):
return np.allclose(vectors, correct) or np.allclose(np.negative(vectors), correct)
Test.assertTrue(checkBasis(topComponent, [0.68915649, 0.72461254]),
'incorrect value for topComponent')
# EXERCISE
# Use the topComponent and the data from correlatedData to generate PCA scores
correlatedDataScores = correlatedData.<COMPLETAR>
print 'one-dimensional data (first three):\n{0}'.format(np.asarray(correlatedDataScores.take(3)))
# TEST PCA Scores (1e)
firstThree = [70.51682806, 69.30622356, 71.13588168]
Test.assertTrue(checkBasis(correlatedDataScores.take(3), firstThree),
'incorrect value for correlatedDataScores')
# EXERCISE
def pca(data, k=2):
    """Computes the top `k` principal components, corresponding scores, and all eigenvalues.

    Note:
        All eigenvalues should be returned in sorted order (largest to smallest). `eigh` returns
        each eigenvectors as a column. This function should also return eigenvectors as columns.

    Args:
        data (RDD of np.ndarray): An `RDD` consisting of NumPy arrays.
        k (int): The number of principal components to return.

    Returns:
        tuple of (np.ndarray, RDD of np.ndarray, np.ndarray): A tuple of (eigenvectors, `RDD` of
            scores, eigenvalues). Eigenvectors is a multi-dimensional array where the number of
            rows equals the length of the arrays in the input `RDD` and the number of columns equals
            `k`. The `RDD` of scores has the same number of rows as `data` and consists of arrays
            of length `k`. Eigenvalues is an array of length d (the number of features).
    """
<COMPLETAR>
# Return the `k` principal components, `k` scores, and all eigenvalues
return (vecs, data.map(lambda x: x.dot(vecs)), vals)
# Run pca on correlatedData with k = 2
topComponentsCorrelated, correlatedDataScoresAuto, eigenvaluesCorrelated = pca(correlatedData, 2)
# Note that the 1st principal component is in the first column
print 'topComponentsCorrelated: \n{0}'.format(topComponentsCorrelated)
print ('\ncorrelatedDataScoresAuto (first three): \n{0}'
.format('\n'.join(map(str, correlatedDataScoresAuto.take(3)))))
print '\neigenvaluesCorrelated: \n{0}'.format(eigenvaluesCorrelated)
# Create a higher dimensional test set
pcaTestData = sc.parallelize([np.arange(x, x + 4) for x in np.arange(0, 20, 4)])
componentsTest, testScores, eigenvaluesTest = pca(pcaTestData, 3)
print '\npcaTestData: \n{0}'.format(np.array(pcaTestData.collect()))
print '\ncomponentsTest: \n{0}'.format(componentsTest)
print ('\ntestScores (first three): \n{0}'
.format('\n'.join(map(str, testScores.take(3)))))
print '\neigenvaluesTest: \n{0}'.format(eigenvaluesTest)
# TEST PCA Function (2a)
Test.assertTrue(checkBasis(topComponentsCorrelated.T,
[[0.68915649, 0.72461254], [-0.72461254, 0.68915649]]),
'incorrect value for topComponentsCorrelated')
firstThreeCorrelated = [[70.51682806, 69.30622356, 71.13588168], [1.48305648, 1.5888655, 1.86710679]]
Test.assertTrue(np.allclose(firstThreeCorrelated,
np.vstack(np.abs(correlatedDataScoresAuto.take(3))).T),
'incorrect value for firstThreeCorrelated')
Test.assertTrue(np.allclose(eigenvaluesCorrelated, [1.94345403, 0.13820481]),
'incorrect values for eigenvaluesCorrelated')
topComponentsCorrelatedK1, correlatedDataScoresK1, eigenvaluesCorrelatedK1 = pca(correlatedData, 1)
Test.assertTrue(checkBasis(topComponentsCorrelatedK1.T, [0.68915649, 0.72461254]),
'incorrect value for components when k=1')
Test.assertTrue(np.allclose([70.51682806, 69.30622356, 71.13588168],
np.vstack(np.abs(correlatedDataScoresK1.take(3))).T),
'incorrect value for scores when k=1')
Test.assertTrue(np.allclose(eigenvaluesCorrelatedK1, [1.94345403, 0.13820481]),
'incorrect values for eigenvalues when k=1')
Test.assertTrue(checkBasis(componentsTest.T[0], [ .5, .5, .5, .5]),
'incorrect value for componentsTest')
Test.assertTrue(np.allclose(np.abs(testScores.first()[0]), 3.),
'incorrect value for testScores')
Test.assertTrue(np.allclose(eigenvaluesTest, [ 128, 0, 0, 0 ]), 'incorrect value for eigenvaluesTest')
# EXERCISE
randomData = sc.parallelize(dataRandom)
# Use pca on randomData
topComponentsRandom, randomDataScoresAuto, eigenvaluesRandom = <COMPLETAR>
print 'topComponentsRandom: \n{0}'.format(topComponentsRandom)
print ('\nrandomDataScoresAuto (first three): \n{0}'
.format('\n'.join(map(str, randomDataScoresAuto.take(3)))))
print '\neigenvaluesRandom: \n{0}'.format(eigenvaluesRandom)
# TEST PCA on `dataRandom` (2b)
Test.assertTrue(checkBasis(topComponentsRandom.T,
[[-0.2522559 , 0.96766056], [-0.96766056, -0.2522559]]),
'incorrect value for topComponentsRandom')
firstThreeRandom = [[36.61068572, 35.97314295, 35.59836628],
[61.3489929 , 62.08813671, 60.61390415]]
Test.assertTrue(np.allclose(firstThreeRandom, np.vstack(np.abs(randomDataScoresAuto.take(3))).T),
'incorrect value for randomDataScoresAuto')
Test.assertTrue(np.allclose(eigenvaluesRandom, [1.4204546, 0.99521397]),
'incorrect value for eigenvaluesRandom')
def projectPointsAndGetLines(data, components, xRange):
    """Project original data onto first component and get line details for top two components."""
topComponent= components[:, 0]
slope1, slope2 = components[1, :2] / components[0, :2]
means = data.mean()[:2]
demeaned = data.map(lambda v: v - means)
projected = demeaned.map(lambda v: (v.dot(topComponent) /
topComponent.dot(topComponent)) * topComponent)
remeaned = projected.map(lambda v: v + means)
x1,x2 = zip(*remeaned.collect())
lineStartP1X1, lineStartP1X2 = means - np.asarray([xRange, xRange * slope1])
lineEndP1X1, lineEndP1X2 = means + np.asarray([xRange, xRange * slope1])
lineStartP2X1, lineStartP2X2 = means - np.asarray([xRange, xRange * slope2])
lineEndP2X1, lineEndP2X2 = means + np.asarray([xRange, xRange * slope2])
return ((x1, x2), ([lineStartP1X1, lineEndP1X1], [lineStartP1X2, lineEndP1X2]),
([lineStartP2X1, lineEndP2X1], [lineStartP2X2, lineEndP2X2]))
((x1, x2), (line1X1, line1X2), (line2X1, line2X2)) = \
projectPointsAndGetLines(correlatedData, topComponentsCorrelated, 5)
# generate layout and plot data
fig, ax = preparePlot(np.arange(46, 55, 2), np.arange(46, 55, 2), figsize=(7, 7))
ax.set_xlabel(r'Simulated $x_1$ values'), ax.set_ylabel(r'Simulated $x_2$ values')
ax.set_xlim(45.5, 54.5), ax.set_ylim(45.5, 54.5)
plt.plot(line1X1, line1X2, linewidth=3.0, c='#8cbfd0', linestyle='--')
plt.plot(line2X1, line2X2, linewidth=3.0, c='#d6ebf2', linestyle='--')
plt.scatter(dataCorrelated[:,0], dataCorrelated[:,1], s=14**2, c='#d6ebf2',
edgecolors='#8cbfd0', alpha=0.75)
plt.scatter(x1, x2, s=14**2, c='#62c162', alpha=.75)
pass
((x1, x2), (line1X1, line1X2), (line2X1, line2X2)) = \
projectPointsAndGetLines(randomData, topComponentsRandom, 5)
# generate layout and plot data
fig, ax = preparePlot(np.arange(46, 55, 2), np.arange(46, 55, 2), figsize=(7, 7))
ax.set_xlabel(r'Simulated $x_1$ values'), ax.set_ylabel(r'Simulated $x_2$ values')
ax.set_xlim(45.5, 54.5), ax.set_ylim(45.5, 54.5)
plt.plot(line1X1, line1X2, linewidth=3.0, c='#8cbfd0', linestyle='--')
plt.plot(line2X1, line2X2, linewidth=3.0, c='#d6ebf2', linestyle='--')
plt.scatter(dataRandom[:,0], dataRandom[:,1], s=14**2, c='#d6ebf2',
edgecolors='#8cbfd0', alpha=0.75)
plt.scatter(x1, x2, s=14**2, c='#62c162', alpha=.75)
pass
# EXERCISE
def varianceExplained(data, k=1):
    """Calculate the fraction of variance explained by the top `k` eigenvectors.

    Args:
        data (RDD of np.ndarray): An RDD that contains NumPy arrays which store the
            features for an observation.
        k: The number of principal components to consider.

    Returns:
        float: A number between 0 and 1 representing the percentage of variance explained
            by the top `k` eigenvectors.
    """
<COMPLETAR>
return eigenvalues[:k].sum()/eigenvalues.sum()
varianceRandom1 = varianceExplained(randomData, 1)
varianceCorrelated1 = varianceExplained(correlatedData, 1)
varianceRandom2 = varianceExplained(randomData, 2)
varianceCorrelated2 = varianceExplained(correlatedData, 2)
print ('Percentage of variance explained by the first component of randomData: {0:.1f}%'
.format(varianceRandom1 * 100))
print ('Percentage of variance explained by both components of randomData: {0:.1f}%'
.format(varianceRandom2 * 100))
print ('\nPercentage of variance explained by the first component of correlatedData: {0:.1f}%'.
format(varianceCorrelated1 * 100))
print ('Percentage of variance explained by both components of correlatedData: {0:.1f}%'
.format(varianceCorrelated2 * 100))
# TEST Variance explained (2d)
Test.assertTrue(np.allclose(varianceRandom1, 0.588017172066), 'incorrect value for varianceRandom1')
Test.assertTrue(np.allclose(varianceCorrelated1, 0.933608329586),
'incorrect value for varianceCorrelated1')
Test.assertTrue(np.allclose(varianceRandom2, 1.0), 'incorrect value for varianceRandom2')
Test.assertTrue(np.allclose(varianceCorrelated2, 1.0), 'incorrect value for varianceCorrelated2')
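# Sketch (assumes the <COMPLETAR> gap in varianceExplained has been filled in):
# visualize how explained variance accumulates with k on the correlated data;
# with only two features it must saturate at k=2.
ks = [1, 2]
fracs = [varianceExplained(correlatedData, k) for k in ks]
plt.plot(ks, fracs, 'o-')
plt.xlabel('Number of components k')
plt.ylabel('Fraction of variance explained')
pass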
baseDir = os.path.join('Data')
inputPath = os.path.join('Aula04', 'millionsong.txt')
fileName = os.path.join(baseDir, inputPath)
numPartitions = 2
rawData = sc.textFile(fileName, numPartitions)
print rawData.first()
from pyspark.mllib.regression import LabeledPoint
def parsePoint(line):
    """Converts a comma separated unicode string into a `LabeledPoint`.

    Args:
        line (unicode): Comma separated unicode string where the first element is the label and the
            remaining elements are features.

    Returns:
        LabeledPoint: The line is converted into a `LabeledPoint`, which consists of a label and
            features.
    """
Point = map(float,line.split(','))
return LabeledPoint(Point[0]-1922,Point[1:])
baselineRL = 17.017
baselineInteract = 15.690
millionRDD = rawData.map(parsePoint)
print millionRDD.take(1)
def squaredError(label, prediction):
    """Calculates the squared error for a single prediction.

    Args:
        label (float): The correct value for this observation.
        prediction (float): The predicted value for this observation.

    Returns:
        float: The difference between the `label` and `prediction` squared.
    """
return np.square(label-prediction)
def calcRMSE(labelsAndPreds):
    """Calculates the root mean squared error for an `RDD` of (label, prediction) tuples.

    Args:
        labelsAndPred (RDD of (float, float)): An `RDD` consisting of (label, prediction) tuples.

    Returns:
        float: The square root of the mean of the squared errors.
    """
return np.sqrt(labelsAndPreds.map(lambda rec: squaredError(rec[0],rec[1])).mean())
def pcaLP(data, k=2):
    """Computes the top `k` principal components, corresponding scores, and all eigenvalues.

    Note:
        All eigenvalues should be returned in sorted order (largest to smallest). `eigh` returns
        each eigenvectors as a column. This function should also return eigenvectors as columns.

    Args:
        data (RDD of np.ndarray): An `RDD` consisting of NumPy arrays.
        k (int): The number of principal components to return.

    Returns:
        tuple of (np.ndarray, RDD of np.ndarray, np.ndarray): A tuple of (eigenvectors, `RDD` of
            scores, eigenvalues). Eigenvectors is a multi-dimensional array where the number of
            rows equals the length of the arrays in the input `RDD` and the number of columns equals
            `k`. The `RDD` of scores has the same number of rows as `data` and consists of arrays
            of length `k`. Eigenvalues is an array of length d (the number of features).
    """
cov = estimateCovariance(data.map(lambda x: x.features))
eigVals, eigVecs = eigh(cov)
inds = np.argsort(-eigVals)
vecs = eigVecs[:,inds[:k]]
vals = eigVals[inds[:cov.shape[0]]]
# Return the `k` principal components, `k` scores, and all eigenvalues
return data.map(lambda x: LabeledPoint(x.label,x.features.dot(vecs)))
for k in range(1,10):
varexp = varianceExplained(millionRDD.map(lambda x: x.features), k)
print 'Variation explained by {} components is {}'.format(k,varexp)
from pyspark.mllib.regression import LinearRegressionWithSGD
# Values to use when training the linear regression model
numIters = 2000 # iterations
alpha = 1.0 # step
miniBatchFrac = 1.0 # miniBatchFraction
reg = 1e-1 # regParam
regType = None#'l2' # regType
useIntercept = True # intercept
# TODO: Replace <FILL IN> with appropriate code
# Run pcaLP on millionRDD with k = 2
pcaMillionRDD = pcaLP(millionRDD, 2)
weights = [.8, .1, .1]
seed = 42
parsedTrainData, parsedValData, parsedTestData = pcaMillionRDD.randomSplit(weights, seed)
pcaModel = LinearRegressionWithSGD.train(parsedTrainData, iterations = numIters, step = alpha, miniBatchFraction = 1.0,
regParam=reg,regType=regType, intercept=useIntercept)
labelsAndPreds = parsedValData.map(lambda lp: (lp.label, pcaModel.predict(lp.features)))
rmseValLPCA2 = calcRMSE(labelsAndPreds)
pcaMillionRDD = pcaLP(millionRDD, 6)
weights = [.8, .1, .1]
seed = 42
parsedTrainData, parsedValData, parsedTestData = pcaMillionRDD.randomSplit(weights, seed)
pcaModel = LinearRegressionWithSGD.train(parsedTrainData, iterations = numIters, step = alpha, miniBatchFraction = 1.0,
regParam=reg,regType=regType, intercept=useIntercept)
labelsAndPreds = parsedValData.map(lambda lp: (lp.label, pcaModel.predict(lp.features)))
rmseValLPCA6 = calcRMSE(labelsAndPreds)
pcaMillionRDD = pcaLP(millionRDD, 8)
weights = [.8, .1, .1]
seed = 42
parsedTrainData, parsedValData, parsedTestData = pcaMillionRDD.randomSplit(weights, seed)
pcaModel = LinearRegressionWithSGD.train(parsedTrainData, iterations = numIters, step = alpha, miniBatchFraction = 1.0,
regParam=reg,regType=regType, intercept=useIntercept)
labelsAndPreds = parsedValData.map(lambda lp: (lp.label, pcaModel.predict(lp.features)))
rmseValLPCA8 = calcRMSE(labelsAndPreds)
print ('Validation RMSE:\n\tLinear Regression (orig.) = {0:.3f}\n\tLinear Regression Interact = {1:.3f}' +
'\n\tLR PCA (k=2) = {2:.3f}\n\tLR PCA (k=6) = {3:.3f}\n\tLR PCA (k=8) = {4:.3f}'
).format(baselineRL, baselineInteract, rmseValLPCA2, rmseValLPCA6, rmseValLPCA8)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: At this point anything above python 3.5 should be ok.
Step2: Notes
Step3: Notes
Step4: Notes
Step5: Notes
Step6: Notes
Step7: Notes
Step8: Notes
Step9: Notes
Step10: Notes
Step11: Notes
|
<ASSISTANT_TASK:>
Python Code:
import sys
print(sys.version)
print(2 / 3)
print(2 // 3)
print(2 - 3)
print(2 * 3)
print(2 ** 3)
print(12 % 5)
print("Welcome" + " to the " + "BornAgain" + " School")
print([1, 2, 3, 4] + [5, 6, 7, 8])
print((1, 2, 3, 4) + (5, 6, 7, 8))
print(5 < 6)
print(5 >= 6)
print(5 <= 6)
print(5 == 6)
print(5 in [1, 2, 3, 4, 5, 6, 7])
print(5 < 6 and 5 >= 6)
print(5 >= 6 or 5 < 6)
print(not 5 == 6)
a = 2.0
b = 3.0
c = 4.0
print(type(a))
#print(float.__dict__)
print(a / b)
print(a.__truediv__(b))
print(c >= b)
print(c.__ge__(b))
a = [
0.0,
1.0,
6.2,
5.333,
9,
4,
3.4]
print("a is equal to ", a)
print("a is of type: ", type(a))
print("a[0] is equal to ", a[0], "and is of type: ", type(a[0]))
print("a[4] is equal to ", a[4], "and is of type: ", type(a[4]))
b = a[2:4]
print(b)
b = a[:4]
print(b)
b = a[2:]
print(b)
b = a[::-1]
print(b)
b = a[2:6:2]
print(b)
iterator = iter(a)
print(type(iterator))
print(next(iterator))
print(next(iterator))
print(next(iterator))
print(next(iterator))
for element in a:
print(element, type(element))
b = 4
for element in a:
if element < b:
print(element, "is smaller than ", b)
elif element == b:
print(element, "is equal to ", b)
else:
print(element, "is bigger than ",b)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The generative probabilistic model
Step2: The true model is shown as the thick grey line, and the effect of the underestimated error bars is clearly visible in the scatter of the synthetic data points.
Step3: This figure shows the least-squares estimate of the line parameters as a dashed line.
Step4: In this code snippet, you'll notice that we're using the logarithm of $f$ instead of $f$ itself; since $f$ is a positive scale parameter, its logarithm is the natural quantity to fit.
Step5: It's worth noting that the optimize module minimizes functions, whereas we want to maximize the likelihood, so below we minimize the negative log-likelihood instead.
Step6: Then, combining this with the definition of log_likelihood from above, the full log-probability function is implemented as follows.
Step7: After all this setup, it's easy to sample this distribution using emcee.
Step8: Let's take a look at what the sampler has done.
Step9: As mentioned above, the walkers start in small distributions around the maximum likelihood values and then quickly wander out and start exploring the full posterior.
Step10: This suggests that only about 40 steps are needed for the chain to "forget" where it started.
Step11: Results
Step12: The corner plot shows all the one and two dimensional projections of the posterior probability distributions of the parameters.
Step13: This leaves us with one question: which numbers should go into a summary of the results? Below we report the 16th, 50th, and 84th percentiles of the samples in the marginalized distributions.
|
<ASSISTANT_TASK:>
Python Code:
%config InlineBackend.figure_format = "retina"
from matplotlib import rcParams
rcParams["savefig.dpi"] = 100
rcParams["figure.dpi"] = 100
rcParams["font.size"] = 20
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(123)
# Choose the "true" parameters.
m_true = -0.9594
b_true = 4.294
f_true = 0.534
# Generate some synthetic data from the model.
N = 50
x = np.sort(10 * np.random.rand(N))
yerr = 0.1 + 0.5 * np.random.rand(N)
y = m_true * x + b_true
y += np.abs(f_true * y) * np.random.randn(N)
y += yerr * np.random.randn(N)
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
x0 = np.linspace(0, 10, 500)
plt.plot(x0, m_true * x0 + b_true, "k", alpha=0.3, lw=3)
plt.xlim(0, 10)
plt.xlabel("x")
plt.ylabel("y");
A = np.vander(x, 2)
C = np.diag(yerr * yerr)
ATA = np.dot(A.T, A / (yerr**2)[:, None])
cov = np.linalg.inv(ATA)
w = np.linalg.solve(ATA, np.dot(A.T, y / yerr**2))
print("Least-squares estimates:")
print("m = {0:.3f} ± {1:.3f}".format(w[0], np.sqrt(cov[0, 0])))
print("b = {0:.3f} ± {1:.3f}".format(w[1], np.sqrt(cov[1, 1])))
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
plt.plot(x0, m_true * x0 + b_true, "k", alpha=0.3, lw=3, label="truth")
plt.plot(x0, np.dot(np.vander(x0, 2), w), "--k", label="LS")
plt.legend(fontsize=14)
plt.xlim(0, 10)
plt.xlabel("x")
plt.ylabel("y");
def log_likelihood(theta, x, y, yerr):
m, b, log_f = theta
model = m * x + b
sigma2 = yerr**2 + model**2 * np.exp(2 * log_f)
return -0.5 * np.sum((y - model) ** 2 / sigma2 + np.log(sigma2))
from scipy.optimize import minimize
np.random.seed(42)
nll = lambda *args: -log_likelihood(*args)
initial = np.array([m_true, b_true, np.log(f_true)]) + 0.1 * np.random.randn(3)
soln = minimize(nll, initial, args=(x, y, yerr))
m_ml, b_ml, log_f_ml = soln.x
print("Maximum likelihood estimates:")
print("m = {0:.3f}".format(m_ml))
print("b = {0:.3f}".format(b_ml))
print("f = {0:.3f}".format(np.exp(log_f_ml)))
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
plt.plot(x0, m_true * x0 + b_true, "k", alpha=0.3, lw=3, label="truth")
plt.plot(x0, np.dot(np.vander(x0, 2), w), "--k", label="LS")
plt.plot(x0, np.dot(np.vander(x0, 2), [m_ml, b_ml]), ":k", label="ML")
plt.legend(fontsize=14)
plt.xlim(0, 10)
plt.xlabel("x")
plt.ylabel("y");
def log_prior(theta):
m, b, log_f = theta
if -5.0 < m < 0.5 and 0.0 < b < 10.0 and -10.0 < log_f < 1.0:
return 0.0
return -np.inf
def log_probability(theta, x, y, yerr):
lp = log_prior(theta)
if not np.isfinite(lp):
return -np.inf
return lp + log_likelihood(theta, x, y, yerr)
import emcee
pos = soln.x + 1e-4 * np.random.randn(32, 3)
nwalkers, ndim = pos.shape
sampler = emcee.EnsembleSampler(
nwalkers, ndim, log_probability, args=(x, y, yerr)
)
sampler.run_mcmc(pos, 5000, progress=True);
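# Quick health check (sketch): for a well-behaved ensemble run the mean
# acceptance fraction typically lands somewhere around 0.2-0.5.
print("Mean acceptance fraction: {0:.3f}".format(np.mean(sampler.acceptance_fraction)))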
fig, axes = plt.subplots(3, figsize=(10, 7), sharex=True)
samples = sampler.get_chain()
labels = ["m", "b", "log(f)"]
for i in range(ndim):
ax = axes[i]
ax.plot(samples[:, :, i], "k", alpha=0.3)
ax.set_xlim(0, len(samples))
ax.set_ylabel(labels[i])
ax.yaxis.set_label_coords(-0.1, 0.5)
axes[-1].set_xlabel("step number");
tau = sampler.get_autocorr_time()
print(tau)
flat_samples = sampler.get_chain(discard=100, thin=15, flat=True)
print(flat_samples.shape)
import corner
fig = corner.corner(
flat_samples, labels=labels, truths=[m_true, b_true, np.log(f_true)]
);
inds = np.random.randint(len(flat_samples), size=100)
for ind in inds:
sample = flat_samples[ind]
plt.plot(x0, np.dot(np.vander(x0, 2), sample[:2]), "C1", alpha=0.1)
plt.errorbar(x, y, yerr=yerr, fmt=".k", capsize=0)
plt.plot(x0, m_true * x0 + b_true, "k", label="truth")
plt.legend(fontsize=14)
plt.xlim(0, 10)
plt.xlabel("x")
plt.ylabel("y");
from IPython.display import display, Math
for i in range(ndim):
mcmc = np.percentile(flat_samples[:, i], [16, 50, 84])
q = np.diff(mcmc)
txt = "\mathrm{{{3}}} = {0:.3f}_{{-{1:.3f}}}^{{{2:.3f}}}"
txt = txt.format(mcmc[1], q[0], q[1], labels[i])
display(Math(txt))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Series Validator
Step2: DataFrame Validator
|
<ASSISTANT_TASK:>
Python Code:
# Please install this package using following command.
# $ pip install pandas-validator
import pandas_validator as pv
import pandas as pd
import numpy as np
# Create validator's instance
validator = pv.IntegerSeriesValidator(min_value=0, max_value=10)
series = pd.Series([0, 3, 6, 9]) # This series is valid.
print(validator.is_valid(series))
series = pd.Series([0, 4, 8, 12]) # This series is invalid. because that includes 12 number.
print(validator.is_valid(series))
# Define validator
class SampleDataFrameValidator(pv.DataFrameValidator):
row_num = 5
column_num = 2
label1 = pv.IntegerColumnValidator('label1', min_value=0, max_value=10)
label2 = pv.FloatColumnValidator('label2', min_value=0, max_value=10)
# Create validator's instance
validator = SampleDataFrameValidator()
df = pd.DataFrame({'label1': [0, 1, 2, 3, 4], 'label2': [5.0, 6.0, 7.0, 8.0, 9.0]}) # This data frame is valid.
print(validator.is_valid(df))
df = pd.DataFrame({'label1': [11, 12, 13, 14, 15], 'label2': [5.0, 6.0, 7.0, 8.0, 9.0]}) # This data frame is invalid.
print(validator.is_valid(df))
df = pd.DataFrame({'label1': [0, 1, 2], 'label2': [5.0, 6.0, 7.0]}) # This data frame is invalid.
print(validator.is_valid(df))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Spatial signal
Step2: Fourier transform
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.constants import c, pi
a = 0.192 # m
dx = 1e-4
x = np.arange(0, a+dx, step=dx)
E = np.zeros_like(x)
# weights of the modes (example)
Ems = np.r_[0.2, 0, 1, 0.3, 0.1]
# total electric field is the sum of the modes
for m, Em in enumerate(Ems, start=1):
E += Em*np.sin(pi*m/a * x)
# display the "measured field"
fig, ax = plt.subplots()
ax.plot(x, E)
# pad the spatial dimensions to improve the spectral resolution and display the results
x = np.pad(x, (20*1024,), 'reflect', reflect_type='odd')
E = np.pad(E, (20*1024,), 'reflect', reflect_type='odd') # reflect the signal as in an infinitely large waveguide
ax.plot(x, E, alpha=0.5)
from numpy.fft import fft, fftshift, fftfreq
U = fftshift(fft(E))
kx= fftshift(fftfreq(len(x), d=dx)*2*pi)
fig, ax = plt.subplots()
ax.plot(kx, np.abs(U), marker='.')
ax.set_xlim(-.9*pi/a, 7*pi/a)
# shows where the modes 1,2,... are
for mode_index in range(8):
ax.axvline(mode_index*pi/a, color='#888888', linestyle='--')
ax.set_xticks(np.arange(0,8)*pi/a)
ax.set_xticklabels(['0'] + [f'${m}\\pi/a$' for m in range(1,8)])
ax.set_xlabel('$k_x$', size=16)
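# Cross-check (sketch): recover the modal weights directly by projecting the
# unpadded field onto the sine basis; for mode m the overlap integral is
# (2/a) * integral of E(x) sin(m*pi*x/a) dx, approximated by a Riemann sum.
x0 = np.arange(0, a+dx, step=dx)
E0 = np.zeros_like(x0)
for m, Em in enumerate(Ems, start=1):
    E0 += Em*np.sin(pi*m/a * x0)
for m in range(1, 6):
    coeff = 2.0/a * np.sum(E0*np.sin(pi*m/a * x0)) * dx
    print(f'mode {m}: recovered {coeff:.3f}, true {Ems[m-1]:.3f}')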
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'awi', 'sandbox-3', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: However, it cannot match multiple repetitions
Step2: We can use this to find strings that may or may not include elements, like phone numbers with and without area codes.
Step3: The '*' Regex Operator
Step4: The '+' Regex Operator
Step5: All of these characters can be escaped for literal matches
Step6: The '{}' Regex Operator
Step7: This operator can also take a {x,y} argument to specify minimum and maximum numbers of repetitions.
Step8: RegEx does greedy matches, which means it will try to find the longest string that matches, not the shortest.
Step9: You can do a non-greedy match by adding a '?' after the closing '}'.
|
<ASSISTANT_TASK:>
Python Code:
import re
batRegex = re.compile(r'Bat(wo)?man') # The ()? says this group can appear 0 or 1 times to match; it is optional
mo = batRegex.search('The Adventures of Batman')
print(mo.group())
mo = batRegex.search('The Adventures of Batwoman')
print(mo.group())
mo = batRegex.search('The Adventures of Batwowowowoman')
print(mo)  # None: the (wo)? group matches at most one 'wo', so this string does not match
phoneNumRegex = re.compile(r'\d\d\d\-\d\d\d-\d\d\d\d') # this requires an area code.
mo = phoneNumRegex.search('My number is 415-555-4242') # matches
print(mo.group())
mo2 = phoneNumRegex.search('My number is 555-4242') # will not match
print(mo2)
phoneNumRegex = re.compile(r'(\d\d\d\-)?\d\d\d-\d\d\d\d') # Make first three digits and dash optional
mo = phoneNumRegex.search('My number is 415-555-4242') # matches
print(mo.group())
mo2 = phoneNumRegex.search('My number is 555-4242') # matches
print(mo2.group())
import re
batRegex = re.compile(r'Bat(wo)*man') # The ()* says this group can appear 0 or n times to match
print(batRegex.search('The Adventures of Batwoman').group())
print(batRegex.search('The Adventures of Batwowowowoman').group())
import re
batRegex = re.compile(r'Bat(wo)+man') # The ()+ says this group can appear 1 or n times; it is NOT optional
print(batRegex.search('The Adventures of Batwoman').group())
print(batRegex.search('The Adventures of Batwowowowoman').group())
print(batRegex.search('The Adventures of Batman').group())
import re
batRegex = re.compile(r'\+\*\?') # The +,*, and ? are escaped.
print(batRegex.search('I learned about +*? RegEx syntax').group())
haRegex = re.compile(r'(Ha){3}')
print(haRegex.search('HaHaHa').group())
print(haRegex.search('HaHaHaHa').group()) # Matches only three times, so returns only 3
#print(haRegex.search('HaHa').group()) # No Match
phoneRegex = re.compile(r'((\d)?\d\d\d(\d)?){3}') # Useful to avoid repetition
phoneNumRegex.search('My number is 415-555-4242').group()
haRegex = re.compile(r'(Ha){3,5}')
print(haRegex.search('HaHaHa').group())
print(haRegex.search('HaHaHaHa').group())
print(haRegex.search('HaHaHaHaHa').group())
print(haRegex.search('HaHaHaHaHaHaHaHa').group()) # Matches max of 5
haRegex = re.compile(r'(Ha){,5}') # Can drop one or the other for unbounded matches
print(haRegex.search('Ha').group())
print(haRegex.search('HaHa').group())
print(haRegex.search('HaHaHa').group())
print(haRegex.search('HaHaHaHa').group())
print(haRegex.search('HaHaHaHaHa').group())
print(haRegex.search('HaHaHaHaHaHaHaHa').group()) # Matches max of 5
haRegex = re.compile(r'(Ha){1,6}') # at least 1, or 6
print(haRegex.search('HaHaHaHaHaHaHaHa').group()) # Matches longest string; 6
haRegex = re.compile(r'(Ha){1,6}?') # The ? after } makes the quantifier non-greedy (match as few repetitions as possible)
print(haRegex.search('HaHaHaHaHaHaHaHa').group()) # Matches shortest string, 1
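# A small supplementary sketch (not part of the original lesson) contrasting the greedy
# and non-greedy quantifiers side by side with findall:
greedy = re.compile(r'(?:Ha){1,6}')
lazy = re.compile(r'(?:Ha){1,6}?')
text = 'HaHaHaHaHaHaHaHa'
print(greedy.findall(text))  # ['HaHaHaHaHaHa', 'HaHa'] - takes the longest runs first
print(lazy.findall(text))    # ['Ha', 'Ha', 'Ha', 'Ha', 'Ha', 'Ha', 'Ha', 'Ha']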
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Energy Model Related APIs
Step2: The above example shows how the EnergyModel class can be used to find optimal task placements. Here it is shown that on ARM Juno, if the system is presented with just two small tasks, it should place them on the same CPU, not using the big CPUs (1 and 2).
Step3: Public interface
Step4: The run method is the only one which the user is required to implement to specify how to run the specific Android workload.
Step5: ... and others are on their way ;-)
Step6: Public interface
|
<ASSISTANT_TASK:>
Python Code:
from test import LisaTest
print LisaTest.__doc__
from energy_model import EnergyModel
print EnergyModel.__doc__
# juno_energy provides an instance of EnergyModel for ARM Juno platforms
from platforms.juno_energy import juno_energy
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
possible_placements = juno_energy.get_optimal_placements({'task1': 10, 'task2': 15})
fig, axs = plt.subplots(1, 4, sharey=True)
# fig.set_ylabel('Utilization')
for ax, placement in zip(axs, possible_placements):
ax.set_ylabel('Utilization')
ax.set_xlabel('CPU')
pd.DataFrame(list(placement)).plot(kind='bar', figsize=(16, 2), ax=ax, legend=False)
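# Supplementary check (illustrative): print the raw per-CPU utilization vectors behind the
# bar plots above, so the "place both small tasks on the same CPU" point can be read directly.
for i, placement in enumerate(possible_placements):
    print("placement", i, ":", list(placement))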
from android import Workload
print Workload.__doc__
{ k:v for k,v in vars(Workload).items() if not k.startswith('_') }
!tree $LISA_HOME/libs/utils/android/workloads
from android import LisaBenchmark
print LisaBenchmark.__doc__
{ k:v for k,v in vars(LisaBenchmark).items() if not k.startswith('_') }
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Replace the variable values in the cell below
Step2: Create a Dataset from BigQuery
Step3: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http
Step6: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
Step7: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
Step8: AutoML for text classification requires that
Step9: Let's make sure we have roughly the same number of examples for each of our three labels
Step10: Finally we will save our data, which is currently in-memory, to disk.
Step11: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
Step12: Let's write the sample dataset to disk.
Step13: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located
Step14: Loading the dataset
Step15: Let's look again at the number of examples per label to make sure we have a well-balanced dataset
Step16: Preparing the labels
Step17: Preparing the train/test splits
Step18: To be on the safe side, we verify that the train and test splits
Step19: Now let's create the features and labels we will feed our models with
Step20: NNLM Model
Step21: Note that this TF-Hub embedding produces a single 50-dimensional vector when passed a sentence
Step22: Swivel Model
Step23: Similarly as the previous pre-trained embedding, it outputs a single vector when passed a sentence
Step24: Building the models
Step25: Let's also wrap the training code into a train_and_evaluate function that
Step26: Training NNLM
Step27: Training Swivel
Step28: Comparing the models
Step29: Deploying the model
Step30: Then we can deploy the model using the gcloud CLI as before
Step31: Note the ENDPOINT_RESOURCENAME above as you'll need it below for the prediction.
Step32: Let's go ahead and hit our model
Step33: Insert below the ENDPOINT_RESOURCENAME from the deployment code above.
|
<ASSISTANT_TASK:>
Python Code:
import os
import pandas as pd
from google.cloud import bigquery
PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # Replace with your REGION
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
%%bash
gcloud config set project $PROJECT
gcloud config set ai/region $REGION
%%bigquery --project $PROJECT
SELECT
url, title, score
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
LENGTH(title) > 10
AND score > 10
AND LENGTH(url) > 0
LIMIT 10
%%bigquery --project $PROJECT
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
COUNT(title) AS num_articles
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
AND LENGTH(title) > 10
GROUP BY
source
ORDER BY num_articles DESC
LIMIT 100
regex = ".*://(.[^/]+)/"
sub_query = """
SELECT
title,
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')
AND LENGTH(title) > 10
""".format(
regex
)
query = """
SELECT
LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,
source
FROM
({sub_query})
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
""".format(
sub_query=sub_query
)
print(query)
bq = bigquery.Client(project=PROJECT)
title_dataset = bq.query(query).to_dataframe()
title_dataset.head()
print(f"The full dataset contains {len(title_dataset)} titles")
title_dataset.source.value_counts()
DATADIR = "./data/"
if not os.path.exists(DATADIR):
os.makedirs(DATADIR)
FULL_DATASET_NAME = "titles_full.csv"
FULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)
# Let's shuffle the data before writing it to disk.
title_dataset = title_dataset.sample(n=len(title_dataset))
title_dataset.to_csv(
FULL_DATASET_PATH, header=False, index=False, encoding="utf-8"
)
sample_title_dataset = title_dataset.sample(n=1000)
sample_title_dataset.source.value_counts()
SAMPLE_DATASET_NAME = "titles_sample.csv"
SAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)
sample_title_dataset.to_csv(
SAMPLE_DATASET_PATH, header=False, index=False, encoding="utf-8"
)
sample_title_dataset.head()
import datetime
import os
import shutil
import pandas as pd
import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping, TensorBoard
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import to_categorical
from tensorflow_hub import KerasLayer
print(tf.__version__)
%matplotlib inline
MODEL_DIR = f"gs://{BUCKET}/text_models"
ls $DATADIR
DATASET_NAME = "titles_full.csv"
TITLE_SAMPLE_PATH = os.path.join(DATADIR, DATASET_NAME)
COLUMNS = ["title", "source"]
titles_df = pd.read_csv(TITLE_SAMPLE_PATH, header=None, names=COLUMNS)
titles_df.head()
titles_df.source.value_counts()
CLASSES = {"github": 0, "nytimes": 1, "techcrunch": 2}
N_CLASSES = len(CLASSES)
def encode_labels(sources):
classes = [CLASSES[source] for source in sources]
one_hots = to_categorical(classes, num_classes=N_CLASSES)
return one_hots
encode_labels(titles_df.source[:4])
N_TRAIN = int(len(titles_df) * 0.95)
titles_train, sources_train = (
titles_df.title[:N_TRAIN],
titles_df.source[:N_TRAIN],
)
titles_valid, sources_valid = (
titles_df.title[N_TRAIN:],
titles_df.source[N_TRAIN:],
)
sources_train.value_counts()
sources_valid.value_counts()
X_train, Y_train = titles_train.values, encode_labels(sources_train)
X_valid, Y_valid = titles_valid.values, encode_labels(sources_valid)
X_train[:3]
Y_train[:3]
NNLM = "https://tfhub.dev/google/nnlm-en-dim50/2"
nnlm_module = KerasLayer(
# TODO
)
nnlm_module(
tf.constant(
[
# TODO
]
)
)
SWIVEL = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim-with-oov/1"
swivel_module = KerasLayer(
# TODO
)
swivel_module(
tf.constant(
[
# TODO
]
)
)
def build_model(hub_module, name):
model = Sequential(
[
# TODO
Dense(16, activation="relu"),
Dense(N_CLASSES, activation="softmax"),
],
name=name,
)
model.compile(
optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"]
)
return model
def train_and_evaluate(train_data, val_data, model, batch_size=5000):
X_train, Y_train = train_data
tf.random.set_seed(33)
model_dir = os.path.join(MODEL_DIR, model.name)
if tf.io.gfile.exists(model_dir):
tf.io.gfile.rmtree(model_dir)
history = model.fit(
X_train,
Y_train,
epochs=100,
batch_size=batch_size,
validation_data=val_data,
callbacks=[EarlyStopping(patience=1), TensorBoard(model_dir)],
)
return history
data = (X_train, Y_train)
val_data = (X_valid, Y_valid)
nnlm_model = build_model(nnlm_module, "nnlm")
nnlm_history = train_and_evaluate(data, val_data, nnlm_model)
history = nnlm_history
pd.DataFrame(history.history)[["loss", "val_loss"]].plot()
pd.DataFrame(history.history)[["accuracy", "val_accuracy"]].plot()
swivel_model = build_model(swivel_module, name="swivel")
swivel_history = train_and_evaluate(data, val_data, swivel_model)
history = swivel_history
pd.DataFrame(history.history)[["loss", "val_loss"]].plot()
pd.DataFrame(history.history)[["accuracy", "val_accuracy"]].plot()
!echo tensorboard --logdir $MODEL_DIR --port 6006
OUTPUT_DIR = "./savedmodels_vertex"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR, "swivel")
os.environ["EXPORT_PATH"] = EXPORT_PATH
shutil.rmtree(EXPORT_PATH, ignore_errors=True)
tf.keras.models.save_model(swivel_model, EXPORT_PATH)
%%bash
# TODO 5
TIMESTAMP=$(date -u +%Y%m%d_%H%M%S)
MODEL_DISPLAYNAME=title_model_$TIMESTAMP
ENDPOINT_DISPLAYNAME=swivel_$TIMESTAMP
IMAGE_URI=# TODO
ARTIFACT_DIRECTORY=gs://${BUCKET}/${MODEL_DISPLAYNAME}/
echo $ARTIFACT_DIRECTORY
gsutil cp -r ${EXPORT_PATH}/* ${ARTIFACT_DIRECTORY}
# Model
MODEL_RESOURCENAME=$(gcloud ai models upload \
--region=$REGION \
--display-name=$MODEL_DISPLAYNAME \
--container-image-uri=$IMAGE_URI \
--artifact-uri=# TODO
--format="value(model)")
echo "MODEL_DISPLAYNAME=${MODEL_DISPLAYNAME}"
echo "MODEL_RESOURCENAME=${MODEL_RESOURCENAME}"
# Endpoint
ENDPOINT_RESOURCENAME=$(gcloud ai endpoints create \
--region=$REGION \
--display-name=$ENDPOINT_DISPLAYNAME \
--format="value(name)")
echo "ENDPOINT_DISPLAYNAME=${ENDPOINT_DISPLAYNAME}"
echo "ENDPOINT_RESOURCENAME=${ENDPOINT_RESOURCENAME}"
# Deployment
DEPLOYED_MODEL_DISPLAYNAME=${MODEL_DISPLAYNAME}_deployment
MACHINE_TYPE=n1-standard-2
MIN_REPLICA_COUNT=1
MAX_REPLICA_COUNT=3
gcloud ai endpoints deploy-model $ENDPOINT_RESOURCENAME \
--region=$REGION \
--model=$MODEL_RESOURCENAME \
--display-name=$DEPLOYED_MODEL_DISPLAYNAME \
--machine-type=$MACHINE_TYPE \
--min-replica-count=$MIN_REPLICA_COUNT \
--max-replica-count=$MAX_REPLICA_COUNT \
--traffic-split=0=100
!saved_model_cli show \
--tag_set serve \
--signature_def serving_default \
--dir {EXPORT_PATH}
!find {EXPORT_PATH}
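# A possible shape for input.json written below (a sketch; the exact instance format
# depends on the serving signature shown by saved_model_cli above, so treat this as an
# assumption rather than the required format):
# {"instances": ["supreme court to hear major case on partisan districts",
#                "a new era of app development on github"]}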
%%writefile input.json
{
# TODO
}
%%bash
ENDPOINT_RESOURCENAME= #TODO: insert the ENDPOINT_RESOURCENAME here from above
gcloud ai endpoints predict $ENDPOINT_RESOURCENAME \
--region $REGION \
--json-request input.json
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set environment variables so that we can use them throughout the entire lab. We will be using our project ID for our bucket, so you only need to change your project and region.
Step2: Create ML datasets by sampling using BigQuery
Step3: We need to figure out the right way to divide our hash values to get our desired splits. To do that we need to define some values to hash with in the modulo. Feel free to play around with these values to get the perfect combination.
Step5: We can make a series of queries to check if our bucketing values result in the correct sizes of each of our dataset splits and then adjust accordingly. Therefore, to make our code more compact and reusable, let's define a function to return the head of a dataframe produced from our queries up to a certain number of rows.
Step7: For our first query, we're going to use the original query above to get our label, features, and columns to combine into our hash which we will use to perform our repeatable splitting. There are only a limited number of years, months, days, and states in the dataset. Let's see what the hash values are. We will need to include all of these extra columns to hash on to get a fairly uniform spread of the data. Feel free to try less or more in the hash and see how it changes your results.
Step9: Using COALESCE would provide the same result as the nested CASE WHEN. This is preferable when all we want is the first non-null instance. To be precise, the CASE WHEN above is equivalent to COALESCE(day, wday, 0) AS date. You can read more about it here.
Step11: The next query is going to find the counts of each of the unique 657484 hash_values. This will be our first step at making actual hash buckets for our split via the GROUP BY.
Step13: The query below performs a second layer of bucketing where now for each of these bucket indices we count the number of records.
Step15: The raw record counts make it hard to judge the split at a glance, so the next query normalizes each hash bucket's count into a percentage of the total data.
Step17: We'll now select the range of buckets to be used in training.
Step19: We'll do the same by selecting the range of buckets to be used for evaluation.
Step21: Lastly, we'll select the hash buckets to be used for the test split.
Step23: In the below query, we'll UNION ALL all of the datasets together so that all three sets of hash buckets will be within one table. We added dataset_id so that we can sort on it in the query after.
Step25: Lastly, we'll show the final split between train, eval, and test sets. We can see both the number of records and percent of the total data. It is really close to the 80/10/10 that we were hoping to get.
Step27: Now that we know that our splitting values produce a good global splitting on our data, here's a way to get a well-distributed portion of the data in such a way that the train/eval/test sets do not overlap and takes a subsample of our global splits.
Step28: Preprocess data using Pandas
Step29: Also, notice that there are some very important numeric fields that are missing in some rows (the count in Pandas doesn't count missing data)
Step31: It is always crucial to clean raw data before using it in machine learning, so we have a preprocessing step. We'll define a preprocess function below. Note that the mother's age is an input to our model so users will have to provide the mother's age; otherwise, our service won't work. The features we use for our model were chosen because they are such good predictors and because they are easy enough to collect.
Step32: Let's process the train/eval/test set and see a small sample of the training data after our preprocessing
Step33: Let's look again at a summary of the dataset. Note that we only see numeric columns, so plurality does not show up.
Step34: Write to .csv files
|
<ASSISTANT_TASK:>
Python Code:
import os
import pandas as pd
from google.cloud import bigquery
PROJECT = !gcloud config list --format 'value(core.project)'
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
bq = bigquery.Client(project=PROJECT)
modulo_divisor = 100
train_percent = 80.0
eval_percent = 10.0
train_buckets = int(modulo_divisor * train_percent / 100.0)
eval_buckets = int(modulo_divisor * eval_percent / 100.0)
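# Tiny illustration (made-up hash values) of how the modulo arithmetic above maps a
# FARM_FINGERPRINT hash into train/eval/test buckets; the real hashing runs in BigQuery.
for example_hash in [123456789, -987654321, 42]:
    bucket = abs(example_hash) % modulo_divisor
    if bucket < train_buckets:
        split = "train"
    elif bucket < train_buckets + eval_buckets:
        split = "eval"
    else:
        split = "test"
    print(example_hash, bucket, split)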
def display_dataframe_head_from_query(query, count=10):
    """Displays count rows from dataframe head from query.
Args:
query: str, query to be run on BigQuery, results stored in dataframe.
count: int, number of results from head of dataframe to display.
Returns:
        Dataframe head with count number of results.
    """
df = bq.query(query + f" LIMIT {count}").to_dataframe()
return df.head(count)
# Get label, features, and columns to hash and split into buckets
hash_cols_fixed_query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
year,
month,
CASE
WHEN day IS NULL THEN
CASE
WHEN wday IS NULL THEN 0
ELSE wday
END
ELSE day
END AS date,
IFNULL(state, "Unknown") AS state,
IFNULL(mother_birth_state, "Unknown") AS mother_birth_state
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
    AND gestation_weeks > 0
"""
display_dataframe_head_from_query(hash_cols_fixed_query)
data_query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING),
CAST(date AS STRING),
CAST(state AS STRING),
CAST(mother_birth_state AS STRING)
)
) AS hash_values
FROM
({CTE_hash_cols_fixed})
""".format(
CTE_hash_cols_fixed=hash_cols_fixed_query
)
display_dataframe_head_from_query(data_query)
# Get the counts of each of the unique hashs of our splitting column
first_bucketing_query = """
SELECT
hash_values,
COUNT(*) AS num_records
FROM
({CTE_data})
GROUP BY
hash_values
""".format(
CTE_data=data_query
)
display_dataframe_head_from_query(first_bucketing_query)
# Get the number of records in each of the hash buckets
second_bucketing_query = """
SELECT
ABS(MOD(hash_values, {modulo_divisor})) AS bucket_index,
SUM(num_records) AS num_records
FROM
({CTE_first_bucketing})
GROUP BY
ABS(MOD(hash_values, {modulo_divisor}))
""".format(
CTE_first_bucketing=first_bucketing_query, modulo_divisor=modulo_divisor
)
display_dataframe_head_from_query(second_bucketing_query)
# Calculate the overall percentages
percentages_query = """
SELECT
bucket_index,
num_records,
CAST(num_records AS FLOAT64) / (
SELECT
SUM(num_records)
FROM
({CTE_second_bucketing})) AS percent_records
FROM
({CTE_second_bucketing})
""".format(
CTE_second_bucketing=second_bucketing_query
)
display_dataframe_head_from_query(percentages_query)
# Choose hash buckets for training and pull in their statistics
train_query = """
SELECT
*,
"train" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= 0
AND bucket_index < {train_buckets}
""".format(
CTE_percentages=percentages_query, train_buckets=train_buckets
)
display_dataframe_head_from_query(train_query)
# Choose hash buckets for validation and pull in their statistics
eval_query = """
SELECT
*,
"eval" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= {train_buckets}
AND bucket_index < {cum_eval_buckets}
""".format(
CTE_percentages=percentages_query,
train_buckets=train_buckets,
cum_eval_buckets=train_buckets + eval_buckets,
)
display_dataframe_head_from_query(eval_query)
# Choose hash buckets for testing and pull in their statistics
test_query = """
SELECT
*,
"test" AS dataset_name
FROM
({CTE_percentages})
WHERE
bucket_index >= {cum_eval_buckets}
AND bucket_index < {modulo_divisor}
""".format(
CTE_percentages=percentages_query,
cum_eval_buckets=train_buckets + eval_buckets,
modulo_divisor=modulo_divisor,
)
display_dataframe_head_from_query(test_query)
# Union the training, validation, and testing dataset statistics
union_query = """
SELECT
0 AS dataset_id,
*
FROM
({CTE_train})
UNION ALL
SELECT
1 AS dataset_id,
*
FROM
({CTE_eval})
UNION ALL
SELECT
2 AS dataset_id,
*
FROM
({CTE_test})
""".format(
CTE_train=train_query, CTE_eval=eval_query, CTE_test=test_query
)
display_dataframe_head_from_query(union_query)
# Show final splitting and associated statistics
split_query = """
SELECT
dataset_id,
dataset_name,
SUM(num_records) AS num_records,
SUM(percent_records) AS percent_records
FROM
({CTE_union})
GROUP BY
dataset_id,
dataset_name
ORDER BY
dataset_id
""".format(
CTE_union=union_query
)
display_dataframe_head_from_query(split_query)
# every_n allows us to subsample from each of the hash values
# This helps us get approximately the record counts we want
every_n = 1000
splitting_string = "ABS(MOD(hash_values, {} * {}))".format(
every_n, modulo_divisor
)
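# Quick look (illustrative) at the filter clause used for each split sample below:
print(splitting_string)  # e.g. ABS(MOD(hash_values, 1000 * 100))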
def create_data_split_sample_df(query_string, splitting_string, lo, up):
    """Creates a dataframe with a sample of a data split.
Args:
query_string: str, query to run to generate splits.
splitting_string: str, modulo string to split by.
lo: float, lower bound for bucket filtering for split.
up: float, upper bound for bucket filtering for split.
Returns:
Dataframe containing data split sample.
query = "SELECT * FROM ({0}) WHERE {1} >= {2} and {1} < {3}".format(
query_string, splitting_string, int(lo), int(up)
)
df = bq.query(query).to_dataframe()
return df
train_df = create_data_split_sample_df(
data_query, splitting_string, lo=0, up=train_percent
)
eval_df = create_data_split_sample_df(
data_query,
splitting_string,
lo=train_percent,
up=train_percent + eval_percent,
)
test_df = create_data_split_sample_df(
data_query,
splitting_string,
lo=train_percent + eval_percent,
up=modulo_divisor,
)
print(f"There are {len(train_df)} examples in the train dataset.")
print(f"There are {len(eval_df)} examples in the validation dataset.")
print(f"There are {len(test_df)} examples in the test dataset.")
train_df.head()
train_df.describe()
def preprocess(df):
    """Preprocess pandas dataframe for augmented babyweight data.
Args:
df: Dataframe containing raw babyweight data.
Returns:
Pandas dataframe containing preprocessed raw babyweight data as well
        as simulated no ultrasound data masking some of the original data.
    """
# Clean up raw data
# Filter out what we don"t want to use for training
df = df[df.weight_pounds > 0]
df = df[df.mother_age > 0]
df = df[df.gestation_weeks > 0]
df = df[df.plurality > 0]
# Modify plurality field to be a string
twins_etc = dict(
zip(
[1, 2, 3, 4, 5],
[
"Single(1)",
"Twins(2)",
"Triplets(3)",
"Quadruplets(4)",
"Quintuplets(5)",
],
)
)
df["plurality"].replace(twins_etc, inplace=True)
# Clone data and mask certain columns to simulate lack of ultrasound
no_ultrasound = df.copy(deep=True)
# Modify is_male
no_ultrasound["is_male"] = "Unknown"
# Modify plurality
condition = no_ultrasound["plurality"] != "Single(1)"
no_ultrasound.loc[condition, "plurality"] = "Multiple(2+)"
# Concatenate both datasets together and shuffle
return pd.concat([df, no_ultrasound]).sample(frac=1).reset_index(drop=True)
train_df = preprocess(train_df)
eval_df = preprocess(eval_df)
test_df = preprocess(test_df)
train_df.head()
train_df.tail()
train_df.describe()
# Define columns
columns = [
"weight_pounds",
"is_male",
"mother_age",
"plurality",
"gestation_weeks",
]
# Write out CSV files
train_df.to_csv(
path_or_buf="train.csv", columns=columns, header=False, index=False
)
eval_df.to_csv(
path_or_buf="eval.csv", columns=columns, header=False, index=False
)
test_df.to_csv(
path_or_buf="test.csv", columns=columns, header=False, index=False
)
%%bash
wc -l *.csv
%%bash
head *.csv
%%bash
tail *.csv
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Some functions
Step2: Checking
Step3: Initial exploration
Step4: Holomap (with selector)
Step5: Curve and HoloMap
Step6: Processing a cycle
Step7: Some calculations are needed...
Step8: The following equation to estimate the Diffusion Coefficients appears in a paper
Step11: Comparing another reference (below) I assume that M is 1/number_of_moles ?
Step12: Using the function to extract Diffusion constants vs capacity
Step13: TODO
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
import os
from pathlib import Path
from pprint import pprint
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import cellpy
from cellpy import prms
from cellpy import prmreader
from cellpy import cellreader
from cellpy.utils import ocv_rlx
import holoviews as hv
%matplotlib inline
hv.extension("bokeh")
######################################################################
## ##
## development ##
## ##
######################################################################
if os.name == "nt":
# Use these when working on my work PC:
raw_data_path = r"C:\Scripting\MyFiles\development_cellpy\dev_data\gitt"
out_data_path = r"C:\Scripting\MyFiles\development_cellpy\out"
else:
# Use these when working on my MacBook:
raw_data_path = "/Users/jepe/scripting/cellpy/dev_data/gitt"
out_data_path = "/Users/jepe/scripting/cellpy/dev_data/out"
raw_data_path = Path(raw_data_path)
out_data_path = Path(out_data_path)
print(" SETTING SOME PRMS ".center(80, "="))
prms.Paths.db_filename = "cellpy_db.xlsx"
prms.Paths.cellpydatadir = out_data_path
prms.Paths.outdatadir = out_data_path
prms.Paths.rawdatadir = raw_data_path
prms.Paths.db_path = out_data_path
prms.Paths.filelogdir = out_data_path
pprint(prms.Paths)
pd.set_option("display.max_rows", 30)
pd.set_option("display.max_columns", 500)
pd.set_option("display.width", 1000)
fn = "20190403_cen59_04_rateGITT_01"
mass = 0.3 # mg
resn = fn + ".res"
cellpyn = fn + ".h5"
filename = prms.Paths.rawdatadir / resn
cellpyname = prms.Paths.cellpydatadir / cellpyn
cell = cellreader.get(filename, mass=mass, logging_mode="INFO")
cell.save(cellpyname)
cell = cellreader.get(cellpyname)
# print(cell)
steptable = cell.make_step_table(all_steps=True).cell.steps
steptable.loc[steptable.cycle == 5, ["index", "step", "ustep", "type"]].head(4)
# retrieving the voltage vs time curve
t, v = cell.get_timestamp(), cell.get_voltage()
all_cycs = hv.Curve(
(t, v), ("t", "time (sec)"), ("v", "voltage (v)"), label="voltage-time"
).opts()
# creating labels
cycle_label_df = steptable.drop_duplicates("cycle")
cycle_label_df["label"] = cycle_label_df.cycle.apply(str)
cycle_labels = hv.Labels(
(cycle_label_df.test_time_first, cycle_label_df.voltage_first, cycle_label_df.label)
).opts(
text_align="left",
)
# creating a dictionary of curves (for each cycle)
cycs_dict = dict()
for c in cell.get_cycle_numbers():
t = cell.get_timestamp(cycle=c)
t = t - np.amin(t.values) # setting first point to t=0
curve = hv.Curve(
(t, cell.get_voltage(cycle=c)),
("time", "time (seconds)"),
("voltage", "voltage (v vs. Li/Li+)"),
)
cycs_dict[c] = curve
# creating a holomap object
hmap = hv.HoloMap(cycs_dict, "cycle")
%%opts Curve [width=800, xformatter="%6.0f" tools=["hover"]]
(all_cycs * cycle_labels + hmap).cols(1)
dfdata = cell.cell.raw
cyc5 = dfdata.loc[dfdata.Cycle_Index == 5, :]
gt5 = steptable.loc[steptable.cycle == 5, :]
cyc5curve = hv.Curve(cyc5, ("Test_Time", "time"), ("Voltage", "voltage")).opts(
color="grey", alpha=0.5
)
cyc5points = hv.Scatter(cyc5, ("Test_Time", "time"), ("Voltage", "voltage")).opts(
size=5, fill_alpha=0.2
)
slabels5 = hv.Labels(
(gt5.test_time_first, gt5.voltage_first, gt5.ustep.apply(str))
).opts(text_align="left")
spikes5 = hv.Spikes(gt5.test_time_first).opts(
line_alpha=0.3,
spike_length=1,
)
%%opts Curve [width=1000, height=600, tools=['hover']]
print("the cycle starts where the label is")
cyc5curve * cyc5points * slabels5 * spikes5
gt5.head()
# approximating electrode-electrolyte contact area
diameter_si = 200.0 * 10e-7 # cm
rho_si = 2.32 # g/cm3
_mass = mass / 1000
area = (2 * 3 / diameter_si) * (_mass / rho_si) # CHECK THIS
print(f"Diameter: {diameter_si:8.6f} cm")
print(f"Mass: {mass:8.6f} mg")
print(f"Calculated contact area (ideal case): {area:8.6f} cm2")
# For reference (v, a, m is pr Si particle)
v = (4 / 3) * np.pi * ((diameter_si / 2) ** 3)
m = rho_si * v
n = mass / m
a = 4 * np.pi * (diameter_si / 2) ** 2
# estimate some values
# electrolyte-electrode area (cm2)
radius = 0.75 # cm
roughness = 1.0 # a dimensionless parameter ranging from 1 to a lot.
print("Should probably use BET to find out this?")
area = (1 * roughness) * np.pi * (radius ** 2)
print(f"S: {area:6.2f} cm2")
print(
"You should compare this number to what was calculated in the cell above (the theoretical surface area for the set of same sized nanoparticles)"
)
# number of moles
# 1 mol Si weighs 28.0855 g
_mass = mass / 1000 # convert mass from mg to g
number_of_moles = _mass / 28.0855
print(f"n_m: {number_of_moles:8.6f} mol")
# molar volume
# The 2006 CODATA recommended value for the molar volume of silicon is 12.0588349(11)×10−6 m3/mol, with a relative standard uncertainty of 9.1×10−8
molar_volume = 12.06 # cm3/mol
print(f"V_m: {molar_volume:6.2f} cm3/mol")
# Simple calculations by reading values from the graph
m = mass / 1000
molar_mass = 28.0855 # g/mol
density = 2.32 # g/cm3
Vm = molar_mass / density # cm3/mol
S = 1.7 # cm2
t = 2880 # s
M = 1
Es1 = 0.378
Es2 = 0.344
DEs = Es2 - Es1
Et11 = 0.378 # starting the current pulse
Et12 = 0.309 # after (possible) IR drop
Et2 = 0.28
DEt1 = Et2 - Et11
DEt2 = Et2 - Et12
p1 = (4 / (np.pi * t)) * (m * Vm / (M * S)) ** 2
p21 = (DEs / DEt1) ** 2
p22 = (DEs / DEt2) ** 2
D1 = p1 * p21
D2 = p1 * p22
print(f"Assuming no IR drop: {D1}\nWith IR drop: {D2}")
print(f"A: {p1}")
print(f"DEt: {DEt1}")
print(f"DEs: {DEs}")
def calc_A(n_m=0.0011, V_m=12.06, S=1.77):
    """
    D = 4 /((pi)(tau)) * (n_m * V_m / S)^2 * (DEs / DEt)^2 = A /tau * (DEs / DEt)^2
A = 4 /(pi) * (n_m * V_m / S)^2
tau: duration of the current pulse (s)
n_m: number of moles (mol)
V_m: molar volume of electrode (cm3/mol)
S: electrode-electrolyte contact area (cm2)
DEs: steady state voltage change due to the current pulse (V)
DEt: voltage change during the constant current pulse (eliminating the iR drop) (V)
    Ref.: application note from Metrohm Autolab b.v. pdf (BAT03)
    """
A = (4 / np.pi) * (n_m * V_m / S) ** 2
return A
def auto_calc_D(
steptable, cycle_number, A=1.0, tau=None, ustep_first=None, ustep_last=None
):
    """Function for extracting diffusion constant(s) and inserting into steptable.
D = 4 /((pi)(tau)) * (n_m * V_m / S)^2 * (DEs / DEt)^2 = A /tau * (DEs / DEt)^2
A = 4 /(pi) * (n_m * V_m / S)^2
    OBS! not corrected for IR drop yet.
    """
st = None
if cycle_number is None:
# This function is intended to only work on a pr. cycle basis
# to prevent users from "poluting" the steptable for "non_GITT" experiments.
print("no cycle number given")
return
st = steptable[steptable.cycle == cycle_number]
st = st[st.type.isin(["charge", "discharge", "ocvrlx_up", "ocvrlx_down"])]
if st.empty:
print("the given cycle is not found")
return
if ustep_first is not None:
st = st[st.ustep >= ustep_first]
if ustep_last is not None:
st = st[st.ustep <= ustep_last]
# used for finding DE
n3 = st["voltage_last"].shift(periods=-3)
n2 = st["voltage_last"].shift(periods=-2)
n1 = st["voltage_last"].shift(periods=-1)
n0 = st["voltage_last"]
# st["n3"] = n3
# st["n2"] = st["voltage_last"].shift(periods=-2)
# st["n1"] = n1
# st["n0"] = st["voltage_last"].shift(periods=0)
# used for finding tau
if tau is None:
tau = st["step_time_last"] - st["step_time_first"]
# used for validating if proper GITT step
t0 = st["type"]
t1 = st["type"].shift(periods=-1)
t2 = st["type"].shift(periods=-2)
t3 = st["type"].shift(periods=-3)
st["valid_D"] = (t0 == t2) & (t1.str.contains("ocv"))
# calculating
st["DEt"] = (
st["voltage_last"] - st["voltage_first"]
) # should remove the IR drop here (maybe calculate it based on the measured IR time the current?)
st["DEs"] = n3 - n1
st["D"] = A / tau * (st["DEs"] / st["DEt"]) ** 2
return st
# Lets just run it and see what happens
A = calc_A(n_m=0.0011, V_m=12.06, S=1.77)
print(f"A: {A}")
x = auto_calc_D(steptable, 5, A=A, ustep_first=27)
discharge = x.loc[(x.type == "discharge") & (x.valid_D)]
charge = x.loc[(x.type == "charge") & (x.valid_D)]
# discharge.head(25)
%%opts Curve [width=600 xformatter="%6.0f" tools=["hover"]]
X = discharge["discharge_avr"]*1_000_000/mass
Y = discharge["D"]
discharge_diffcurve = hv.Scatter((X,Y), ("capacity"), ("diffusion coef. (cm2 s-1)"), label="discharge").opts(size=12) * hv.Curve((X,Y)).opts(alpha=0.4)
X = charge["charge_avr"]*1_000_000/mass
Y = charge["D"]
charge_diffcurve = hv.Scatter((X,Y), ("capacity"), ("diffusion coef. (cm2 s-1)"), label="charge").opts(size=12) * hv.Curve((X,Y)).opts(alpha=0.4)
discharge_diffcurve * charge_diffcurve
cycle = 5
titcurve_ustep = 28
# filtering wrt cycle number to get the Data_Points for the step
datapoints = x.loc[
(x.type == "discharge") & (x.ustep == titcurve_ustep), ["point_first", "point_last"]
]
datapoints
# Using the data points to select the data from dfdata
first = datapoints.iloc[0, 0]
last = datapoints.iloc[0, 1]
dftit = dfdata.loc[first:last, :]
dftit.head()
dfcurve = (
hv.Curve(dftit, ("Step_Time", "time"), ("Voltage", "voltage"))
.opts(color="grey", alpha=0.5)
.opts(width=1000, xformatter="%6.0f")
)
min_time = hv.VLine(100.0)
max_time = hv.VLine(1000.0)
a = hv.Arrow(x=500, y=0.31, text="fit", direction="v")
dfcurve * min_time * max_time * a
# plotting dE vs sqrt(t) and doing a linear regression to find the slope
dE = dftit["Voltage"]
t = np.sqrt(dftit["Step_Time"])
decurve = hv.Scatter((t, dE), "sqrt-time", "voltage").opts(
width=1000, xformatter="%6.0f"
)
decurve
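# The comment above mentions a linear regression; a minimal numpy sketch, restricted
# (as an assumption) to the 100 s - 1000 s window marked by the VLines two cells up:
mask = (dftit["Step_Time"] >= 100.0) & (dftit["Step_Time"] <= 1000.0)
slope, intercept = np.polyfit(
    np.sqrt(dftit.loc[mask, "Step_Time"]), dftit.loc[mask, "Voltage"], deg=1
)
print(f"dE/d(sqrt(t)) slope: {slope:.4e} V s^-0.5")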
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Prepare the dataset. Download all works of Shakespeare concatenated. Other plain text files can also be used.
Step2: Conduct SGD
Step3: Checkpoint
Step4: Sample
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
from __future__ import print_function
from future import standard_library
standard_library.install_aliases()
from builtins import zip
from builtins import range
from builtins import object
from past.utils import old_div
import pickle as pickle
import numpy as np
import argparse
import sys
from tqdm import tnrange, tqdm_notebook
# sys.path.append(os.path.join(os.path.dirname(__file__), '../../build/python'))
from singa import layer
from singa import loss
from singa import device
from singa import tensor
from singa import optimizer
from singa import initializer
from singa.proto import model_pb2
from singa import utils
class Data(object):
def __init__(self, fpath, batch_size=32, seq_length=100, train_ratio=0.8):
'''Data object for loading a plain text file.
Args:
fpath, path to the text file.
train_ratio, split the text file into train and test sets, where
train_ratio of the characters are in the train set.
'''
self.raw_data = open(fpath, 'r').read() # read text file
chars = list(set(self.raw_data))
self.vocab_size = len(chars)
self.char_to_idx = {ch: i for i, ch in enumerate(chars)}
self.idx_to_char = {i: ch for i, ch in enumerate(chars)}
data = [self.char_to_idx[c] for c in self.raw_data]
# seq_length + 1 for the data + label
nsamples = old_div(len(data), (1 + seq_length))
data = data[0:nsamples * (1 + seq_length)]
data = np.asarray(data, dtype=np.int32)
data = np.reshape(data, (-1, seq_length + 1))
# shuffle all sequences
np.random.shuffle(data)
self.train_dat = data[0:int(data.shape[0]*train_ratio)]
self.num_train_batch = old_div(self.train_dat.shape[0], batch_size)
self.val_dat = data[self.train_dat.shape[0]:]
self.num_test_batch = old_div(self.val_dat.shape[0], batch_size)
self.batch_size = batch_size
self.seq_length = seq_length
print('train dat', self.train_dat.shape)
print('val dat', self.val_dat.shape)
def numpy2tensors(npx, npy, dev):
'''batch, seq, dim -- > seq, batch, dim'''
tmpx = np.swapaxes(npx, 0, 1)
tmpy = np.swapaxes(npy, 0, 1)
inputs = []
labels = []
for t in range(tmpx.shape[0]):
x = tensor.from_numpy(tmpx[t])
y = tensor.from_numpy(tmpy[t])
x.to_device(dev)
y.to_device(dev)
inputs.append(x)
labels.append(y)
return inputs, labels
def convert(batch, batch_size, seq_length, vocab_size, dev):
'''convert a batch of data into a sequence of input tensors'''
y = batch[:, 1:]
x1 = batch[:, :seq_length]
x = np.zeros((batch_size, seq_length, vocab_size), dtype=np.float32)
for b in range(batch_size):
for t in range(seq_length):
c = x1[b, t]
x[b, t, c] = 1
return numpy2tensors(x, y, dev)
def get_lr(epoch):
return old_div(0.001, float(1 << (old_div(epoch, 50))))
hidden_size=32
num_stacks=1
dropout=0.5
data = Data('static/shakespeare_input.txt')
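# Not part of the original notebook: a quick look at what the Data helper defined above produced.
print('vocab size:', data.vocab_size)
print('train batches per epoch:', data.num_train_batch)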
# SGD with L2 gradient normalization
opt = optimizer.RMSProp(constraint=optimizer.L2Constraint(5))
cuda = device.create_cuda_gpu()
rnn = layer.LSTM(name='lstm', hidden_size=hidden_size, num_stacks=num_stacks, dropout=dropout, input_sample_shape=(data.vocab_size,))
rnn.to_device(cuda)
rnn_w = rnn.param_values()[0]
rnn_w.uniform(-0.08, 0.08)
dense = layer.Dense('dense', data.vocab_size, input_sample_shape=(32,))
dense.to_device(cuda)
dense_w = dense.param_values()[0]
dense_b = dense.param_values()[1]
print('dense w ', dense_w.shape)
print('dense b ', dense_b.shape)
initializer.uniform(dense_w, dense_w.shape[0], 0)
print('dense weight l1 = %f' % (dense_w.l1()))
dense_b.set_value(0)
print('dense b l1 = %f' % (dense_b.l1()))
g_dense_w = tensor.Tensor(dense_w.shape, cuda)
g_dense_b = tensor.Tensor(dense_b.shape, cuda)
lossfun = loss.SoftmaxCrossEntropy()
train_loss = 0
for epoch in range(3):
bar = tnrange(data.num_train_batch, desc='Epoch %d' % epoch)
for b in bar:
batch = data.train_dat[b * data.batch_size: (b + 1) * data.batch_size]
inputs, labels = convert(batch, data.batch_size, data.seq_length, data.vocab_size, cuda)
inputs.append(tensor.Tensor())
inputs.append(tensor.Tensor())
outputs = rnn.forward(model_pb2.kTrain, inputs)[0:-2]
grads = []
batch_loss = 0
g_dense_w.set_value(0.0)
g_dense_b.set_value(0.0)
for output, label in zip(outputs, labels):
act = dense.forward(model_pb2.kTrain, output)
lvalue = lossfun.forward(model_pb2.kTrain, act, label)
batch_loss += lvalue.l1()
grad = lossfun.backward()
grad /= data.batch_size
grad, gwb = dense.backward(model_pb2.kTrain, grad)
grads.append(grad)
g_dense_w += gwb[0]
g_dense_b += gwb[1]
# print output.l1(), act.l1()
bar.set_postfix(train_loss=old_div(batch_loss, data.seq_length))
train_loss += batch_loss
grads.append(tensor.Tensor())
grads.append(tensor.Tensor())
g_rnn_w = rnn.backward(model_pb2.kTrain, grads)[1][0]
dense_w, dense_b = dense.param_values()
opt.apply_with_lr(epoch, get_lr(epoch), g_rnn_w, rnn_w, 'rnnw')
opt.apply_with_lr(epoch, get_lr(epoch), g_dense_w, dense_w, 'dense_w')
opt.apply_with_lr(epoch, get_lr(epoch), g_dense_b, dense_b, 'dense_b')
print('\nEpoch %d, train loss is %f' % (epoch, train_loss / data.num_train_batch / data.seq_length))
model_path= 'static/model_' + str(epoch) + '.bin'
with open(model_path, 'wb') as fd:
print('saving model to %s' % model_path)
d = {}
for name, w in zip(['rnn_w', 'dense_w', 'dense_b'],[rnn_w, dense_w, dense_b]):
d[name] = tensor.to_numpy(w)
d['idx_to_char'] = data.idx_to_char
d['char_to_idx'] = data.char_to_idx
d['hidden_size'] = hidden_size
d['num_stacks'] = num_stacks
d['dropout'] = dropout
pickle.dump(d, fd)
fd.close()
nsamples = 300
seed_text = "Before we proceed any further, hear me speak."
do_sample = True
with open(model_path, 'rb') as fd:
d = pickle.load(fd)
rnn_w = tensor.from_numpy(d['rnn_w'])
idx_to_char = d['idx_to_char']
char_to_idx = d['char_to_idx']
vocab_size = len(idx_to_char)
dense_w = tensor.from_numpy(d['dense_w'])
dense_b = tensor.from_numpy(d['dense_b'])
hidden_size = d['hidden_size']
num_stacks = d['num_stacks']
dropout = d['dropout']
rnn = layer.LSTM(name='lstm', hidden_size=hidden_size,
num_stacks=num_stacks, dropout=dropout,
input_sample_shape=(len(idx_to_char),))
rnn.to_device(cuda)
rnn.param_values()[0].copy_data(rnn_w)
dense = layer.Dense('dense', vocab_size, input_sample_shape=(hidden_size,))
dense.to_device(cuda)
dense.param_values()[0].copy_data(dense_w)
dense.param_values()[1].copy_data(dense_b)
hx = tensor.Tensor((num_stacks, 1, hidden_size), cuda)
cx = tensor.Tensor((num_stacks, 1, hidden_size), cuda)
hx.set_value(0.0)
cx.set_value(0.0)
if len(seed_text) > 0:
for c in seed_text:
x = np.zeros((1, vocab_size), dtype=np.float32)
x[0, char_to_idx[c]] = 1
tx = tensor.from_numpy(x)
tx.to_device(cuda)
inputs = [tx, hx, cx]
outputs = rnn.forward(model_pb2.kEval, inputs)
y = dense.forward(model_pb2.kEval, outputs[0])
y = tensor.softmax(y)
hx = outputs[1]
cx = outputs[2]
sys.stdout.write(seed_text)
else:
y = tensor.Tensor((1, vocab_size), cuda)
y.set_value(old_div(1.0, vocab_size))
for i in range(nsamples):
y.to_host()
prob = tensor.to_numpy(y)[0]
if do_sample:
cur = np.random.choice(vocab_size, 1, p=prob)[0]
else:
cur = np.argmax(prob)
sys.stdout.write(idx_to_char[cur])
x = np.zeros((1, vocab_size), dtype=np.float32)
x[0, cur] = 1
tx = tensor.from_numpy(x)
tx.to_device(cuda)
inputs = [tx, hx, cx]
outputs = rnn.forward(model_pb2.kEval, inputs)
y = dense.forward(model_pb2.kEval, outputs[0])
y = tensor.softmax(y)
hx = outputs[1]
cx = outputs[2]
print('')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Multi-worker training with Keras
Step2: Prepare the dataset
Step3: Build the Keras model
Step4: First, train for a small number of epochs with a single worker and check that everything works. The loss should decrease and the accuracy should approach 1.0 as the epochs progress.
Step5: Multi-worker configuration
Step6: Note
Step7: Dataset sharding and batch size
Step8: Another point to note is the batch size of the datasets. The code above sets GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS, i.e. NUM_WORKERS times larger than for a single worker. This is because the batch size each worker actually receives is the global batch size passed to tf.data.Dataset.batch() divided by the number of workers, so this change keeps the per-worker batch size equal to the single-worker case.
Step9: If a worker is preempted, the whole cluster pauses until the preempted worker comes back. When it rejoins the cluster, the other workers also restart. Every worker reads the previously saved checkpoint file and restores its former state, which brings the cluster back to a consistent state, and training then resumes.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
import numpy as np
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The values in the `x` arrays are uint8 in the [0, 255] range.
# Scale them to float32 values in the [0, 1] range.
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.Input(shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
per_worker_batch_size = 64
single_worker_dataset = mnist_dataset(per_worker_batch_size)
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
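# Not part of the original notebook: a minimal sketch of the `TF_CONFIG` environment variable that
# each worker process would normally set before creating the strategy below. The host addresses and
# the two-worker cluster are placeholders, not values from this document.
import json
import os
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {'worker': ['host1.example.com:12345', 'host2.example.com:23456']},
    'task': {'type': 'worker', 'index': 0}  # each worker process sets its own index
})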
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
NUM_WORKERS = 4
# The batch size here must be scaled up by the number of workers, because `tf.data.Dataset.batch`
# expects the global batch size. It was 64 for a single worker; now it becomes 64 * NUM_WORKERS.
global_batch_size = per_worker_batch_size * NUM_WORKERS
multi_worker_dataset = mnist_dataset(global_batch_size)
with strategy.scope():
# Build and compile the model inside `strategy.scope()`.
multi_worker_model = build_and_compile_cnn_model()
# Keras `model.fit()` trains the model for the given number of epochs and steps per epoch.
# The numbers here are for illustration only and are not enough to train a good model.
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
multi_worker_dataset_no_auto_shard = multi_worker_dataset.with_options(options)
# Replace the `filepath` argument with a file system path that every worker can access.
callbacks = [tf.keras.callbacks.ModelCheckpoint(filepath='/tmp/keras-ckpt')]
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset,
epochs=3,
steps_per_epoch=70,
callbacks=callbacks)
# Saving the model
# `is_chief` is a utility function that inspects the cluster spec and the current task type,
# returning True if this worker is the chief and False otherwise.
def is_chief():
return True
if is_chief():
# Model directory; ideally a cloud bucket.
path = '/tmp/model_dir'
else:
# Save to a path that is unique to each worker.
worker_id = 1
path = '/tmp/model_dir/worker_tmp_' + str(worker_id)
multi_worker_model.save(path)
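# Not part of the original notebook: a hedged sketch of how `is_chief` could actually inspect the
# cluster spec instead of always returning True. It assumes `TF_CONFIG` is set as illustrated above.
import json
import os
def is_chief_from_tf_config():
    tf_config = json.loads(os.environ.get('TF_CONFIG', '{}'))
    task = tf_config.get('task', {})
    # By convention, worker 0 acts as the chief when no dedicated chief task is defined.
    return task.get('type', 'worker') == 'worker' and task.get('index', 0) == 0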
# On the chief
# Restore the checkpoint
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
manager = tf.train.CheckpointManager(
checkpoint, directory=path, max_to_keep=5)
status = checkpoint.restore(manager.latest_checkpoint)
# On a worker
# The path where the chief saved the model.
model_dir_path = '/tmp/model_dir'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
manager = tf.train.CheckpointManager(
checkpoint, directory=path, max_to_keep=5)
latest_checkpoint = tf.train.latest_checkpoint(model_dir_path)
status = checkpoint.restore(latest_checkpoint)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now import some ML stuff
Step2: Mind the seed!!
Step3: Let's begin this introduction with usage examples.
Step4: As usual do the train-test split.
Step5: scikit-learn interface
Step6: Fit the a gradient boosted tree ensemble.
Step7: Now let's validate.
Step8: Let's check out the confusuion matrix
Step9: Let's plot one-vs-all ROC-AUC curves
Step10: alternative interface
Step11: DMatrix exports several useful methods
Step12: The xgboost.train function initializes an appropriate booster, and then fits it on the provided train dataset. Besides the booster parameters and the train DMatrix, it accepts
Step13: The method xgboost.booster.update performs one iteration of gradient boosting
Step14: Besides these methods xgboost.booster exports
Step15: Let's plot one-vs-all ROC-AUC curves
|
<ASSISTANT_TASK:>
Python Code:
import time, os, re, zipfile
import numpy as np, pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import sklearn as sk, xgboost as xg
from sklearn.model_selection import train_test_split
# from sklearn.cross_validation import train_test_split  # removed in scikit-learn >= 0.20
random_state = np.random.RandomState( seed = 0x0BADC0DE )
df_train = pd.read_csv( zipfile.ZipFile( 'train.csv.zip' ).open( 'train.csv' ), index_col = 'id' )
X = np.asanyarray( df_train.drop( 'target', axis = 1 ) )
y = sk.preprocessing.LabelEncoder( ).fit_transform( df_train[ 'target' ] )
X_train, X_, y_train, y_ = train_test_split( X, y, test_size = 0.25, random_state = random_state )
X_valid, X_test, y_valid, y_test = train_test_split( X_, y_, test_size = 0.5, random_state = random_state )
clf_ = xg.XGBClassifier( n_estimators = 50,
gamma = 1.0,
max_depth = 1000,
objective = "multi:softmax",
nthread = -1,
silent = False )
clf_.fit( X_train, y_train, eval_set = [ ( X_valid, y_valid ), ], verbose = True )
y_predict = clf_.predict( X_test )
y_score = clf_.predict_proba( X_test )
pd.DataFrame( sk.metrics.confusion_matrix( y_test, y_predict ), index = clf_.classes_, columns = clf_.classes_ )
fig = plt.figure( figsize = ( 16, 9 ) )
axis = fig.add_subplot( 111 )
axis.set_title( 'ROC-AUC (ovr) curves for the heldout dataset' )
axis.set_xlabel( "False positive rate" ) ; axis.set_ylabel( "True positive rate" )
axis.set_ylim( -0.01, 1.01 ) ; axis.set_xlim( -0.01, 1.01 )
for cls_ in clf_.classes_ :
fpr, tpr, _ = sk.metrics.roc_curve( y_test, y_score[:, cls_], pos_label = cls_ )
axis.plot( fpr, tpr, lw = 2, zorder = cls_, label = "C%d" % ( cls_, ) )
axis.legend( loc = 'lower right', shadow = True, ncol = 3 )
train_dmat = xg.DMatrix( data = X_train,
label = y_train,
feature_names = None,
feature_types = None )
test_dmat = xg.DMatrix( data = X_test, label = y_test )
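# Not part of the original notebook: a quick look at a few DMatrix methods mentioned in the
# description above (num_row, num_col and get_label exist in the xgboost API; the printed values
# simply reflect the train/test split made earlier).
print( train_dmat.num_row( ), train_dmat.num_col( ) )
print( test_dmat.get_label( )[ :10 ] )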
xgb_params = {
'bst:max_depth':2,
'bst:eta':1,
'silent':1,
'objective':'multi:softmax',
'num_class': 9,
'nthread': 2,
'eval_metric' : 'auc'
}
xgbooster_ = xg.train( params = xgb_params,
dtrain = train_dmat,
num_boost_round = 10,
evals = (),
obj = None,
feval = None,
maximize = False,
early_stopping_rounds = None,
evals_result = None,
verbose_eval = True,
learning_rates = None,
xgb_model = None )
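# Not part of the original notebook: two Booster utilities, sketched as illustration. get_fscore()
# returns per-feature split counts and save_model() persists the booster; the file name below is a
# placeholder, not taken from this document.
print( sorted( xgbooster_.get_fscore( ).items( ), key = lambda kv: -kv[ 1 ] )[ :5 ] )
xgbooster_.save_model( 'otto_booster.model' )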
y_predict = xgbooster_.predict( test_dmat )
y_score = xgbooster_.predict( test_dmat, output_margin = True )
pd.DataFrame( sk.metrics.confusion_matrix( y_test, y_predict ), index = clf_.classes_, columns = clf_.classes_ )
fig = plt.figure( figsize = ( 16, 9 ) )
axis = fig.add_subplot( 111 )
axis.set_title( 'ROC-AUC (ovr) curves for the heldout dataset' )
axis.set_xlabel( "False positive rate" ) ; axis.set_ylabel( "True positive rate" )
axis.set_ylim( -0.01, 1.01 ) ; axis.set_xlim( -0.01, 1.01 )
for cls_ in clf_.classes_ :
fpr, tpr, _ = sk.metrics.roc_curve( y_test, y_score[:, cls_], pos_label = cls_ )
axis.plot( fpr, tpr, lw = 2, zorder = cls_, label = "C%d" % ( cls_, ) )
axis.legend( loc = 'lower right', shadow = True, ncol = 3 )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: if you want to see logging events.
Step2: This is a tiny corpus of nine documents, each consisting of only a single sentence.
Step3: Your way of processing the documents will likely vary; here, I only split on whitespace to tokenize, followed by lowercasing each word. In fact, I use this particular (simplistic and inefficient) setup to mimic the experiment done in Deerwester et al.’s original LSA article (Table 2).
Step4: Here we assigned a unique integer id to all words appearing in the corpus with the gensim.corpora.dictionary.Dictionary class. This sweeps across the texts, collecting word counts and relevant statistics. In the end, we see there are twelve distinct words in the processed corpus, which means each document will be represented by twelve numbers (ie., by a 12-D vector). To see the mapping between words and their ids
Step5: To actually convert tokenized documents to vectors
Step6: The function doc2bow() simply counts the number of occurrences of each distinct word, converts the word to its integer word id and returns the result as a sparse vector. The sparse vector [(0, 1), (1, 1)] therefore reads
Step7: By now it should be clear that the vector feature with id=10 stands for the question “How many times does the word graph appear in the document?” and that the answer is “zero” for the first six documents and “one” for the remaining three. As a matter of fact, we have arrived at exactly the same corpus of vectors as in the Quick Example.
Step8: The assumption that each document occupies one line in a single file is not important; you can mold the __iter__ function to fit your input format, whatever it is. Walking directories, parsing XML, accessing network... Just parse your input to retrieve a clean list of tokens in each document, then convert the tokens via a dictionary to their ids and yield the resulting sparse vector inside __iter__.
Step9: Corpus is now an object. We didn’t define any way to print it, so print just outputs address of the object in memory. Not very useful. To see the constituent vectors, let’s iterate over the corpus and print each document vector (one at a time)
Step10: Although the output is the same as for the plain Python list, the corpus is now much more memory friendly, because at most one vector resides in RAM at a time. Your corpus can now be as large as you want.
Step11: And that is all there is to it! At least as far as bag-of-words representation is concerned. Of course, what we do with such corpus is another question; it is not at all clear how counting the frequency of distinct words could be useful. As it turns out, it isn’t, and we will need to apply a transformation on this simple representation first, before we can use it to compute any meaningful document vs. document similarities. Transformations are covered in the next tutorial, but before that, let’s briefly turn our attention to corpus persistency.
Step12: Other formats include Joachim’s SVMlight format, Blei’s LDA-C format and GibbsLDA++ format.
Step13: Conversely, to load a corpus iterator from a Matrix Market file
Step14: Corpus objects are streams, so typically you won’t be able to print them directly
Step15: Instead, to view the contents of a corpus
Step16: or
Step17: The second way is obviously more memory-friendly, but for testing and development purposes, nothing beats the simplicity of calling list(corpus).
Step18: In this way, gensim can also be used as a memory-efficient I/O format conversion tool
Step19: and from/to scipy.sparse matrices
|
<ASSISTANT_TASK:>
Python Code:
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
from gensim import corpora
documents = ["Human machine interface for lab abc computer applications",
"A survey of user opinion of computer system response time",
"The EPS user interface management system",
"System and human system engineering testing of EPS",
"Relation of user perceived response time to error measurement",
"The generation of random binary unordered trees",
"The intersection graph of paths in trees",
"Graph minors IV Widths of trees and well quasi ordering",
"Graph minors A survey"]
# remove common words and tokenize
stoplist = set('for a of the and to in'.split())
texts = [[word for word in document.lower().split() if word not in stoplist]
for document in documents]
# remove words that appear only once
from collections import defaultdict
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
texts = [[token for token in text if frequency[token] > 1] for text in texts]
from pprint import pprint # pretty-printer
pprint(texts)
dictionary = corpora.Dictionary(texts)
dictionary.save('/tmp/deerwester.dict') # store the dictionary, for future reference
print(dictionary)
print(dictionary.token2id)
new_doc = "Human computer interaction"
new_vec = dictionary.doc2bow(new_doc.lower().split())
print(new_vec) # the word "interaction" does not appear in the dictionary and is ignored
corpus = [dictionary.doc2bow(text) for text in texts]
corpora.MmCorpus.serialize('/tmp/deerwester.mm', corpus) # store to disk, for later use
for c in corpus:
print(c)
class MyCorpus(object):
def __iter__(self):
for line in open('datasets/mycorpus.txt'):
# assume there's one document per line, tokens separated by whitespace
yield dictionary.doc2bow(line.lower().split())
corpus_memory_friendly = MyCorpus() # doesn't load the corpus into memory!
print(corpus_memory_friendly)
for vector in corpus_memory_friendly: # load one vector into memory at a time
print(vector)
from six import iteritems
# collect statistics about all tokens
dictionary = corpora.Dictionary(line.lower().split() for line in open('datasets/mycorpus.txt'))
# remove stop words and words that appear only once
stop_ids = [dictionary.token2id[stopword] for stopword in stoplist
if stopword in dictionary.token2id]
once_ids = [tokenid for tokenid, docfreq in iteritems(dictionary.dfs) if docfreq == 1]
# remove stop words and words that appear only once
dictionary.filter_tokens(stop_ids + once_ids)
# remove gaps in id sequence after words that were removed
dictionary.compactify()
print(dictionary)
# create a toy corpus of 2 documents, as a plain Python list
corpus = [[(1, 0.5)], []] # make one document empty, for the heck of it
corpora.MmCorpus.serialize('/tmp/corpus.mm', corpus)
corpora.SvmLightCorpus.serialize('/tmp/corpus.svmlight', corpus)
corpora.BleiCorpus.serialize('/tmp/corpus.lda-c', corpus)
corpora.LowCorpus.serialize('/tmp/corpus.low', corpus)
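# Not part of the original tutorial cell: the same corpus classes can read these files back, e.g.
# the SVMlight file written just above (assuming it exists from the previous lines).
corpus_svmlight = corpora.SvmLightCorpus('/tmp/corpus.svmlight')
print(list(corpus_svmlight))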
corpus = corpora.MmCorpus('/tmp/corpus.mm')
print(corpus)
# one way of printing a corpus: load it entirely into memory
print(list(corpus)) # calling list() will convert any sequence to a plain Python list
# another way of doing it: print one document at a time, making use of the streaming interface
for doc in corpus:
print(doc)
corpora.BleiCorpus.serialize('/tmp/corpus.lda-c', corpus)
import gensim
import numpy as np
numpy_matrix = np.random.randint(10, size=[5,2])
corpus = gensim.matutils.Dense2Corpus(numpy_matrix)
numpy_matrix_dense = gensim.matutils.corpus2dense(corpus, num_terms=10)
import scipy.sparse
scipy_sparse_matrix = scipy.sparse.random(5,2)
corpus = gensim.matutils.Sparse2Corpus(scipy_sparse_matrix)
scipy_csc_matrix = gensim.matutils.corpus2csc(corpus)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First reload the data we generated in 1_notmnist.ipynb.
Step2: Reformat into a shape that's more adapted to the models we're going to train
Step3: We're first going to train a multinomial logistic regression using simple gradient descent.
Step4: Let's run this computation and iterate
Step5: Let's now switch to stochastic gradient descent training instead, which is much faster.
Step6: Let's run it
Step7: Problem
|
<ASSISTANT_TASK:>
Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
print(len(train_dataset[0]))
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random valued following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
num_steps = 801
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.initialize_all_variables().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Validation accuracy: %.1f%%\n' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%\n" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Hidden layer
hidden_layer_size = 1024
weights_h = tf.Variable(
tf.truncated_normal([image_size * image_size, hidden_layer_size]))
biases_h = tf.Variable(tf.zeros([hidden_layer_size]))
hidden = tf.nn.relu(tf.matmul(tf_train_dataset, weights_h) + biases_h)
# Output layer
weights_o = tf.Variable(
tf.truncated_normal([hidden_layer_size, num_labels]))
biases_o = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(hidden, weights_o) + biases_o
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_hidden = tf.nn.relu(tf.matmul(tf_valid_dataset, weights_h) + biases_h)
valid_logits = tf.matmul(valid_hidden, weights_o) + biases_o
valid_prediction = tf.nn.softmax(valid_logits)
test_hidden = tf.nn.relu(tf.matmul(tf_test_dataset, weights_h) + biases_h)
test_logits = tf.matmul(test_hidden, weights_o) + biases_o
test_prediction = tf.nn.softmax(test_logits)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%\n" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Init
Step2: Nestly
Step3: Adding DESeq results table of 'NA' if file is not present
Step4: aggregating confusion matrix data
Step5: Plotting accuracy
Step6: Rare taxa
Step7: Sample size of dominants and rares
Step8: Plotting sensitivity ~ taxon_abundance
Step9: Notes
Step10:
|
<ASSISTANT_TASK:>
Python Code:
workDir = '/home/nick/notebook/SIPSim/dev/bac_genome1210/'
sourceDir = os.path.join(workDir, 'atomIncorp_evenness')
buildDir = os.path.join(workDir, 'atomIncorp_evenness_abund')
R_dir = '/home/nick/notebook/SIPSim/lib/R/'
import glob
from os.path import abspath
import nestly
from IPython.display import Image, display
%load_ext rpy2.ipython
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
if not os.path.isdir(buildDir):
os.makedirs(buildDir)
# building tree structure
nest = nestly.Nest()
## varying params
#-- full run --#
#nest.add('percIncorp', [0, 25, 50, 100])
#nest.add('percTaxa', [1, 5, 10, 25, 50])
#nest.add('rep', range(1,21))
#-- full run --#
nest.add('abund_dist_p', ['mean:10,sigma:1','mean:10,sigma:2','mean:10,sigma:3'])
nest.add('percIncorp', [0,5,10,15,20,30,40,50,75,100])
nest.add('rep', range(1,21))
## set params
nest.add('percTaxa', [10], create_dir=False)
nest.add('np', [6], create_dir=False)
nest.add('abs', ['1e9'], create_dir=False)
nest.add('subsample_mean', [30000], create_dir=False)
nest.add('subsample_scale', [5000], create_dir=False)
nest.add('BD_min', [1.71], create_dir=False)
nest.add('BD_max', [1.75], create_dir=False)
nest.add('padj', [0.1], create_dir=False)
nest.add('log2', [0.25], create_dir=False)
nest.add('topTaxaToPlot', [100], create_dir=False)
## input/output files
nest.add('buildDir', [buildDir], create_dir=False)
nest.add('sourceDir', [sourceDir], create_dir=False)
nest.add('frag_file', ['ampFrags_kde_dif'], create_dir=False)
nest.add('comm_file', ['comm.txt'], create_dir=False)
nest.add('R_dir', [R_dir], create_dir=False)
# building directory tree
nest.build(buildDir)
bashFile = os.path.join(buildDir, 'SIPSimRun.sh')
%%writefile $bashFile
#!/bin/bash
export PATH={R_dir}:$PATH
# copying DESeq files
cp {sourceDir}/{abund_dist_p}/{percIncorp}/{rep}/{comm_file} .
cp {sourceDir}/{abund_dist_p}/{percIncorp}/{rep}/OTU_n2_abs{abs}_sub-norm_DESeq2 .
cp {sourceDir}/{abund_dist_p}/{percIncorp}/{rep}/{frag_file}_incorp_BD-shift.txt .
## DESeq2
DESeq2_rare-dominant.r \
OTU_n2_abs{abs}_sub-norm_DESeq2 \
{comm_file} \
-o OTU_n2_abs{abs}_sub-norm_DESeq2
## Confusion matrix
### dominant
DESeq2_confuseMtx.r \
{frag_file}_incorp_BD-shift.txt \
OTU_n2_abs{abs}_sub-norm_DESeq2_dom \
--padj {padj} \
-o DESeq2-cMtx_dom
### rare
DESeq2_confuseMtx.r \
{frag_file}_incorp_BD-shift.txt \
OTU_n2_abs{abs}_sub-norm_DESeq2_rare \
--padj {padj} \
-o DESeq2-cMtx_rare
!chmod 775 $bashFile
!cd $workDir; \
nestrun -j 20 --template-file $bashFile -d atomIncorp_evenness_abund --log-file log.txt
p = os.path.join(workDir, 'atomIncorp_evenness_abund', 'mean*/*/*', 'DESeq2-cMtx_rare_byClass.csv')
rareFiles = glob.glob(p)
p = os.path.join(workDir, 'atomIncorp_evenness_abund', 'mean*/*/*', 'DESeq2-cMtx_dom_byClass.csv')
domFiles = glob.glob(p)
print len(rareFiles)
print len(domFiles)
byClass_tmp = [
['Sensitivity','NA','NA',0,0],
['Specificity','NA','NA',0,0],
['Pos Pred Value','NA','NA',0,0],
['Neg Pred Value','NA','NA',0,0],
['Prevalence','NA','NA',0,0],
['Detection Rate','NA','NA',0,0],
['Detection Prevalence','NA','NA',0,0],
['Balanced Accuracy','NA','NA',0,0]
]
byClass_tmp = pd.DataFrame(byClass_tmp, columns=['','byClass','abund_dist_p','percIncorp','rep'])
# finding files not present
# note: str.rstrip() strips a trailing *character set*, not a suffix, so trim the suffixes explicitly
rareFiles = set([x.replace('_rare_byClass.csv', '') for x in rareFiles])
domFiles = set([x.replace('_dom_byClass.csv', '') for x in domFiles])
# in rare but not in dominant list
missingFiles = list(rareFiles - domFiles)
cols = ['abund_dist_p','percIncorp','rep']
for f in missingFiles:
df_tmp = byClass_tmp.copy()
x = f.split('/')
df_tmp.loc[:,cols] = x[-4:-1]
outFile = '_'.join([f, 'dom_byClass.csv'])
df_tmp.to_csv(outFile)
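# Not part of the original notebook: a quick count of the placeholder tables written above.
print(len(missingFiles), 'placeholder dominant-taxa confusion-matrix tables written')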
## byClass
### dominant
!cd $workDir; \
nestagg delim \
-d atomIncorp_evenness_abund \
-k abund_dist_p,percIncorp,rep \
-o ./atomIncorp_evenness_abund/DESeq2-cMtx_dom_byClass.csv \
DESeq2-cMtx_dom_byClass.csv
### rare
!cd $workDir; \
nestagg delim \
-d atomIncorp_evenness_abund \
-k abund_dist_p,percIncorp,rep \
-o ./atomIncorp_evenness_abund/DESeq2-cMtx_rare_byClass.csv \
DESeq2-cMtx_rare_byClass.csv
%%R -i workDir
setwd(workDir)
byClass = read.csv('./atomIncorp_evenness_abund/DESeq2-cMtx_dom_byClass.csv')
byClass %>% head
%%R -w 700
to.keep = c('Sensitivity', 'Specificity', 'Balanced Accuracy')
byClass.f = byClass %>%
filter(X %in% to.keep) %>%
mutate(percIncorp = as.character(percIncorp))
byClass.f$percIncorp = factor(byClass.f$percIncorp, levels=sort(unique(as.numeric(byClass.f$percIncorp))))
ggplot(byClass.f, aes(percIncorp, byClass, fill=abund_dist_p)) +
geom_boxplot() +
labs(y='Value', x='Atom percent 13C') +
facet_grid(X ~ .) +
theme(
text = element_text(size=16)
)
%%R -i workDir
setwd(workDir)
byClass = read.csv('./atomIncorp_evenness_abund/DESeq2-cMtx_rare_byClass.csv')
byClass %>% head
%%R -w 700
to.keep = c('Sensitivity', 'Specificity', 'Balanced Accuracy')
byClass.f = byClass %>%
filter(X %in% to.keep) %>%
mutate(percIncorp = as.character(percIncorp))
byClass.f$percIncorp = factor(byClass.f$percIncorp, levels=sort(unique(as.numeric(byClass.f$percIncorp))))
ggplot(byClass.f, aes(percIncorp, byClass, fill=abund_dist_p)) +
geom_boxplot() +
labs(y='Value', x='Atom percent 13C') +
facet_grid(X ~ .) +
theme(
text = element_text(size=16)
)
%%R -i workDir
setwd(workDir)
dom.files = grep('DESeq2_dom$', list.files('./atomIncorp_evenness_abund/', recursive=TRUE), value=TRUE)
rare.files = grep('DESeq2_rare$', list.files('./atomIncorp_evenness_abund/', recursive=TRUE), value=TRUE)
read_taxa = function(file.name, dir='atomIncorp_evenness_abund'){
f = paste(c(dir, file.name), collapse='/')
deseq.res = readRDS(f)
n.taxa = length(unique(rownames(deseq.res)))
return(n.taxa)
}
dom.taxa = list()
for(f in dom.files){
dom.taxa[[f]] = read_taxa(f)
}
rare.taxa = list()
for(f in rare.files){
rare.taxa[[f]] = read_taxa(f)
}
%%R
tbl.dom = as.data.frame(do.call(rbind, dom.taxa))
tbl.dom$abund = 'Dominant'
tbl.rare = as.data.frame(do.call(rbind, rare.taxa))
tbl.rare$abund = 'Rare'
tbl.dom.rare = rbind(tbl.dom, tbl.rare)
colnames(tbl.dom.rare)[1] = 'N.taxa'
tbl.dom.rare$file = rownames(tbl.dom.rare)
tbl.dom.rare = tbl.dom.rare %>%
separate(file, c('abund_dist_p','percIncorp','rep','file'), sep='/')
tbl.dom.rare %>% head
%%R -w 700 -h 400
tbl.dom.rare$percIncorp = factor(tbl.dom.rare$percIncorp, levels=sort(unique(as.numeric(tbl.dom.rare$percIncorp))))
#tbl.dom.rare$abund_dist_p = factor(tbl.dom.rare$abund_dist_p, levels=sort(unique(as.numeric(tbl.dom.rare$abund_dist_p))))
ggplot(tbl.dom.rare, aes(percIncorp, N.taxa, fill=abund_dist_p)) +
geom_boxplot(position='dodge') +
labs(y='Number of taxa', x='Atom percent 13C') +
facet_grid(abund ~ ., scales='free_y') +
theme(
text = element_text(size=16)
)
%%bash -s "$R_dir"
find atomIncorp_evenness_abund/ -name "*DESeq2" | \
perl -pe 's/(.+\/).+/\1/' | \
xargs -P 30 -I % bash -c \
"$1DESeq2_addInfo.r %OTU_n2_abs1e9_sub-norm_DESeq2 %comm.txt %ampFrags_kde_dif_incorp_BD-shift.txt > %OTU_n2_abs1e9_sub-norm_DESeq2_info"
%%R -i workDir
setwd(workDir)
deseq.files = grep('_DESeq2_info$', list.files('./atomIncorp_evenness_abund/', recursive=TRUE), value=TRUE)
tbl.l = list()
for(f in deseq.files){
f = paste(c('./atomIncorp_evenness_abund/', f), collapse='/')
tbl.l[[f]] = readRDS(f) %>% mutate(file = f)
}
tbl = do.call(rbind, tbl.l)
rownames(tbl) = seq(1,nrow(tbl))
tbl %>% head
%%R
tbl.e = tbl %>%
mutate(file = gsub('.+//','', file)) %>%
separate(file, c('abund_dist_p','percIncorp','rep','file'), sep='/') #%>%
#unite(percIncorp_percTaxa, percIncorp, percTaxa, sep='_', remove=FALSE)
tbl.e %>% head
%%R
BD.shift.cut = 0.05
padj.cut = 0.1
clsfy = function(guess,known){
if(is.na(guess) | is.na(known)){
return(NA)
}
if(guess == TRUE){
if(guess == known){
return('TP')
} else {
return('FP')
}
} else
if(guess == FALSE){
if(guess == known){
return('TN')
} else {
return('FN')
}
} else {
stop('Error: true or false needed')
}
}
tbl.e = tbl.e %>%
mutate(true_incorp = BD_shift > BD.shift.cut,
DESeq_incorp = padj < padj.cut,
cls = mapply(clsfy, DESeq_incorp, true_incorp))
tbl.e %>% head
%%R
tbl.e$taxon = reorder(tbl.e$taxon, dense_rank(tbl.e$mean_rel_abund_perc))
tbl.e$percIncorp = factor(tbl.e$percIncorp, levels=sort(unique(as.numeric(tbl.e$percIncorp))))
#tbl.e$percTaxa = factor(tbl.e$percTaxa, levels=sort(unique(as.numeric(tbl.e$percTaxa))))
%%R -w 1000
ggplot(tbl.e, aes(taxon, mean_rel_abund_perc, color=cls)) +
geom_point(alpha=0.5) +
facet_grid(abund_dist_p ~ percIncorp) +
theme(
text = element_text(size=16),
axis.text.x = element_blank()
)
%%R
clsfy = function(guess,known){
if(is.na(guess) | is.na(known)){
return(NA)
}
if(guess == TRUE){
if(guess == known){
return(1)
} else {
return(NA)
}
} else
if(guess == FALSE){
if(guess == known){
return(NA)
} else {
return(0)
}
} else {
stop('Error: true or false needed')
}
}
tbl.s.c = tbl.e %>%
mutate(TP_FN = mapply(clsfy, DESeq_incorp, true_incorp)) %>%
filter(! is.na(TP_FN)) %>%
group_by(percIncorp, abund_dist_p) %>%
summarize(cor.pearson = cor(mean_rel_abund_perc, TP_FN))
tbl.s.c %>% head
%%R -w 700 -h 350
ggplot(tbl.s.c, aes(percIncorp, cor.pearson, fill=abund_dist_p)) +
geom_bar(stat='identity', position='dodge', width=0.5) +
labs(x='Atom percent 13C', y='r_pb', title='Point-biserial correlation coefficients') +
theme(
text = element_text(size=16)
)
%%R -i workDir
setwd(workDir)
x.deseq = './validation/OTU_n2_abs1e10_sub20000_DESeq2'
deseq.res = readRDS(x.deseq)
deseq.res = as.data.frame(deseq.res)
deseq.res$taxon = rownames(deseq.res)
deseq.res %>% head
%%R -i workDir
setwd(workDir)
x.comm = 'validation/comm.txt'
comm = read.delim(x.comm, sep='\t')
comm.s = comm %>%
group_by(taxon_name) %>%
summarize(mean_rel_abund_perc = mean(rel_abund_perc)) %>%
mutate(rank = dense_rank(mean_rel_abund_perc))
comm.s %>% head
%%R -i workDir
setwd(workDir)
x.BD = 'validation/ampFrags_kde_dif_incorp_BD-shift.txt'
BD.shift = read.delim(x.BD, sep='\t')
BD.shift = BD.shift %>%
filter(lib2 == 2)
BD.shift %>% head
%%R
tbl.j = inner_join(deseq.res, BD.shift, c('taxon' = 'taxon'))
tbl.j = inner_join(tbl.j, comm.s, c('taxon' = 'taxon_name'))
tbl.j %>% head
%%R
BD.shift.cut = 0.05
padj.cut = 0.1
clsfy = function(guess,known){
if(guess == TRUE){
if(guess == known){
return('TP')
} else {
return('FP')
}
} else
if(guess == FALSE){
if(guess == known){
return('TN')
} else {
return('FN')
}
} else {
stop('Error: true or false needed')
}
}
tbl.j = tbl.j %>%
mutate(true_incorp = BD_shift > BD.shift.cut,
DESeq_incorp = padj < padj.cut,
cls = mapply(clsfy, DESeq_incorp, true_incorp))
tbl.j %>% head
%%R -w 900
tbl.j$taxon = reorder(tbl.j$taxon, tbl.j$mean_rel_abund_perc)
ggplot(tbl.j, aes(taxon, mean_rel_abund_perc, color=cls)) +
geom_point()
%%R -i workDir
setwd(workDir)
deseq.res = readRDS('./validation/OTU_n2_abs1e10_sub20000_DESeq2')
%%R
comm = read.delim('./validation/comm.txt', sep='\t')
comm.s = comm %>%
group_by(taxon_name) %>%
summarize(mean_rel_abund_perc = mean(rel_abund_perc))
comm.s %>% head
%%R
tmp = deseq.res %>%
as.data.frame()
tmp$taxon_name = rownames(tmp)
tmp = inner_join(tmp, comm.s, c('taxon_name' = 'taxon_name'))
tmp = tmp %>%
mutate(dom_rare = mean_rel_abund_perc > 1)
tmp %>% head
%%R
tmp %>% filter(dom_rare == FALSE) %>% nrow %>% print
tmp %>% filter(dom_rare == TRUE) %>% nrow %>% print
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, lets load the word vectors. For this assignment, we will use 50-dimensional GloVe vectors to represent words. Run the following cell to load the word_to_vec_map.
Step3: You've loaded
Step5: Expected Output
Step6: Run the cell below to test your code, this may take 1-2 minutes.
Step7: Expected Output
Step8: Now, you will consider the cosine similarity of different words with $g$. Consider what a positive value of similarity means vs a negative cosine similarity.
Step9: As you can see, female first names tend to have a positive cosine similarity with our constructed vector $g$, while male first names tend to have a negative cosine similarity. This is not surprising, and the result seems acceptable.
Step11: Do you notice anything surprising? It is astonishing how these results reflect certain unhealthy gender stereotypes. For example, "computer" is closer to "man" while "literature" is closer to "woman". Ouch!
Step13: Expected Output
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from w2v_utils import *
words, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt')
word_to_vec_map
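# Not part of the original notebook: a small sanity check on what read_glove_vecs returned.
# It assumes 'the' is in the GloVe vocabulary and that each vector is 50-dimensional, as stated above.
print(len(words), 'words in the vocabulary')
print(len(word_to_vec_map['the']), 'dimensions per word vector')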
# GRADED FUNCTION: cosine_similarity
def cosine_similarity(u, v):
Cosine similarity reflects the degree of similarity between u and v
Arguments:
u -- a word vector of shape (n,)
v -- a word vector of shape (n,)
Returns:
cosine_similarity -- the cosine similarity between u and v defined by the formula above.
distance = 0.0
### START CODE HERE ###
# Compute the dot product between u and v (≈1 line)
dot = np.dot(u,v)
# Compute the L2 norm of u (≈1 line)
norm_u = np.sqrt(np.sum(u*u))
# Compute the L2 norm of v (≈1 line)
norm_v = np.sqrt(np.sum(v*v))
# Compute the cosine similarity defined by formula (1) (≈1 line)
cosine_similarity = dot/(norm_u*norm_v)
### END CODE HERE ###
return cosine_similarity
father = word_to_vec_map["father"]
mother = word_to_vec_map["mother"]
ball = word_to_vec_map["ball"]
crocodile = word_to_vec_map["crocodile"]
france = word_to_vec_map["france"]
italy = word_to_vec_map["italy"]
paris = word_to_vec_map["paris"]
rome = word_to_vec_map["rome"]
print("cosine_similarity(father, mother) = ", cosine_similarity(father, mother))
print("cosine_similarity(ball, crocodile) = ",cosine_similarity(ball, crocodile))
print("cosine_similarity(france - paris, rome - italy) = ",cosine_similarity(france - paris, rome - italy))
# GRADED FUNCTION: complete_analogy
def complete_analogy(word_a, word_b, word_c, word_to_vec_map):
Performs the word analogy task as explained above: a is to b as c is to ____.
Arguments:
word_a -- a word, string
word_b -- a word, string
word_c -- a word, string
word_to_vec_map -- dictionary that maps words to their corresponding vectors.
Returns:
best_word -- the word such that v_b - v_a is close to v_best_word - v_c, as measured by cosine similarity
# convert words to lower case
word_a, word_b, word_c = word_a.lower(), word_b.lower(), word_c.lower()
### START CODE HERE ###
# Get the word embeddings v_a, v_b and v_c (≈1-3 lines)
e_a, e_b, e_c = word_to_vec_map[word_a], word_to_vec_map[word_b], word_to_vec_map[word_c]
### END CODE HERE ###
words = word_to_vec_map.keys()
max_cosine_sim = -100 # Initialize max_cosine_sim to a large negative number
best_word = None # Initialize best_word with None, it will help keep track of the word to output
# loop over the whole word vector set
for w in words:
# to avoid best_word being one of the input words, pass on them.
if w in [word_a, word_b, word_c] :
continue
### START CODE HERE ###
# Compute cosine similarity between the vector (e_b - e_a) and the vector ((w's vector representation) - e_c) (≈1 line)
cosine_sim = cosine_similarity(e_b-e_a, word_to_vec_map[w]-e_c)
# If the cosine_sim is more than the max_cosine_sim seen so far,
# then: set the new max_cosine_sim to the current cosine_sim and the best_word to the current word (≈3 lines)
if cosine_sim > max_cosine_sim:
max_cosine_sim = cosine_sim
best_word = w
### END CODE HERE ###
return best_word
triads_to_try = [('italy', 'italian', 'spain'), ('india', 'delhi', 'japan'), ('man', 'woman', 'boy'), ('small', 'smaller', 'large')]
for triad in triads_to_try:
print ('{} -> {} :: {} -> {}'.format( *triad, complete_analogy(*triad,word_to_vec_map)))
g = word_to_vec_map['woman'] - word_to_vec_map['man']
print(g)
print ('List of names and their similarities with constructed vector:')
# girls and boys name
name_list = ['john', 'marie', 'sophie', 'ronaldo', 'priya', 'rahul', 'danielle', 'reza', 'katy', 'yasmin']
for w in name_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
print('Other words and their similarities:')
word_list = ['lipstick', 'guns', 'science', 'arts', 'literature', 'warrior','doctor', 'tree', 'receptionist',
'technology', 'fashion', 'teacher', 'engineer', 'pilot', 'computer', 'singer']
for w in word_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
def neutralize(word, g, word_to_vec_map):
Removes the bias of "word" by projecting it on the space orthogonal to the bias axis.
This function ensures that gender neutral words are zero in the gender subspace.
Arguments:
word -- string indicating the word to debias
g -- numpy-array of shape (50,), corresponding to the bias axis (such as gender)
word_to_vec_map -- dictionary mapping words to their corresponding vectors.
Returns:
e_debiased -- neutralized word vector representation of the input "word"
### START CODE HERE ###
# Select word vector representation of "word". Use word_to_vec_map. (≈ 1 line)
e = word_to_vec_map[word]
# Compute e_biascomponent using the formula given above. (≈ 1 line)
e_biascomponent = np.dot(e, g) / np.sum(g * g) * g
# Neutralize e by subtracting e_biascomponent from it
# e_debiased should be equal to its orthogonal projection. (≈ 1 line)
e_debiased = e - e_biascomponent
### END CODE HERE ###
return e_debiased
e = "receptionist"
print("cosine similarity between " + e + " and g, before neutralizing: ", cosine_similarity(word_to_vec_map["receptionist"], g))
e_debiased = neutralize("receptionist", g, word_to_vec_map)
print("cosine similarity between " + e + " and g, after neutralizing: ", cosine_similarity(e_debiased, g))
def equalize(pair, bias_axis, word_to_vec_map):
Debias gender specific words by following the equalize method described in the figure above.
Arguments:
pair -- pair of strings of gender specific words to debias, e.g. ("actress", "actor")
bias_axis -- numpy-array of shape (50,), vector corresponding to the bias axis, e.g. gender
word_to_vec_map -- dictionary mapping words to their corresponding vectors
Returns
e_1 -- word vector corresponding to the first word
e_2 -- word vector corresponding to the second word
### START CODE HERE ###
# Step 1: Select word vector representation of "word". Use word_to_vec_map. (≈ 2 lines)
w1, w2 = pair
e_w1, e_w2 = word_to_vec_map[w1], word_to_vec_map[w2]
# Step 2: Compute the mean of e_w1 and e_w2 (≈ 1 line)
mu = (e_w1 + e_w2) / 2.0
# Step 3: Compute the projections of mu over the bias axis and the orthogonal axis (≈ 2 lines)
mu_B = np.dot(mu, bias_axis) / np.sum(bias_axis ** 2) * bias_axis
mu_orth = mu - mu_B
# Step 4: Use equations (7) and (8) to compute e_w1B and e_w2B (≈2 lines)
e_w1B = np.dot(e_w1, bias_axis) / np.sum(bias_axis ** 2) * bias_axis
e_w2B = np.dot(e_w2, bias_axis) / np.sum(bias_axis ** 2) * bias_axis
# Step 5: Adjust the Bias part of e_w1B and e_w2B using the formulas (9) and (10) given above (≈2 lines)
corrected_e_w1B = np.sqrt(np.abs(1 - np.sum(mu_orth ** 2))) * (e_w1B - mu_B) / np.linalg.norm(e_w1 - mu_orth - mu_B)
corrected_e_w2B = np.sqrt(np.abs(1 - np.sum(mu_orth ** 2))) * (e_w2B - mu_B) / np.linalg.norm(e_w2 - mu_orth - mu_B)
# Step 6: Debias by equalizing e1 and e2 to the sum of their corrected projections (≈2 lines)
e1 = corrected_e_w1B + mu_orth
e2 = corrected_e_w2B + mu_orth
### END CODE HERE ###
return e1, e2
print("cosine similarities before equalizing:")
print("cosine_similarity(word_to_vec_map[\"man\"], gender) = ", cosine_similarity(word_to_vec_map["man"], g))
print("cosine_similarity(word_to_vec_map[\"woman\"], gender) = ", cosine_similarity(word_to_vec_map["woman"], g))
print()
e1, e2 = equalize(("man", "woman"), g, word_to_vec_map)
print("cosine similarities after equalizing:")
print("cosine_similarity(e1, gender) = ", cosine_similarity(e1, g))
print("cosine_similarity(e2, gender) = ", cosine_similarity(e2, g))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: let us summon citlalatonac powers and create synthetic stellar data for K2-100
Step2: We can see that the behavior of the signal is similar to the one obtained analysing the real data set. See Figure 2 of Barragán et al., 2019. Now that we have the stellar variability model, we can add the planetary signal in the RV-like data set.
Step3: At this point we have a model of the three signals created following the same underlying GP $G(t)$. Now it is time to create data taken at the times $\mathbf{t}$ that we can specify.
Step4: Now we have the times $\mathbf{t}$ at which we want to create our observations, we can do this by calling the create_data method.
Step5: Adding white noise to our data
Step6: Adding red noise to our data
Step7: Save the file
|
<ASSISTANT_TASK:>
Python Code:
#Import modules
from __future__ import print_function, division, absolute_import
import numpy as np
#Import citlalatonac from pyaneti_extras; note that pyaneti has to be compiled on your machine
#and pyaneti has to be in your PYTHONPATH, e.g., add the following line to your bashrc file,
#export PYTHONPATH=${PYTHONPATH}:/pathtopyaneti/pyaneti
#replacing pathtopyaneti with the location of pyaneti on your machine
from pyaneti_extras.citlalatonac import citlali, create_real_times
#citlali is the class that creates the spectroscopic-like time-series
#Do all the previous description with one Python command
star = citlali(tmin=7760,tmax=7860,nseries=3,kernel='QPK',kernel_parameters=[31.2,0.558,4.135],
amplitudes=[0.0058,0.0421,0.0242,0.0,0.020,-0.086],seed=1,time_series=['rhk','bis'])
#Let us see how the 3 time-series look
star.plot()
#let us create the lists with the both planet parameters that we will pass to the add_planet attribute
k2100_pars = [7140.71951,0.0106,1.6739035,0,np.pi/2]
#Let us call the add_planet instance for the instrument i1 and let's call the planet 'b'
star.add_planet(k2100_pars,'K2-100b')
#Let us plot the time-series to see if we have added the planetary signal
star.plot()
#Get the times at which K2-100 is observed from La Palma between the time of our synthetic data
t_k2100 = create_real_times(star.time.min(),star.time.max(),ndata=50,air_mass_limit=1.5,
tformat='mjd',star='K2-100',observatory='lapalma')
#Let us create the data at times t
star.create_data(t=t_k2100)
#Let us see how our observations look
star.plot()
#The input vector err has to have one white noise term per each time-series
star.add_white_noise(err=[0.001,0.005,0.050])
star.plot()
star.add_red_noise(se_parameters=[0.001,1])
star.plot()
fname = 'K2100-synthetic.dat'
star.save_data(fname)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Scatterplot matrix of JSON data
Step2: Scatterplot of a DataFrame
|
<ASSISTANT_TASK:>
Python Code:
try:
import pycandela
except ImportError:
# Import hack for when in the pycandela source tree.
import sys
sys.path.append('..')
import pycandela
import requests
data = requests.get('https://raw.githubusercontent.com/vega/vega-datasets/gh-pages/data/iris.json').json()
pycandela.components.ScatterPlotMatrix(
data=data, color='species', fields=['sepalLength', 'sepalWidth', 'petalLength', 'petalWidth'], width=800)
from pandas import DataFrame
df = DataFrame.from_records(data)
pycandela.components.ScatterPlot(data=df, color='species', x='sepalLength', y='sepalWidth')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Every plot starts with a call to ggplot, and then requires adding a geom to specify what kind of plot you'd like to create.
Step2: aes creates an "aesthetic mapping", which maps hail expressions to aspects of the plot. There is a predefined list of aesthetics supported by every geom. Most take an x and y at least.
Step3: Or points
Step4: There are optional aesthetics too. If we want, we could color the points based on whether they're even or odd
Step5: Note that the color aesthetic by default just takes in an expression that evaluates to strings, and it assigns a discrete color to each string.
Step6: But that is coloring the line as well, causing us to end up with interlocking blue and orange lines, which isn't what we want. For that reason, it's possible to define aesthetics that only apply to certain geoms.
Step7: All geoms can take in their own aesthetic mapping, which lets them specify aesthetics specific to them. And geom_point still inherits the x and y aesthetics from the mapping defined in ggplot().
Step8: Let's filter the data to 2007 for our first experiments
Step9: If we want to see how many countries from each continent we have, we can use geom_bar, which just takes in an x aesthetic and then implicitly counts how many values of each x there are.
Step10: To make it a little prettier, let's color per continent as well. We use fill to specify color of shapes (as opposed to color for points and lines. color on a bar chart sets the color of the bar outline.)
Step11: Maybe we instead want to see not the number of countries per continent, but the number of people living on each continent. We can do this with geom_bar as well by specifying a weight.
Step12: Histograms are similar to bar plots, except they break a continuous x axis into bins. Let's import the iris dataset for this.
Step13: Let's make a histogram
Step14: By default histogram plots groups stacked on top of each other, which is not always easy to interpret. We can specify the position argument to histogram to get different behavior. "dodge" puts the bars next to each other
Step15: And "identity" plots them over each other. It helps to set an alpha value to make them slightly transparent in these cases
Step16: Labels and Axes
|
<ASSISTANT_TASK:>
Python Code:
ht = hl.utils.range_table(10)
ht = ht.annotate(squared = ht.idx**2)
fig = ggplot(ht, aes(x=ht.idx, y=ht.squared)) + geom_line()
fig.show()
fig = ggplot(ht, aes(x=ht.idx, y=ht.squared)) + geom_col()
fig.show()
fig = ggplot(ht, aes(x=ht.idx, y=ht.squared)) + geom_point()
fig.show()
fig = ggplot(ht, aes(x=ht.idx, y=ht.squared, color=hl.if_else(ht.idx % 2 == 0, "even", "odd"))) + geom_point()
fig.show()
fig = (ggplot(ht, aes(x=ht.idx, y=ht.squared, color=hl.if_else(ht.idx % 2 == 0, "even", "odd"))) +
geom_line() +
geom_point()
)
fig.show()
fig = (ggplot(ht, aes(x=ht.idx, y=ht.squared)) +
geom_line() +
geom_point(aes(color=hl.if_else(ht.idx % 2 == 0, "even", "odd")))
)
fig.show()
gp = hl.Table.from_pandas(plotly.data.gapminder())
gp.describe()
gp_2007 = gp.filter(gp.year == 2007)
ggplot(gp_2007, aes(x=gp_2007.continent)) + geom_bar()
ggplot(gp_2007, aes(x=gp_2007.continent)) + geom_bar(aes(fill=gp_2007.continent))
ggplot(gp_2007, aes(x=gp_2007.continent)) + geom_bar(aes(fill=gp_2007.continent, weight=gp_2007.pop))
iris = hl.Table.from_pandas(plotly.data.iris())
iris.describe()
ggplot(iris, aes(x=iris.sepal_length, fill=iris.species)) + geom_histogram()
ggplot(iris, aes(x=iris.sepal_length, fill=iris.species)) + geom_histogram(position="dodge")
ggplot(iris, aes(x=iris.sepal_length, fill=iris.species)) + geom_histogram(position="identity", alpha=0.8)
(ggplot(iris, aes(x=iris.sepal_length, fill=iris.species)) +
geom_histogram(position="identity", alpha=0.8) +
xlab("Sepal Length") + ylab("Number of samples") + ggtitle("Sepal length by flower type")
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Stroop Task
Step3: 5. Now, perform the statistical test and report your results. What is your confidence level and your critical statistic value? Do you reject the null hypothesis or fail to reject it? Come to a conclusion in terms of the experiment task. Did the results match up with your expectations?
Step5: Calculate the critical t value
Step7: Calculate the t value
Step9: Confidence Interval
Step10: Our results show that t is within the critical region, meaning
|
<ASSISTANT_TASK:>
Python Code:
import math
import pandas as pd
import scipy.stats as st
from IPython.display import Latex
from IPython.display import Math
from IPython.display import display
%matplotlib inline
path = r'./stroopdata.csv'
df_stroop = pd.read_csv(path)
df_stroop
mu_congruent = round(df_stroop['Congruent'].mean(),4)
mu_incongruent = round(df_stroop['Incongruent'].mean(),4)
text = r"""
\begin{{align}}
\mu_{{congruent}}={}\\
\mu_{{incongruent}}={}
\end{{align}}""".format(mu_congruent, mu_incongruent)
Latex(text)
df_stroop.plot(kind="bar")
df_diff = df_stroop['Incongruent']-df_stroop['Congruent']
mu_D = round(df_diff.mean(),4)
std_D = round(df_diff.std(),4)
text = r"""
\begin{{align}}
\mu_D = {}\\
\sigma_D = {}\\
\end{{align}}""".format(mu_D, std_D)
display(Latex(text))
n = df_diff.count()
df = n-1
alpha = .05
t_critical = round(st.t.ppf(1-alpha,df),3)
text = r"t_{{critical}}=t_{{{},{}}}={}".format(alpha, df, t_critical)
display(Math(text))
# Calculate t
SEM = round(std_D/math.sqrt(n),4)
t = round(mu_D/SEM,3)
text = r"t = \frac{{\mu_D}}{{SEM}} = \frac{{{}}}{{{}}} = {}".format(mu_D, SEM, t)
display(Math(text))
two_tailed_t_critical = round(st.t.ppf(alpha/2,df),3)
m_error = -1*round(two_tailed_t_critical * SEM,4)
text = r"""
\begin{{align}}
{} \pm {}\\
[{},{}]\\
\end{{align}}""".format(mu_D, m_error, mu_D-m_error, mu_D+m_error)
display(Latex(text))
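# Added for illustration (not in the original write-up): a standardized effect size for
# the paired differences, Cohen's d = mean of differences / SD of differences, reusing
# the mu_D and std_D values computed above.
cohens_d = round(mu_D / std_D, 3)
display(Math(r"d = \frac{{\mu_D}}{{\sigma_D}} = {}".format(cohens_d)))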
import IPython.display as disp
def css_styling():
styles = open("../css/custom.css", "r").read()
return disp.HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Some digging into the cellpy structure
Step2: Defining filenames etc
Step3: Loading and looking at what we got
Step4: dfsummary_made is wrong in the cellpyfile
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
import matplotlib.pyplot as plt
%matplotlib inline
import cellpy.parameters.prms as prms
from cellpy import cellreader
from cellpy import log
log.setup_logging(default_level="DEBUG")
# print settings
prm_dicts = [d for d in dir(prms) if not d.startswith("_")]
for d in prm_dicts:
if isinstance(getattr(prms, d), (dict,)) and not d.startswith("excel"):
print(f"\n--------------------{d}----------------------")
for variable in getattr(prms, d):
print(f"{variable}: ", end="")
print(eval("prms.%s['%s']" % (d, variable)))
current_file_path = os.getcwd()
relative_test_data_dir = "../testdata"
test_data_dir = os.path.abspath(os.path.join(current_file_path, relative_test_data_dir))
test_data_dir_raw = os.path.join(test_data_dir, "data")
test_res_file = "20160805_test001_45_cc_01.res"
test_res_file_full = os.path.join(test_data_dir_raw, test_res_file)
test_data_dir_out = os.path.join(test_data_dir, "out")
test_data_dir_cellpy = os.path.join(test_data_dir, "hdf5")
test_cellpy_file = "20160805_test001_45_cc.h5"
test_cellpy_file_tmp = "tmpfile.h5"
test_cellpy_file_full = os.path.join(test_data_dir_cellpy, test_cellpy_file)
test_cellpy_file_tmp_full = os.path.join(test_data_dir_cellpy, test_cellpy_file_tmp)
test_run_name = "20160805_test001_45_cc"
assert os.path.isfile(test_res_file_full)
assert os.path.isfile(test_cellpy_file_full)
assert os.path.isdir(test_data_dir)
# os.listdir(test_data_dir)
celldata = cellreader.CellpyData()
# celldata.from_raw(test_res_file_full)
# celldata.set_mass(0.34)
celldata.load(test_cellpy_file_full)
data = celldata.cell
for x in dir(data):
if not x.startswith("__"):
if not x in ["summary", "raw", "steps"]:
print(f"{x}: {getattr(data,x)}")
data.summary.head()
data.raw.head()
data.steps.head()
celldata.make_summary()
print(data.summary_made)
# lets see what cycles we have
cycles = celldata.get_cycle_numbers()
print(cycles)
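# Exploratory helper (added for illustration): list summary columns that look like
# capacity columns and plot them. Exact column names depend on the cellpy version,
# so this only pattern-matches on "cap" rather than assuming specific headers.
cap_cols = [c for c in data.summary.columns if "cap" in str(c).lower()]
print(f"capacity-like summary columns: {cap_cols}")
if cap_cols:
    data.summary[cap_cols].plot()
    plt.xlabel("summary row")
    plt.show()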
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TensorFlow Probability on JAX
Step2: We can install TFP on JAX with the latest nightly builds of TFP.
Step3: Let's import some useful Python libraries.
Step4: Let's also import some basic JAX functionality.
Step5: Importing TFP on JAX
Step6: Demo
Step7: We can define the model using tfd.JointDistributionCoroutine. We'll put standard normal priors on both the weights and the bias term then write a target_log_prob function that pins the sampled labels to the data.
Step8: We sample from dist to produce an initial state for MCMC. We can then define a function that takes in a random key and an initial state, and produces 500 samples from a No-U-Turn-Sampler (NUTS). Note that we can use JAX transformations like jit to compile our NUTS sampler using XLA.
Step9: Let's use our samples to perform Bayesian model averaging (BMA) by averaging the predicted probabilies of each set of weights.
Step10: We can vmap(classifier_probs) over the set of samples to get the predicted class probabilities for each of our samples. We then compute the average accuracy across each sample, and the accuracy from Bayesian model averaging.
Step11: Looks like BMA reduces our error rate by almost a third!
Step12: Sampling from a distribution requires explicitly passing in a PRNGKey (or list of integers) as the seed keyword argument. Failing to explicitly pass in a seed will throw an error.
Step13: The shape semantics for distributions remain the same in JAX, where distributions will each have an event_shape and a batch_shape and drawing many samples will add additional sample_shape dimensions.
Step14: On the other hand, a tfd.Normal parameterized with vectors will have a scalar event shape and vector batch shape.
Step15: The semantics of taking log_prob of samples works the same in JAX too.
Step16: Because JAX DeviceArrays are compatible with libraries like NumPy and Matplotlib, we can feed samples directly into a plotting function.
Step17: Distribution methods are compatible with JAX transformations.
Step18: Because TFP distributions are registered as JAX pytree nodes, we can write functions with distributions as inputs or outputs and transform them using jit, but they are not yet supported as arguments to vmap-ed functions.
Step19: Transformed distributions
Step20: Joint distributions
Step21: Other distributions
Step22: Hidden Markov models are also supported.
Step23: A few distributions like PixelCNN are not supported yet due to strict dependencies on TensorFlow or XLA incompatibilities.
Step24: Bijectors are compatible with JAX transformations like jit, grad and vmap.
Step25: Some bijectors, like RealNVP and FFJORD are not yet supported.
Step26: Unlike TFP on TF, we are required to pass a PRNGKey into sample_chain using the seed keyword argument.
Step27: To run multiple chains, we can either pass a batch of states into sample_chain or use vmap (though we have not yet explored performance differences between the two approaches).
Step28: Optimizers
Step29: BFGS can find the minimum of this loss.
Step30: So can L-BFGS.
Step31: To vmap L-BFGS, let's set up a function that optimizes the loss for a single starting point.
Step32: Caveats
Step33: Random functions in JAX consume a key to deterministically produce a random variate, meaning they should not be used again. For example, we can use key to sample a normally distributed value, but we should not use key again elsewhere. Furthermore, passing the same value into random.normal will produce the same value.
Step34: So how do we ever draw multiple samples from a single key? The answer is key splitting. The basic idea is that we can split a PRNGKey into multiple, and each of the new keys can be treated as an independent source of randomness.
Step35: Key splitting is deterministic but is chaotic, so each new key can now be used to draw a distinct random sample.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip uninstall tensorflow -y -q
!pip install -Uq tfp-nightly[jax] > /dev/null
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn import datasets
sns.set(style='white')
import jax.numpy as jnp
from jax import grad
from jax import jit
from jax import random
from jax import value_and_grad
from jax import vmap
from tensorflow_probability.substrates import jax as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
tfpk = tfp.math.psd_kernels
iris = datasets.load_iris()
features, labels = iris['data'], iris['target']
num_features = features.shape[-1]
num_classes = len(iris.target_names)
Root = tfd.JointDistributionCoroutine.Root
def model():
w = yield Root(tfd.Sample(tfd.Normal(0., 1.),
sample_shape=(num_features, num_classes)))
b = yield Root(
tfd.Sample(tfd.Normal(0., 1.), sample_shape=(num_classes,)))
logits = jnp.dot(features, w) + b
yield tfd.Independent(tfd.Categorical(logits=logits),
reinterpreted_batch_ndims=1)
dist = tfd.JointDistributionCoroutine(model)
def target_log_prob(*params):
return dist.log_prob(params + (labels,))
init_key, sample_key = random.split(random.PRNGKey(0))
init_params = tuple(dist.sample(seed=init_key)[:-1])
@jit
def run_chain(key, state):
kernel = tfp.mcmc.NoUTurnSampler(target_log_prob, 1e-3)
return tfp.mcmc.sample_chain(500,
current_state=state,
kernel=kernel,
trace_fn=lambda _, results: results.target_log_prob,
num_burnin_steps=500,
seed=key)
states, log_probs = run_chain(sample_key, init_params)
plt.figure()
plt.plot(log_probs)
plt.ylabel('Target Log Prob')
plt.xlabel('Iterations of NUTS')
plt.show()
def classifier_probs(params):
dists, _ = dist.sample_distributions(seed=random.PRNGKey(0),
value=params + (None,))
return dists[-1].distribution.probs_parameter()
all_probs = jit(vmap(classifier_probs))(states)
print('Average accuracy:', jnp.mean(all_probs.argmax(axis=-1) == labels))
print('BMA accuracy:', jnp.mean(all_probs.mean(axis=0).argmax(axis=-1) == labels))
dist = tfd.Normal(0., 1.)
print(dist.log_prob(0.))
tfd.Normal(0., 1.).sample(seed=random.PRNGKey(0))
dist = tfd.MultivariateNormalDiag(
loc=jnp.zeros(5),
scale_diag=jnp.ones(5)
)
print('Event shape:', dist.event_shape)
print('Batch shape:', dist.batch_shape)
dist = tfd.Normal(
loc=jnp.ones(5),
scale=jnp.ones(5),
)
print('Event shape:', dist.event_shape)
print('Batch shape:', dist.batch_shape)
dist = tfd.Normal(jnp.zeros(5), jnp.ones(5))
s = dist.sample(sample_shape=(10, 2), seed=random.PRNGKey(0))
print(dist.log_prob(s).shape)
dist = tfd.Independent(tfd.Normal(jnp.zeros(5), jnp.ones(5)), 1)
s = dist.sample(sample_shape=(10, 2), seed=random.PRNGKey(0))
print(dist.log_prob(s).shape)
sns.distplot(tfd.Normal(0., 1.).sample(1000, seed=random.PRNGKey(0)))
plt.show()
sns.distplot(jit(vmap(lambda key: tfd.Normal(0., 1.).sample(seed=key)))(
random.split(random.PRNGKey(0), 2000)))
plt.show()
x = jnp.linspace(-5., 5., 100)
plt.plot(x, jit(vmap(grad(tfd.Normal(0., 1.).prob)))(x))
plt.show()
@jit
def random_distribution(key):
loc_key, scale_key = random.split(key)
loc, log_scale = random.normal(loc_key), random.normal(scale_key)
return tfd.Normal(loc, jnp.exp(log_scale))
random_dist = random_distribution(random.PRNGKey(0))
print(random_dist.mean(), random_dist.variance())
dist = tfd.TransformedDistribution(
tfd.Normal(0., 1.),
tfb.Sigmoid()
)
sns.distplot(dist.sample(1000, seed=random.PRNGKey(0)))
plt.show()
dist = tfd.JointDistributionSequential([
tfd.Normal(0., 1.),
lambda x: tfd.Normal(x, 1e-1)
])
plt.scatter(*dist.sample(1000, seed=random.PRNGKey(0)), alpha=0.5)
plt.show()
joint = tfd.JointDistributionNamed(dict(
e= tfd.Exponential(rate=1.),
n= tfd.Normal(loc=0., scale=2.),
m=lambda n, e: tfd.Normal(loc=n, scale=e),
x=lambda m: tfd.Sample(tfd.Bernoulli(logits=m), 12),
))
joint.sample(seed=random.PRNGKey(0))
Root = tfd.JointDistributionCoroutine.Root
def model():
e = yield Root(tfd.Exponential(rate=1.))
n = yield Root(tfd.Normal(loc=0, scale=2.))
m = yield tfd.Normal(loc=n, scale=e)
x = yield tfd.Sample(tfd.Bernoulli(logits=m), 12)
joint = tfd.JointDistributionCoroutine(model)
joint.sample(seed=random.PRNGKey(0))
k1, k2, k3 = random.split(random.PRNGKey(0), 3)
observation_noise_variance = 0.01
f = lambda x: jnp.sin(10*x[..., 0]) * jnp.exp(-x[..., 0]**2)
observation_index_points = random.uniform(
k1, [50], minval=-1.,maxval= 1.)[..., jnp.newaxis]
observations = f(observation_index_points) + tfd.Normal(
loc=0., scale=jnp.sqrt(observation_noise_variance)).sample(seed=k2)
index_points = jnp.linspace(-1., 1., 100)[..., jnp.newaxis]
kernel = tfpk.ExponentiatedQuadratic(length_scale=0.1)
gprm = tfd.GaussianProcessRegressionModel(
kernel=kernel,
index_points=index_points,
observation_index_points=observation_index_points,
observations=observations,
observation_noise_variance=observation_noise_variance)
samples = gprm.sample(10, seed=k3)
for i in range(10):
plt.plot(index_points, samples[i], alpha=0.5)
plt.plot(observation_index_points, observations, marker='o', linestyle='')
plt.show()
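# Added for illustration: overlay the posterior predictive mean of the GP regression
# model on the sampled functions above; gprm.mean() evaluates the mean at index_points.
posterior_mean = gprm.mean()
plt.plot(index_points, posterior_mean, color='k', linewidth=2, label='posterior mean')
plt.plot(observation_index_points, observations, marker='o', linestyle='')
plt.legend()
plt.show()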
initial_distribution = tfd.Categorical(probs=[0.8, 0.2])
transition_distribution = tfd.Categorical(probs=[[0.7, 0.3],
[0.2, 0.8]])
observation_distribution = tfd.Normal(loc=[0., 15.], scale=[5., 10.])
model = tfd.HiddenMarkovModel(
initial_distribution=initial_distribution,
transition_distribution=transition_distribution,
observation_distribution=observation_distribution,
num_steps=7)
print(model.mean())
print(model.log_prob(jnp.zeros(7)))
print(model.sample(seed=random.PRNGKey(0)))
tfb.Exp().inverse(1.)
bij = tfb.Shift(1.)(tfb.Scale(3.))
print(bij.forward(jnp.ones(5)))
print(bij.inverse(jnp.ones(5)))
b = tfb.FillScaleTriL(diag_bijector=tfb.Exp(), diag_shift=None)
print(b.forward(x=[0., 0., 0.]))
print(b.inverse(y=[[1., 0], [.5, 2]]))
b = tfb.Chain([tfb.Exp(), tfb.Softplus()])
# or:
# b = tfb.Exp()(tfb.Softplus())
print(b.forward(-jnp.ones(5)))
jit(vmap(tfb.Exp().inverse))(jnp.arange(4.))
x = jnp.linspace(0., 1., 100)
plt.plot(x, jit(grad(lambda x: vmap(tfb.Sigmoid().inverse)(x).sum()))(x))
plt.show()
target_log_prob = tfd.MultivariateNormalDiag(jnp.zeros(2), jnp.ones(2)).log_prob
def run_chain(key, state):
kernel = tfp.mcmc.NoUTurnSampler(target_log_prob, 1e-1)
return tfp.mcmc.sample_chain(1000,
current_state=state,
kernel=kernel,
trace_fn=lambda _, results: results.target_log_prob,
seed=key)
states, log_probs = jit(run_chain)(random.PRNGKey(0), jnp.zeros(2))
plt.figure()
plt.scatter(*states.T, alpha=0.5)
plt.figure()
plt.plot(log_probs)
plt.show()
states, log_probs = jit(run_chain)(random.PRNGKey(0), jnp.zeros([10, 2]))
plt.figure()
for i in range(10):
plt.scatter(*states[:, i].T, alpha=0.5)
plt.figure()
for i in range(10):
plt.plot(log_probs[:, i], alpha=0.5)
plt.show()
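# Added for illustration: a simple convergence diagnostic across the 10 chains.
# Potential scale reduction (R-hat) values close to 1 suggest the chains mix well;
# `states` has shape [num_samples, num_chains, num_dims], so one chain dimension.
rhat = tfp.mcmc.potential_scale_reduction(states, independent_chain_ndims=1)
print('R-hat per dimension:', rhat)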
minimum = jnp.array([1.0, 1.0]) # The center of the quadratic bowl.
scales = jnp.array([2.0, 3.0]) # The scales along the two axes.
# The objective function and the gradient.
def quadratic_loss(x):
return jnp.sum(scales * jnp.square(x - minimum))
start = jnp.array([0.6, 0.8]) # Starting point for the search.
optim_results = tfp.optimizer.bfgs_minimize(
value_and_grad(quadratic_loss), initial_position=start, tolerance=1e-8)
# Check that the search converged
assert(optim_results.converged)
# Check that the argmin is close to the actual value.
np.testing.assert_allclose(optim_results.position, minimum)
# Print out the total number of function evaluations it took. Should be 5.
print("Function evaluations: %d" % optim_results.num_objective_evaluations)
optim_results = tfp.optimizer.lbfgs_minimize(
value_and_grad(quadratic_loss), initial_position=start, tolerance=1e-8)
# Check that the search converged
assert(optim_results.converged)
# Check that the argmin is close to the actual value.
np.testing.assert_allclose(optim_results.position, minimum)
# Print out the total number of function evaluations it took. Should be 5.
print("Function evaluations: %d" % optim_results.num_objective_evaluations)
def optimize_single(start):
return tfp.optimizer.lbfgs_minimize(
value_and_grad(quadratic_loss), initial_position=start, tolerance=1e-8)
all_results = jit(vmap(optimize_single))(
random.normal(random.PRNGKey(0), (10, 2)))
assert all(all_results.converged)
for i in range(10):
np.testing.assert_allclose(optim_results.position[i], minimum)
print("Function evaluations: %s" % all_results.num_objective_evaluations)
key = random.PRNGKey(0) # Creates a key with value [0, 0]
print(key)
print(random.normal(key))
key1, key2 = random.split(key, num=2)
print(key1, key2)
print(random.normal(key1), random.normal(key2))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data goes all the way back to 1947 and is updated quarterly.
Step2: Let's go plot for fun. 275 rows are definitely small enough to just put right into a Pandas Dataframe
|
<ASSISTANT_TASK:>
Python Code:
# import the dataset
from quantopian.interactive.data.quandl import fred_gdp
# Since this data is public domain and provided by Quandl for free, there is no _free version of this
# data set, as found in the premium sets. This import gets you the entirety of this data set.
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
import matplotlib.pyplot as plt
fred_gdp.sort('asof_date')
fred_gdp.count()
gdp_df = odo(fred_gdp, pd.DataFrame)
gdp_df.plot(x='asof_date', y='value')
plt.xlabel("As Of Date (asof_date)")
plt.ylabel("GDP (billions)")
plt.title("United States GDP")
plt.legend().set_visible(False)
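# Added for illustration: quarter-over-quarter growth computed from the same DataFrame,
# using the 'asof_date' and 'value' columns already used in the plot above.
gdp_growth = gdp_df.sort_values('asof_date').copy()
gdp_growth['qoq_growth_pct'] = gdp_growth['value'].pct_change() * 100
gdp_growth.plot(x='asof_date', y='qoq_growth_pct')
plt.xlabel("As Of Date (asof_date)")
plt.ylabel("Quarter-over-quarter growth (%)")
plt.title("United States GDP growth")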
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: USE-CASE
Step4: In the example above, like most of what will follow, we used the MC way to evaluate the p-value.
Step7: Is dice crooked ?
Step10: USE-CASE
Step13: USE-CASE
Step16: USE-CASE
Step17: we see that the p-value is smaller using the chi2 statistic as test statistic.
Step18: lets focus only in the bin 9 with signal value = 5
Step19: a point to keep in mind is that the background comes with uncertainty so we eventually have a range of p-values
|
<ASSISTANT_TASK:>
Python Code:
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# we have:
n_h = 140
n_t = 110
observations = (n_h, n_t)
n_observations = n_h + n_t
print observations, n_observations,
# We define the null hypothesis and the test statistic
def run_null_hypothesis(n_observations):
the model of Null hypothesis
sample = [random.choice('HT') for _ in range(n_observations)]
df = pd.DataFrame(sample)
value_counts = df[0].value_counts()
n_heads = value_counts['H']
n_tails = value_counts['T']
return (n_heads, n_tails)
def test_statistic((n_heads, n_tails)):
Computes the test statistic
return abs(n_heads - n_tails)
test_stat_H0 = test_statistic(run_null_hypothesis(n_observations))
test_stat_H1 = test_statistic(observations)
print "Test Statistic for Null Hypothesis H0:", test_stat_H0
print "Test Statistic for Hypothesis H1:", test_stat_H1
# we perform iterations for good statistics
N_ITER = 1000
test_stat_H0_v = [test_statistic(run_null_hypothesis(n_observations)) for _ in range(N_ITER)]
p_value = 1. * sum([1 for test_stat_H0 in test_stat_H0_v if test_stat_H0 >= test_stat_H1])/N_ITER
print "The p-value is: ", p_value
from scipy import stats
# Exact (frequentist) one-sided p-value: P(X >= 140) for X ~ Binomial(250, 0.5)
p = 0
for i in range(0, 140):
    p += stats.distributions.binom.pmf(i, 250, 0.5)
pval = 1 - p
print "The p-value using the frequentist approach is: ", pval
observations = {1:8, 2:9, 3:19, 4:5, 5:8, 6:11}
observations_frequencies = np.array(observations.values())
n_dice_drops = np.sum(observations_frequencies)
print n_dice_drops
def run_null_hypothesis(n_dice_drops):
the model of Null hypothesis
dice_values = [1, 2, 3, 4, 5, 6]
rolls = np.random.choice(dice_values, n_dice_drops, replace=True)
return np.array(dict(pd.DataFrame(rolls)[0].value_counts()).values())
def test_statistic(dice_frequencies, n_dice_drops):
Computes the test statistic
expected_frequencies = np.ones(6) * n_dice_drops / 6.
return sum(abs(dice_frequencies - expected_frequencies))
test_stat_H0 = test_statistic(run_null_hypothesis(n_dice_drops), n_dice_drops)
test_stat_H1 = test_statistic(observations_frequencies, n_dice_drops)
print "Test Statistic for Null Hypothesis H0:", test_stat_H0
print "Test Statistic for Hypothesis H1:", test_stat_H1
# we perform iterations for good statistics
N_ITER = 1000
test_stat_H0_v = [test_statistic(run_null_hypothesis(n_dice_drops), n_dice_drops) for _ in range(N_ITER)]
p_value = 1. * sum([1 for test_stat_H0 in test_stat_H0_v if test_stat_H0 >= test_stat_H1])/N_ITER
print "The p-value is: ", p_value
d1 = np.random.normal(38.601, 1.42, 1000)
d2 = np.random.normal(38.523, 1.42, 1000)
plt.figure(1)
plt.subplot(211)
count, bins, ignored = plt.hist(d1, 30, normed=True)
plt.figure(1)
plt.subplot(211)
count, bins, ignored = plt.hist(d2, 30, normed=True)
# plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *
# np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
# linewidth=2, color='r')
plt.show()
# one way to model the null hypothesis is by permutations, shuffle values of the two distributions and treat them as one
d_all = [i for i in d1] + [ i for i in d2]
np.random.shuffle(d_all)
count, bins, ignored = plt.hist(d_all, 30, normed=True)
plt.show()
def run_null_hypothesis(d1, d2):
the model of Null hypothesis - treat the two distributions as one
d_all = [i for i in d1] + [ i for i in d2]
np.random.shuffle(d_all)
return (d_all[:len(d1)], d_all[len(d1):])
def test_statistic(d1, d2):
Computes the test statistic
test_stat = abs(np.mean(d1) - np.mean(d2))
return test_stat
test_stat_H0 = test_statistic(*run_null_hypothesis(d1, d2))
test_stat_H1 = test_statistic(d1, d2)
print "Test Statistic for Null Hypothesis H0:", test_stat_H0
print "Test Statistic for Hypothesis H1:", test_stat_H1
# we perform iterations for good statistics
N_ITER = 1000
test_stat_H0_v = [test_statistic(*run_null_hypothesis(d1, d2)) for _ in range(N_ITER)]
p_value = 1. * sum([1 for test_stat_H0 in test_stat_H0_v if test_stat_H0 >= test_stat_H1])/N_ITER
print "The p-value is: ", p_value
# The p-value here is not small.
# It means that we expect by chance to see an effect as big as the observed about 80% of time.
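# Added cross-check (illustrative): the classical two-sample t-test should give a
# similarly unimpressive p-value for these two samples.
from scipy import stats
t_stat, t_pval = stats.ttest_ind(d1, d2)
print "t-test statistic:", t_stat, " p-value:", t_pval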
data = np.random.multivariate_normal([0, 0], [[1, .75],[.75, 1]], 1000)
x = data[:, 0]
y = data[:, 1]
plt.scatter(x, y)
# we can make the null hypothesis model just by shuffling the data of one variable
x2 = x.copy()
np.random.shuffle(x2)
plt.scatter(x2, y)
def run_null_hypothesis(x, y):
the model of Null hypothesis - treat the two distributions as one
x2 = x.copy()
np.random.shuffle(x2)
return (x2, y)
def test_statistic(x, y):
Computes the test statistic
test_stat = abs(np.corrcoef(x, y)[0][1])
return test_stat
test_stat_H0 = test_statistic(*run_null_hypothesis(x, y))
test_stat_H1 = test_statistic(x, y)
print "Test Statistic for Null Hypothesis H0:", test_stat_H0
print "Test Statistic for Hypothesis H1:", test_stat_H1
# we perform iterations for good statistics
N_ITER = 1000
test_stat_H0_v = [test_statistic(*run_null_hypothesis(x, y)) for _ in range(N_ITER)]
p_value = 1. * sum([1 for test_stat_H0 in test_stat_H0_v if test_stat_H0 >= test_stat_H1])/N_ITER
print "The p-value is: ", p_value
observations = {1:8, 2:9, 3:19, 4:5, 5:8, 6:11}
observations_frequencies = np.array(observations.values())
n_dice_drops = np.sum(observations_frequencies)
print n_dice_drops
def run_null_hypothesis(n_dice_drops):
the model of Null hypothesis
dice_values = [1, 2, 3, 4, 5, 6]
rolls = np.random.choice(dice_values, n_dice_drops, replace=True)
return np.array(dict(pd.DataFrame(rolls)[0].value_counts()).values())
def test_statistic(dice_frequencies, n_dice_drops):
Computes the test statistic
expected_frequencies = np.ones(6) * n_dice_drops / 6.
return sum( (dice_frequencies - expected_frequencies)**2 / expected_frequencies)
test_stat_H0 = test_statistic(run_null_hypothesis(n_dice_drops), n_dice_drops)
test_stat_H1 = test_statistic(observations_frequencies, n_dice_drops)
print "Test Statistic for Null Hypothesis H0:", test_stat_H0
print "Test Statistic for Hypothesis H1:", test_stat_H1
# we perform iterations for good statistics
N_ITER = 1000
test_stat_H0_v = [test_statistic(run_null_hypothesis(n_dice_drops), n_dice_drops) for _ in range(N_ITER)]
p_value = 1. * sum([1 for test_stat_H0 in test_stat_H0_v if test_stat_H0 >= test_stat_H1])/N_ITER
print "The p-value is: ", p_value
# Lets say we have already a histogram with the bins values below:
x_obs = {1:1, 2:2, 3:2, 4:0, 5:3, 6:1, 7:1, 8:2, 9:5, 10:6,
11:1, 12:0, 13:1, 14:2, 15:1, 16:3, 17:1, 18:0, 19:1, 20:0}
x_bgr = {1:1.2, 2:1.8, 3:1.8, 4:1.9, 5:1.9, 6:2, 7:2, 8:2, 9:1.8, 10:1.8,
11:1.7, 12:1.7, 13:1.7, 14:1.6, 15:1.6, 16:1.6, 17:1.5, 18:1.5, 19:1.1, 20:0.3}
_ = plt.bar(x_obs.keys(), x_obs.values(), color='b')
_ = plt.bar(x_bgr.keys(), x_bgr.values(), alpha=0.6, color='r')
# let's say that the red is what we know as background (e.g., from Monte Carlo) and blue is the observed signal.
# Is this signal statistical significant ?
# The H0 would say that both those distributions come from the same process.
# So we can construct the H0 model by adding those values and then splitting them in two parts.
from scipy import stats
pmf_values = []
N_obs = 5
N_bgr = 1.8
for i in range(0, N_obs):
pmf_values.append(stats.distributions.poisson.pmf(i, N_bgr))
pval = 1-np.sum(pmf_values)
print 'The p-value is ', pval
from scipy import stats
pmf_values = []
N_obs = 11
N_bgr = 3.2
for i in range(0, N_obs):
pmf_values.append(stats.distributions.poisson.pmf(i, N_bgr))
pval = 1-np.sum(pmf_values)
print 'The p-value is ', pval
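# Added for illustration: since the background estimate itself is uncertain, scan an
# assumed +/- 0.5 band around N_bgr and report the resulting range of p-values.
# The band is a placeholder; use the real background uncertainty if it is known.
from scipy import stats
N_obs = 11
for b in [2.7, 3.2, 3.7]:
    p_tail = 1 - sum(stats.distributions.poisson.pmf(i, b) for i in range(0, N_obs))
    print "background =", b, "-> p-value =", p_tail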
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 2 - Outline of the Assignment
Step4: Expected output
Step6: Expected output
Step8: Expected output
Step10: Expected output
Step12: <table style="width
Step14: Expected Output
Step16: Expected Output
Step18: Expected output with sigmoid
Step20: Expected Output
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v2 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
parameters -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
np.random.seed(1)
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h, n_x)*0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h)*0.01
b2 = np.zeros((n_y, 1))
### END CODE HERE ###
assert(W1.shape == (n_h, n_x))
assert(b1.shape == (n_h, 1))
assert(W2.shape == (n_y, n_h))
assert(b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters = initialize_parameters(2,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1])*0.01
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
### END CODE HERE ###
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
# GRADED FUNCTION: linear_forward
def linear_forward(A, W, b):
Implement the linear part of a layer's forward propagation.
Arguments:
A -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
Returns:
Z -- the input of the activation function, also called pre-activation parameter
cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
### START CODE HERE ### (≈ 1 line of code)
Z = np.dot(W, A) + b
### END CODE HERE ###
assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
print("Z = " + str(Z))
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
Implement the forward propagation for the LINEAR->ACTIVATION layer
Arguments:
A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
A -- the output of the activation function, also called the post-activation value
cache -- a python dictionary containing "linear_cache" and "activation_cache";
stored for computing the backward pass efficiently
if activation == "sigmoid":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = sigmoid(Z)
### END CODE HERE ###
elif activation == "relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = relu(Z)
### END CODE HERE ###
assert (A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
A_prev, W, b = linear_activation_forward_test_case()
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid")
print("With sigmoid: A = " + str(A))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu")
print("With ReLU: A = " + str(A))
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
parameters -- output of initialize_parameters_deep()
Returns:
AL -- last post-activation value
caches -- list of caches containing:
every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2)
the cache of linear_sigmoid_forward() (there is one, indexed L-1)
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, L):
A_prev = A
### START CODE HERE ### (≈ 2 lines of code)
A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)],
parameters['b' + str(l)], activation = 'relu')
caches.append(cache)
### END CODE HERE ###
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
### START CODE HERE ### (≈ 2 lines of code)
AL, cache = linear_activation_forward(A, parameters['W' + str(L)],
parameters['b' + str(L)], activation = 'sigmoid')
caches.append(cache)
### END CODE HERE ###
assert(AL.shape == (1,X.shape[1]))
return AL, caches
X, parameters = L_model_forward_test_case()
AL, caches = L_model_forward(X, parameters)
print("AL = " + str(AL))
print("Length of caches list = " + str(len(caches)))
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
Implement the cost function defined by equation (7).
Arguments:
AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns:
cost -- cross-entropy cost
m = Y.shape[1]
# Compute loss from aL and y.
### START CODE HERE ### (≈ 1 lines of code)
cost = - np.mean(Y * np.log(AL) + (1-Y) * np.log(1-AL))
### END CODE HERE ###
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
Y, AL = compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
Implement the linear portion of backward propagation for a single layer (layer l)
Arguments:
dZ -- Gradient of the cost with respect to the linear output (of current layer l)
cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
A_prev, W, b = cache
m = A_prev.shape[1]
### START CODE HERE ### (≈ 3 lines of code)
dW = np.dot(dZ, A_prev.T) / m
db = np.mean(dZ, axis = 1, keepdims=True)
dA_prev = np.dot(W.T, dZ)
### END CODE HERE ###
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (db.shape == b.shape)
return dA_prev, dW, db
# Set up some test inputs
dZ, linear_cache = linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
Implement the backward propagation for the LINEAR->ACTIVATION layer.
Arguments:
dA -- post-activation gradient for current layer l
cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
linear_cache, activation_cache = cache
if activation == "relu":
### START CODE HERE ### (≈ 2 lines of code)
dZ = relu_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
elif activation == "sigmoid":
### START CODE HERE ### (≈ 2 lines of code)
dZ = sigmoid_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
return dA_prev, dW, db
AL, linear_activation_cache = linear_activation_backward_test_case()
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "sigmoid")
print ("sigmoid:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db) + "\n")
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "relu")
print ("relu:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
### START CODE HERE ### (1 line of code)
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))
### END CODE HERE ###
# Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "AL, Y, caches". Outputs: "grads["dAL"], grads["dWL"], grads["dbL"]
### START CODE HERE ### (approx. 2 lines)
current_cache = caches[-1]
grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache , 'sigmoid')
### END CODE HERE ###
for l in reversed(range(L - 1)):
# lth layer: (RELU -> LINEAR) gradients.
# Inputs: "grads["dA" + str(l + 2)], caches". Outputs: "grads["dA" + str(l + 1)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)]
### START CODE HERE ### (approx. 5 lines)
current_cache = caches[l]
dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l + 2)], current_cache , 'relu')
grads["dA" + str(l + 1)] = dA_prev_temp
grads["dW" + str(l + 1)] = dW_temp
grads["db" + str(l + 1)] = db_temp
### END CODE HERE ###
return grads
AL, Y_assess, caches = L_model_backward_test_case()
grads = L_model_backward(AL, Y_assess, caches)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dA1 = "+ str(grads["dA1"]))
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate):
Update parameters using gradient descent
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients, output of L_model_backward
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
### START CODE HERE ### (≈ 3 lines of code)
for l in range(L):
parameters["W" + str(l+1)] -= grads['dW' + str(l+1)] * learning_rate
parameters["b" + str(l+1)] -= grads['db' + str(l+1)] * learning_rate
### END CODE HERE ###
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads, 0.1)
print ("W1 = "+ str(parameters["W1"]))
print ("b1 = "+ str(parameters["b1"]))
print ("W2 = "+ str(parameters["W2"]))
print ("b2 = "+ str(parameters["b2"]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Restart the kernel
Step2: Set up your Google Cloud project
Step3: Otherwise, set your project ID here.
Step4: Set project ID
Step5: Timestamp
Step6: Create a Cloud Storage bucket
Step7: Only if your bucket doesn't already exist
Step8: Finally, validate access to your Cloud Storage bucket by examining its contents
Step9: Import libraries and define constants
Step10: Write Dockerfile
Step11: Create training application code
Step16: In the next cell, you write the contents of the training script, task.py. This file downloads the horses or humans dataset from TensorFlow datasets and trains a tf.keras functional model using MirroredStrategy from the tf.distribute module.
Step17: Build the Container
Step18: Create and run hyperparameter tuning job on Vertex AI
Step19: Create a CustomJob.
Step20: Then, create and run a HyperparameterTuningJob.
Step21: It will take nearly 50 minutes to complete the job successfully.
|
<ASSISTANT_TASK:>
Python Code:
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
# Install necessary dependencies
! pip3 install {USER_FLAG} --upgrade google-cloud-aiplatform
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
import os
PROJECT_ID = "qwiklabs-gcp-00-b9e7121a76ba" # Replace your Project ID here
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "qwiklabs-gcp-00-b9e7121a76ba" # Replace your Project ID here
! gcloud config set project $PROJECT_ID
# Import necessary librarary
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
BUCKET_URI = "gs://qwiklabs-gcp-00-b9e7121a76ba" # Replace your Bucket name here
REGION = "us-central1" # @param {type:"string"}
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://qwiklabs-gcp-00-b9e7121a76ba": # Replace your Bucket name here
BUCKET_URI = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
print(BUCKET_URI)
# Create your bucket
! gsutil mb -l $REGION $BUCKET_URI
# Give access to your Cloud Storage bucket
! gsutil ls -al $BUCKET_URI
# Import necessary libraries
import os
import sys
from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt
%%writefile Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-gpu.2-5
WORKDIR /
# Installs hypertune library
RUN pip install cloudml-hypertune
# Copies the trainer code to the docker image.
COPY trainer /trainer
# Sets up the entry point to invoke the trainer.
ENTRYPOINT ["python", "-m", "trainer.task"]
# Create trainer directory
! mkdir trainer
%%writefile trainer/task.py
import argparse
import hypertune
import tensorflow as tf
import tensorflow_datasets as tfds
def get_args():
Parses args. Must include all hyperparameters you want to tune.
parser = argparse.ArgumentParser()
parser.add_argument(
'--learning_rate', required=True, type=float, help='learning rate')
parser.add_argument(
'--momentum', required=True, type=float, help='SGD momentum value')
parser.add_argument(
'--units',
required=True,
type=int,
help='number of units in last hidden layer')
parser.add_argument(
'--epochs',
required=False,
type=int,
default=10,
help='number of training epochs')
args = parser.parse_args()
return args
def preprocess_data(image, label):
Resizes and scales images.
image = tf.image.resize(image, (150, 150))
return tf.cast(image, tf.float32) / 255., label
def create_dataset(batch_size):
Loads Horses Or Humans dataset and preprocesses data.
data, info = tfds.load(
name='horses_or_humans', as_supervised=True, with_info=True)
# Create train dataset
train_data = data['train'].map(preprocess_data)
train_data = train_data.shuffle(1000)
train_data = train_data.batch(batch_size)
# Create validation dataset
validation_data = data['test'].map(preprocess_data)
validation_data = validation_data.batch(64)
return train_data, validation_data
def create_model(units, learning_rate, momentum):
Defines and compiles model.
inputs = tf.keras.Input(shape=(150, 150, 3))
x = tf.keras.layers.Conv2D(16, (3, 3), activation='relu')(inputs)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Conv2D(32, (3, 3), activation='relu')(x)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Conv2D(64, (3, 3), activation='relu')(x)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(units, activation='relu')(x)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)
model = tf.keras.Model(inputs, outputs)
model.compile(
loss='binary_crossentropy',
optimizer=tf.keras.optimizers.SGD(
learning_rate=learning_rate, momentum=momentum),
metrics=['accuracy'])
return model
def main():
args = get_args()
# Create Strategy
strategy = tf.distribute.MirroredStrategy()
# Scale batch size
GLOBAL_BATCH_SIZE = 64 * strategy.num_replicas_in_sync
train_data, validation_data = create_dataset(GLOBAL_BATCH_SIZE)
# Wrap model variables within scope
with strategy.scope():
model = create_model(args.units, args.learning_rate, args.momentum)
# Train model
history = model.fit(
train_data, epochs=args.epochs, validation_data=validation_data)
# Define Metric
hp_metric = history.history['val_accuracy'][-1]
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=hp_metric,
global_step=args.epochs)
if __name__ == '__main__':
main()
# Set the IMAGE_URI
IMAGE_URI = f"gcr.io/{PROJECT_ID}/horse-human:hypertune"
# Build the docker image
! docker build -f Dockerfile -t $IMAGE_URI ./
# Push it to Google Container Registry:
! docker push $IMAGE_URI
# Define required specifications
worker_pool_specs = [
{
"machine_spec": {
"machine_type": "n1-standard-4",
"accelerator_type": "NVIDIA_TESLA_T4",
"accelerator_count": 2,
},
"replica_count": 1,
"container_spec": {"image_uri": IMAGE_URI},
}
]
metric_spec = {"accuracy": "maximize"}
parameter_spec = {
"learning_rate": hpt.DoubleParameterSpec(min=0.001, max=1, scale="log"),
"momentum": hpt.DoubleParameterSpec(min=0, max=1, scale="linear"),
"units": hpt.DiscreteParameterSpec(values=[64, 128, 512], scale=None),
}
print(BUCKET_URI)
# Create a CustomJob
JOB_NAME = "horses-humans-hyperparam-job" + TIMESTAMP
my_custom_job = aiplatform.CustomJob(
display_name=JOB_NAME,
project=PROJECT_ID,
worker_pool_specs=worker_pool_specs,
staging_bucket=BUCKET_URI,
)
# Create and run HyperparameterTuningJob
hp_job = aiplatform.HyperparameterTuningJob(
display_name=JOB_NAME,
custom_job=my_custom_job,
metric_spec=metric_spec,
parameter_spec=parameter_spec,
max_trial_count=15,
parallel_trial_count=3,
project=PROJECT_ID,
search_algorithm=None,
)
hp_job.run()
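# Optional follow-up sketch (added): inspect the completed trials and pick the best one
# by the 'accuracy' metric. Field names follow the aiplatform Trial proto; treat this as
# illustrative, since failed trials may lack a final measurement.
best_trial = max(
    hp_job.trials, key=lambda trial: trial.final_measurement.metrics[0].value
)
print("Best trial id:", best_trial.id)
print("Best accuracy:", best_trial.final_measurement.metrics[0].value)
for param in best_trial.parameters:
    print(param.parameter_id, param.value)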
# Set this to true only if you'd like to delete your bucket
delete_bucket = False
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil rm -r $BUCKET_URI
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Providing more meaningful annotation information
Step2: Viewing location of sensors over brain surface
Step3: Selecting channels appropriate for detecting neural responses
Step4: Converting from raw intensity to optical density
Step5: Evaluating the quality of the data
Step6: In this example we will mark all channels with a SCI less than 0.5 as bad
Step7: At this stage it is appropriate to inspect your data
Step8: Removing heart rate from signal
Step9: Extract epochs
Step10: Next we define the range of our epochs, the rejection criteria,
Step11: View consistency of responses across trials
Step12: We can also view the epoched data for the control condition and observe
Step13: View consistency of responses across channels
Step14: Plot standard fNIRS response image
Step15: View topographic representation of activity
Step16: Compare tapping of left and right hands
Step17: And we also view the HbR activity for the two conditions.
Step18: And we can plot the comparison at a single time point for two conditions.
Step19: Lastly, we can also look at the individual waveforms to see what is
|
<ASSISTANT_TASK:>
Python Code:
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
from itertools import compress
import mne
fnirs_data_folder = mne.datasets.fnirs_motor.data_path()
fnirs_cw_amplitude_dir = op.join(fnirs_data_folder, 'Participant-1')
raw_intensity = mne.io.read_raw_nirx(fnirs_cw_amplitude_dir, verbose=True)
raw_intensity.load_data()
raw_intensity.annotations.set_durations(5)
raw_intensity.annotations.rename({'1.0': 'Control',
'2.0': 'Tapping/Left',
'3.0': 'Tapping/Right'})
unwanted = np.nonzero(raw_intensity.annotations.description == '15.0')
raw_intensity.annotations.delete(unwanted)
subjects_dir = op.join(mne.datasets.sample.data_path(), 'subjects')
brain = mne.viz.Brain(
'fsaverage', subjects_dir=subjects_dir, background='w', cortex='0.5')
brain.add_sensors(
raw_intensity.info, trans='fsaverage',
fnirs=['channels', 'pairs', 'sources', 'detectors'])
brain.show_view(azimuth=20, elevation=60, distance=400)
picks = mne.pick_types(raw_intensity.info, meg=False, fnirs=True)
dists = mne.preprocessing.nirs.source_detector_distances(
raw_intensity.info, picks=picks)
raw_intensity.pick(picks[dists > 0.01])
raw_intensity.plot(n_channels=len(raw_intensity.ch_names),
duration=500, show_scrollbars=False)
raw_od = mne.preprocessing.nirs.optical_density(raw_intensity)
raw_od.plot(n_channels=len(raw_od.ch_names),
duration=500, show_scrollbars=False)
sci = mne.preprocessing.nirs.scalp_coupling_index(raw_od)
fig, ax = plt.subplots()
ax.hist(sci)
ax.set(xlabel='Scalp Coupling Index', ylabel='Count', xlim=[0, 1])
raw_od.info['bads'] = list(compress(raw_od.ch_names, sci < 0.5))
raw_haemo = mne.preprocessing.nirs.beer_lambert_law(raw_od, ppf=0.1)
raw_haemo.plot(n_channels=len(raw_haemo.ch_names),
duration=500, show_scrollbars=False)
fig = raw_haemo.plot_psd(average=True)
fig.suptitle('Before filtering', weight='bold', size='x-large')
fig.subplots_adjust(top=0.88)
raw_haemo = raw_haemo.filter(0.05, 0.7, h_trans_bandwidth=0.2,
l_trans_bandwidth=0.02)
fig = raw_haemo.plot_psd(average=True)
fig.suptitle('After filtering', weight='bold', size='x-large')
fig.subplots_adjust(top=0.88)
events, event_dict = mne.events_from_annotations(raw_haemo)
fig = mne.viz.plot_events(events, event_id=event_dict,
sfreq=raw_haemo.info['sfreq'])
fig.subplots_adjust(right=0.7) # make room for the legend
reject_criteria = dict(hbo=80e-6)
tmin, tmax = -5, 15
epochs = mne.Epochs(raw_haemo, events, event_id=event_dict,
tmin=tmin, tmax=tmax,
reject=reject_criteria, reject_by_annotation=True,
proj=True, baseline=(None, 0), preload=True,
detrend=None, verbose=True)
epochs.plot_drop_log()
epochs['Tapping'].plot_image(combine='mean', vmin=-30, vmax=30,
ts_args=dict(ylim=dict(hbo=[-15, 15],
hbr=[-15, 15])))
epochs['Control'].plot_image(combine='mean', vmin=-30, vmax=30,
ts_args=dict(ylim=dict(hbo=[-15, 15],
hbr=[-15, 15])))
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(15, 6))
clims = dict(hbo=[-20, 20], hbr=[-20, 20])
epochs['Control'].average().plot_image(axes=axes[:, 0], clim=clims)
epochs['Tapping'].average().plot_image(axes=axes[:, 1], clim=clims)
for column, condition in enumerate(['Control', 'Tapping']):
for ax in axes[:, column]:
ax.set_title('{}: {}'.format(condition, ax.get_title()))
evoked_dict = {'Tapping/HbO': epochs['Tapping'].average(picks='hbo'),
'Tapping/HbR': epochs['Tapping'].average(picks='hbr'),
'Control/HbO': epochs['Control'].average(picks='hbo'),
'Control/HbR': epochs['Control'].average(picks='hbr')}
# Rename channels until the encoding of frequency in ch_name is fixed
for condition in evoked_dict:
evoked_dict[condition].rename_channels(lambda x: x[:-4])
color_dict = dict(HbO='#AA3377', HbR='b')
styles_dict = dict(Control=dict(linestyle='dashed'))
mne.viz.plot_compare_evokeds(evoked_dict, combine="mean", ci=0.95,
colors=color_dict, styles=styles_dict)
times = np.arange(-3.5, 13.2, 3.0)
topomap_args = dict(extrapolate='local')
epochs['Tapping'].average(picks='hbo').plot_joint(
times=times, topomap_args=topomap_args)
times = np.arange(4.0, 11.0, 1.0)
epochs['Tapping/Left'].average(picks='hbo').plot_topomap(
times=times, **topomap_args)
epochs['Tapping/Right'].average(picks='hbo').plot_topomap(
times=times, **topomap_args)
epochs['Tapping/Left'].average(picks='hbr').plot_topomap(
times=times, **topomap_args)
epochs['Tapping/Right'].average(picks='hbr').plot_topomap(
times=times, **topomap_args)
fig, axes = plt.subplots(nrows=2, ncols=4, figsize=(9, 5),
gridspec_kw=dict(width_ratios=[1, 1, 1, 0.1]))
vmin, vmax, ts = -8, 8, 9.0
evoked_left = epochs['Tapping/Left'].average()
evoked_right = epochs['Tapping/Right'].average()
evoked_left.plot_topomap(ch_type='hbo', times=ts, axes=axes[0, 0],
vmin=vmin, vmax=vmax, colorbar=False,
**topomap_args)
evoked_left.plot_topomap(ch_type='hbr', times=ts, axes=axes[1, 0],
vmin=vmin, vmax=vmax, colorbar=False,
**topomap_args)
evoked_right.plot_topomap(ch_type='hbo', times=ts, axes=axes[0, 1],
vmin=vmin, vmax=vmax, colorbar=False,
**topomap_args)
evoked_right.plot_topomap(ch_type='hbr', times=ts, axes=axes[1, 1],
vmin=vmin, vmax=vmax, colorbar=False,
**topomap_args)
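# Compute the left-minus-right difference wave (weights 1 and -1)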
evoked_diff = mne.combine_evoked([evoked_left, evoked_right], weights=[1, -1])
evoked_diff.plot_topomap(ch_type='hbo', times=ts, axes=axes[0, 2:],
vmin=vmin, vmax=vmax, colorbar=True,
**topomap_args)
evoked_diff.plot_topomap(ch_type='hbr', times=ts, axes=axes[1, 2:],
vmin=vmin, vmax=vmax, colorbar=True,
**topomap_args)
for column, condition in enumerate(
['Tapping Left', 'Tapping Right', 'Left-Right']):
for row, chroma in enumerate(['HbO', 'HbR']):
axes[row, column].set_title('{}: {}'.format(chroma, condition))
fig.tight_layout()
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(6, 4))
mne.viz.plot_evoked_topo(epochs['Left'].average(picks='hbo'), color='b',
axes=axes, legend=False)
mne.viz.plot_evoked_topo(epochs['Right'].average(picks='hbo'), color='r',
axes=axes, legend=False)
# Tidy the legend:
leg_lines = [line for line in axes.lines if line.get_c() == 'b'][:1]
leg_lines.append([line for line in axes.lines if line.get_c() == 'r'][0])
fig.legend(leg_lines, ['Left', 'Right'], loc='lower right')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: In addition to needing a train/test split, we need to ensure reasonable class balance. A simple approach is to shuffle both lists before taking a random sample without replacement.
Step4: Feature Engineering
Step5: Classifier
Step6: Overall, the accuracy is low, but interestingly, there is a pattern. We never mistake seriations which have an "early" event for those with a "late" event, but we have trouble telling an early split from an early coalescence, and trouble telling a late split from a late coalescence. This is a slightly weird result, actually.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import networkx as nx
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import cPickle as pickle
from copy import deepcopy
from sklearn.utils import shuffle
%matplotlib inline
plt.style.use("fivethirtyeight")
sns.set()
all_graphs = pickle.load(open("train-cont-graphs.pkl",'r'))
all_labels = pickle.load(open("train-cont-labels.pkl",'r'))
def train_test_split(graph_list, label_list, test_fraction=0.20):
    """Randomly splits a set of graphs and labels into training and testing data sets. We need a custom function
because the dataset isn't a numeric matrix, but a list of NetworkX Graph objects. In case there is class
structure (i.e., we filled the arrays first with instances of one class, then another class...) we consistently
    shuffle both lists.
    """
graph_list, label_list = shuffle(graph_list, label_list)
    # np.random.random_integers is deprecated and inclusive of its upper bound, which
    # could index past the end of the list; np.random.choice also samples without replacement
    rand_ix = np.random.choice(len(graph_list), size=int(len(graph_list) * test_fraction), replace=False)
print "random indices: %s" % rand_ix
test_graphs = []
test_labels = []
train_graphs = []
train_labels = []
# first copy the chosen test values, without deleting anything since that would alter the indices
for ix in rand_ix:
test_graphs.append(graph_list[ix])
test_labels.append(label_list[ix])
# now copy the indices that are NOT in the test index list
for ix in range(0, len(graph_list)):
if ix in rand_ix:
continue
train_graphs.append(graph_list[ix])
train_labels.append(label_list[ix])
return (train_graphs, train_labels, test_graphs, test_labels)
train_graphs, train_labels, test_graphs, test_labels = train_test_split(all_graphs, all_labels, test_fraction=0.1)
print "train size: %s" % len(train_graphs)
print "test size: %s" % len(test_graphs)
def graphs_to_eigenvalue_matrix(graph_list, num_eigenvalues = None):
    """Given a list of NetworkX graphs, returns a numeric matrix where rows represent graphs,
and columns represent the reverse sorted eigenvalues of the Laplacian matrix for each graph,
possibly trimmed to only use the num_eigenvalues largest values. If num_eigenvalues is
    unspecified, all eigenvalues are used.
    """
# we either use all of the eigenvalues, or the number requested (and zero-pad if needed)
    if num_eigenvalues is None:
        # use the largest node count across the graphs (the original `n` was undefined)
        ev_used = max(g.number_of_nodes() for g in graph_list)
    else:
        ev_used = num_eigenvalues
data_mat = np.zeros((len(graph_list),ev_used))
for ix in range(0, len(graph_list)):
spectrum = sorted(nx.spectrum.laplacian_spectrum(graph_list[ix], weight=None), reverse=True)
# if the spectrum is shorter than the number of eigenvalues used (due to multiplicity), zero pad the result
if len(spectrum) < ev_used:
spectrum = np.lib.pad(spectrum, (0,ev_used-len(spectrum)), 'constant', constant_values=(0,0))
data_mat[ix,:] = spectrum[0:ev_used]
return data_mat
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
train_matrix = graphs_to_eigenvalue_matrix(train_graphs, num_eigenvalues=20)
test_matrix = graphs_to_eigenvalue_matrix(test_graphs, num_eigenvalues=20)
print train_matrix.shape
print test_matrix.shape
clf = GradientBoostingClassifier(n_estimators = 250)
clf.fit(train_matrix, train_labels)
pred_label = clf.predict(test_matrix)
cm = confusion_matrix(test_labels, pred_label)
cmdf = pd.DataFrame(cm)
cmdf.columns = map(lambda x: 'predicted {}'.format(x), cmdf.columns)
cmdf.index = map(lambda x: 'actual {}'.format(x), cmdf.index)
print cmdf
print classification_report(test_labels, pred_label)
print "Accuracy on test: %0.3f" % accuracy_score(test_labels, pred_label)
from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV
pipeline = Pipeline([
('clf', GradientBoostingClassifier())
])
params = {
'clf__learning_rate': [5.0,2.0,1.0, 0.75, 0.5, 0.25, 0.1, 0.05, 0.01, 0.005],
'clf__n_estimators': [10,25,50,100,250,500,1000]
}
grid_search = GridSearchCV(pipeline, params, cv=5, n_jobs = -1, verbose = 1)
grid_search.fit(train_matrix, train_labels)
print("Best score: %0.3f" % grid_search.best_score_)
print("Best parameters:")
best_params = grid_search.best_estimator_.get_params()
for param in sorted(params.keys()):
print("param: %s: %r" % (param, best_params[param]))
pred_label = grid_search.predict(test_matrix)
cm = confusion_matrix(test_labels, pred_label)
cmdf = pd.DataFrame(cm)
cmdf.columns = map(lambda x: 'predicted {}'.format(x), cmdf.columns)
cmdf.index = map(lambda x: 'actual {}'.format(x), cmdf.index)
print cmdf
print classification_report(test_labels, pred_label)
print "Accuracy on test: %0.3f" % accuracy_score(test_labels, pred_label)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Downloading Data
Step2: Using KeOps with a GPyTorch Model
Step3: Compute RMSE
|
<ASSISTANT_TASK:>
Python Code:
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
import urllib.request
import os.path
from scipy.io import loadmat
from math import floor
if not os.path.isfile('../3droad.mat'):
print('Downloading \'3droad\' UCI dataset...')
urllib.request.urlretrieve('https://www.dropbox.com/s/f6ow1i59oqx05pl/3droad.mat?dl=1', '../3droad.mat')
data = torch.Tensor(loadmat('../3droad.mat')['data'])
import numpy as np
N = data.shape[0]
# make train/val/test
n_train = int(0.8 * N)
train_x, train_y = data[:n_train, :-1], data[:n_train, -1]
test_x, test_y = data[n_train:, :-1], data[n_train:, -1]
# normalize features
mean = train_x.mean(dim=-2, keepdim=True)
std = train_x.std(dim=-2, keepdim=True) + 1e-6 # prevent dividing by 0
train_x = (train_x - mean) / std
test_x = (test_x - mean) / std
# normalize labels
mean, std = train_y.mean(),train_y.std()
train_y = (train_y - mean) / std
test_y = (test_y - mean) / std
# make continguous
train_x, train_y = train_x.contiguous(), train_y.contiguous()
test_x, test_y = test_x.contiguous(), test_y.contiguous()
output_device = torch.device('cuda:0')
train_x, train_y = train_x.to(output_device), train_y.to(output_device)
test_x, test_y = test_x.to(output_device), test_y.to(output_device)
# We will use the simplest form of GP model, exact inference
class ExactGPModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(ExactGPModel, self).__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.ConstantMean()
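        # the KeOps kernel evaluates covariances lazily on the GPU, avoiding an O(n^2) dense matrix in memory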
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.keops.MaternKernel(nu=2.5))
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
# initialize likelihood and model
likelihood = gpytorch.likelihoods.GaussianLikelihood().cuda()
model = ExactGPModel(train_x, train_y, likelihood).cuda()
# Find optimal model hyperparameters
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.1) # Includes GaussianLikelihood parameters
# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
import time
training_iter = 50
for i in range(training_iter):
start_time = time.time()
# Zero gradients from previous iteration
optimizer.zero_grad()
# Output from model
output = model(train_x)
# Calc loss and backprop gradients
loss = -mll(output, train_y)
loss.backward()
print('Iter %d/%d - Loss: %.3f lengthscale: %.3f noise: %.3f' % (
i + 1, training_iter, loss.item(),
model.covar_module.base_kernel.lengthscale.item(),
model.likelihood.noise.item()
))
optimizer.step()
print(time.time() - start_time)
# Get into evaluation (predictive posterior) mode
model.eval()
likelihood.eval()
# Make predictions on the held-out test set by feeding the model output through the likelihood
with torch.no_grad(), gpytorch.settings.fast_pred_var():
observed_pred = likelihood(model(test_x))
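# Root-mean-squared error of the posterior mean against the held-out targets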
torch.sqrt(torch.mean(torch.pow(observed_pred.mean - test_y, 2)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lorena Barba's tutorial; see this notebook for more detail on the Jupyter notebook.
Step2: IPython works by starting a web server on your PC and then serving pages from this server. This web server has an IP address of 127.0.0.1 (localhost), on port 8888, i.e., browser address 127.0.0.1:8888
Step3: You can see all the IPython options by typing ipython --help.
Step4: Once in the Sophos control panel, select the following menu option
Step5: If the 127.0.0.1 loopback IP address does not appear in the list, add it as shown above.
Step6: Notebook cells
Step7: Command-line conversion
Step8: There are 'magics' to support a number of other languages in the IPython notebook. From the IPython notebook website
Step9: To learn more about a magic, execute ? followed by the magic, as in
Step10: BTW, you can also display the docstring of any python function by
Step11: The %who command without any arguments will list all variables that exist in the global scope. Passing a parameter like str will list only variables of that type.
Step12: Use the %%writefile magic to write contents to a file. The file can be read in normal Python or it can be read and popped up in a window by %pycat.
Step13: The %edit magic is supposed to start up an editor from within IPython - I never got that to work under Windows.
Step14: The %run magic runs a script at the stated path and prints the results to the cell output. Useful to run external scripts not coded in the notebook itself. Just be sure to copy the script with the notebook. The next cell writes a script to the current directory and then the following cell executes it.
Step15: %run can also run Jupyter/IPython notebooks and insert the output of the notebook into the current result cell.
Step16: The %store command lets you pass variables between two different notebooks.
Step18: %pastebin 'file.py' uploads code to pastebin and returns the URL.
Step19: Magics for using IPython for numeric and scientific work
Step20: Writing functions in other languages
Step21: UK English spell checker
Step22: You can also access the built-in Python help system by typing help(objectname) and then executing the cell.
Step23: Tab-completion help is also available; just press TAB after the period
Step24: You can also obtain the docstring by prepending a function with a question mark and then executing.
Step25: <a name="NotebooksRemember">
Step26: A similar error occurs in Pandas if the first cell creates a dataframe, and the second cell adds a new column to the dataframe. If the second cell is executed a number of times, many columns will be added, which is not what the code intends when executed linearly from start to end.
Step27: Clearing the IPython memory
Step28: By just importing seaborn, the Matplotlib graphs are given a different style. If seaborn is not installed, do conda install seaborn.
Step29: Matplotlib in qt window
Step30: Images can also be included as markdown by using the following format
Step31: Embedding other media
Step32: Embed a video from YouTube.
Step33: Embed an external web page.
Step35: Embed a video from local file system
Step36: Interactive widgets
Step37: The following two cells illustrate how a slider is used in the widgets.interactive function to test the value of the slider and then do something with the value. The example below shows how to pass 'fixed' or non-widget parameters to the function. Any number of such widgets may be passed, but they must all be named.
Step39: The following is an example by Ketcheson, Ahmadia and Granger taken from
Step40: An example at https
Step41: https
Step42: Interdependent widgets
Step43: The following information is somewhat esoteric; you need not go into this
Step44: Test
Step45: Notebook file format
Step47: Running notebook servers
Step48: HTML formatting in dynamic output display
Step50: Displaying tables in HTML
Step51: Fine-tuning IPython typographic output appearance
Step52: But you can instruct IPython to use default display as follows
Step53: Making slides from IPython notebooks
Step54: Class Descriptors
Step55: Class and Instance Attributes
Step56: Python and module versions, and dates
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import display
from IPython.display import Image
from IPython.display import HTML
display(Image(filename='images/portalpage.png'))
display(Image(filename='images/ipaddress.png'))
display(Image(filename='images/sophos01.png'))
display(Image(filename='images/sophos02.png'))
display(Image(filename='images/ipython-newnotebook.png'))
HTML('<img src="images/convertfile.png" width=600 height=300/>')
import os
# test to see if this is Linux or Windows
if os.sep == '/':  # os.path is a module; compare the path separator instead
!ls *.ipynb
else:
!dir *.ipynb
%lsmagic
?%timeit
#remove only variable b
a=1; b=2; c=3; b1m=4; b2m=5; b3m=6; b4m=7; b2s=8
%reset_selective -f ^b$
%who_ls
#remove all variables starting with the letter b
a=1; b=2; c=3; b1m=4; b2m=5; b3m=6; b4m=7; b2s=8
%reset_selective -f b
%who_ls
one = "for the money"
two = "for the show"
three = "to get ready now go cat go"
%who str
%%writefile test.txt
This is a test file!
It can contain anything I want...
more...
#open the file and read its contents
with open('test.txt', 'r') as fi:
print('{}'.format(' '.join(fi.readlines())))
%pycat test.txt
# The line below sets the environment variable OMP_NUM_THREADS
%env OMP_NUM_THREADS=4
%%file helloipython.py
print('Hello IPython!')
%run helloipython.py
%run ./PlayStats.ipynb
%timeit range(100)
from IPython.display import Latex
Latex(r"""\begin{eqnarray}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{eqnarray}""")
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
5
a = 6
7
from IPython.display import YouTubeVideo
# a talk about the ICalico spell checker extension
YouTubeVideo('Km3AtRynWFQ')
from collections import defaultdict
# defaultdict?
display(Image(filename='images/introspection.png'))
# help(defaultdict)
display(Image(filename='images/python-help.png'))
# defaultdict.
display(Image(filename='images/tabcompletion.png'))
display(Image(filename='images/ipythonhelp.png'))
#run this cell once
a = 5
#run this cell several times
a = a + 1
print(a)
%reload_ext autoreload
%autoreload 2
import numpy as np
%matplotlib inline
import pylab as pl
import numpy as np
t = np.arange(0.0, 2.0, 0.01)
s = np.sin(2*np.pi*t)
pl.plot(t, s)
pl.xlabel('time (s)')
pl.ylabel('voltage (mV)')
pl.title('About as simple as it gets, folks')
pl.grid(True)
# savefig("test.png")
# show()
import seaborn as sns
t = np.arange(0.0, 2.0, 0.01)
s = np.sin(2*np.pi*t)
pl.plot(t, s)
pl.xlabel('time (s)')
pl.ylabel('voltage (mV)')
pl.title('About as simple as it gets, folks')
pl.grid(True)
# savefig("test.png")
# show()
HTML('<img src="images/ipythonhelp.png" width=400 height=200/>')
display(Image(filename='images/ipythonhelp.png', width=250, height=250))
# by default Image data are embedded
picUrl = 'https://raw.githubusercontent.com/NelisW/pyradi/master/pyradi/doc/_images/pyradi.png'
Embed = Image(picUrl)
display(Embed)
# if kwarg `url` is given, the embedding is assumed to be false
# SoftLinked = Image(url=picUrl)
# In each case, embed can be specified explicitly with the `embed` kwarg
# ForceEmbed = Image(url=picUrl, embed=True)
from IPython.display import SVG
SVG(filename='images/solidangleflatplate.svg')
from IPython.display import YouTubeVideo
# a talk about IPython at Sage Days at U. Washington, Seattle.
# Video credit: William Stein.
if False:
YouTubeVideo('1j_HxD4iLn8')
if False:
HTML('<iframe src=https://en.wikipedia.org/wiki/Einstein width=700 height=350></iframe>')
# display a locally saved video file.
# it seems that only webm format works here
import io
import base64
from IPython.core.display import HTML
filename = './images/interpolationSphere.webm'
# video = io.open(filename, 'r+b').read()
# encoded = base64.b64encode(video)
# HTML(data='''<video alt="Data set video" controls>
# <source src="data:video/mp4;base64,{0}" type="video/mp4" />
# </video>'''.format(encoded.decode('ascii')))
HTML("""
<div align="middle">
<video width="40%" controls>
<source src="{}" type="video/webm">
</video></div>""".format(filename))
# import IPython.html.widgets as widgets
from IPython.display import display
import ipywidgets
from ipywidgets import widgets
[n for n in dir(ipywidgets) if n[0] == n[0].upper() and not n[0] == '_']
xx = widgets.FloatSlider(
value=7.5,
min=5.0,
max=10.0,
step=0.1,
description='Test:',
)
y = ipywidgets.Checkbox(
description='Check me',
value=True,
)
w = ipywidgets.Dropdown(
options = [ 'test 1', 'option 2', 'selection 3',],
value='option 2',
description='Number:',
)
#use ordered dic to get required sorting sequence
from collections import OrderedDict
foclens = [ 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1., 1.1, 1.2, 1.3, 1.4, 1.5]
m = ipywidgets.Dropdown(
options = OrderedDict([(str(x), str(x)) for x in foclens]) ,
value='0.4',
description='Focal length:',
)
from IPython.display import display
display(xx)
display(y)
display(w)
display(m)
# http://stackoverflow.com/questions/28529157/dynamically-changing-dropdowns-in-ipython-notebook-widgets-and-spyre
# from IPython.html import widgets
from IPython.display import display
geo={'USA':['CHI','NYC'],'Russia':['MOW','LED']}
def print_city(city):
print(city)
def select_city(country):
cityW.options = geo[country]
scW = ipywidgets.Select(options=geo.keys())
init = scW.value
cityW = ipywidgets.Select(options=geo[init])
j = ipywidgets.interactive(print_city, city=cityW)
i = ipywidgets.interactive(select_city, country=scW)
display(i)
display(j)
def doSomething(scale, thx):
print('scale={} thx={} product={}'.format(scale, thx, scale * thx))
return (scale, thx)
scale = 5.0
v = ipywidgets.interactive(doSomething, scale=ipywidgets.fixed(scale),
thx=ipywidgets.FloatSlider(value=128, min=0.0, max=255.0, step=1))
display(v)
form = widgets.VBox()
first = widgets.Text(description="First Name:")
last = widgets.Text(description="Last Name:")
students = widgets.VBox(visible=True, children=[
widgets.Checkbox(description="Student1:", value=False),
widgets.Checkbox(description="Student2:", value=False),
])
student = widgets.Checkbox(description="Student:", value=False)
school_info = widgets.VBox(visible=False, children=[
widgets.Text(description="School:"),
widgets.IntText(description="Grade:", min=0, max=12)
])
pet = widgets.Text(description="Pet's Name:")
form.children = [first, last, student, students, school_info, pet]
display(form)
def on_student_toggle(change):
    # observe passes a single change dict; the new checkbox state is in change['new']
    school_info.visible = change['new']
student.observe(on_student_toggle, 'value')
students.children[0].observe(on_student_toggle, 'value')
students.children[1].observe(on_student_toggle, 'value')
form = widgets.VBox()
first = widgets.Text(description="First Name:")
last = widgets.Text(description="Last Name:")
student = widgets.Checkbox(description="Student:", value=False)
school_info = widgets.VBox(visible=False, children=[
widgets.Text(description="School:"),
widgets.IntText(description="Grade:", min=0, max=12)
])
pet = widgets.Text(description="Pet's Name:")
form.children = [first, last, student, school_info, pet]
display(form)
def on_student_toggle(change):
    # observe passes a single change dict; the new checkbox state is in change['new']
    school_info.visible = change['new']
student.observe(on_student_toggle, 'value')
from IPython.display import display
float_range = widgets.FloatSlider()
string = widgets.Text(value='hi')
container = widgets.Box(children=[float_range, string])
container.border_color = 'red'
container.border_style = 'dotted'
container.border_width = 3
display(container) # Displays the `container` and all of it's children.
def on_string_change(sender):
    # on_submit passes the widget itself; print its current value
    print(sender.value)
# string.on_trait_change(on_string_change,'value')
string.on_submit(on_string_change)
# Import matplotlib (plotting) and numpy (numerical arrays).
# This enables their use in the Notebook.
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# Import IPython's interact function which is used below to
# build the interactive widgets
from ipywidgets import interact#, interactive, fixed, interact_manual
# import ipywidgets as widgets
def plot_sine(frequency=4.0, grid_points=12, plot_original=True):
    """Plot discrete samples of a sine wave on the interval ``[0, 1]``."""
x = np.linspace(0, 1, grid_points + 2)
y = np.sin(2 * frequency * np.pi * x)
xf = np.linspace(0, 1, 1000)
yf = np.sin(2 * frequency * np.pi * xf)
fig, ax = plt.subplots(figsize=(8, 6))
ax.set_xlabel('x')
ax.set_ylabel('signal')
ax.set_title('Aliasing in discretely sampled periodic signal')
if plot_original:
ax.plot(xf, yf, color='red', linestyle='solid', linewidth=2)
ax.plot(x, y, marker='o', linewidth=2)
# The interact function automatically builds a user interface for exploring the
# plot_sine function.
interact(plot_sine, frequency=(1.0, 22.0, 0.5), grid_points=(10, 60, 1), plot_original=True);
n_weights = 10
weight_sliders = [widgets.FloatSlider(value=0,min=-2,max=2,step=0.1,description=f's{i}',
disabled=False,continuous_update=False,orientation='horizontal',
readout=True,readout_format='.2f') for i in range(n_weights)]
def PlotSuper(**kwargs):
def f(x):
y=0
for i,weight in enumerate(kwargs.values()):
if i==0:
y+=weight
else:
y+=weight*np.sin(x*i)
return y
vf = np.vectorize(f)
xx= np.arange(0,6,0.1)
plt.plot(xx,vf(xx))
plt.gca().set_ylim(-5,5)
kwargs = {f's{i}':slider for i,slider in enumerate(weight_sliders)}
interact(PlotSuper,**kwargs)
from ipywidgets import Button, Layout
b = Button(description='(50% width, 80px height) button',
layout=Layout(width='50%', height='80px'))
b
c = Button(description='Another button with the same layout', layout=b.layout)
c
from ipywidgets import Button, HBox, VBox
words = ['correct', 'horse', 'battery', 'staple']
items = [Button(description=w) for w in words]
left_box = VBox([items[0], items[1]])
right_box = VBox([items[2], items[3]])
HBox([left_box, right_box])
from ipywidgets import IntSlider, Label
IntSlider(description=r'\(\int_0^t f\)')
from ipywidgets import Layout, Button, Box
items_layout = Layout( width='auto') # override the default width of the button to 'auto' to let the button grow
box_layout = Layout(display='flex',
flex_flow='column',
align_items='stretch',
border='solid',
width='50%')
words = ['correct', 'horse', 'battery', 'staple']
items = [Button(description=word, layout=items_layout, button_style='danger') for word in words]
box = Box(children=items, layout=box_layout)
box
from ipywidgets import Layout, Button, Box, VBox
# Items flex proportionally to the weight and the left over space around the text
items_auto = [
Button(description='weight=1; auto', layout=Layout(flex='1 1 auto', width='auto'), button_style='danger'),
Button(description='weight=3; auto', layout=Layout(flex='3 1 auto', width='auto'), button_style='danger'),
Button(description='weight=1; auto', layout=Layout(flex='1 1 auto', width='auto'), button_style='danger'),
]
# Items flex proportionally to the weight
items_0 = [
Button(description='weight=1; 0%', layout=Layout(flex='1 1 0%', width='auto'), button_style='danger'),
Button(description='weight=3; 0%', layout=Layout(flex='3 1 0%', width='auto'), button_style='danger'),
Button(description='weight=1; 0%', layout=Layout(flex='1 1 0%', width='auto'), button_style='danger'),
]
box_layout = Layout(display='flex',
flex_flow='row',
align_items='stretch',
width='70%')
box_auto = Box(children=items_auto, layout=box_layout)
box_0 = Box(children=items_0, layout=box_layout)
VBox([box_auto, box_0])
from ipywidgets import Layout, Button, Box, FloatText, Textarea, Dropdown, Label, IntSlider
form_item_layout = Layout(
display='flex',
flex_flow='row',
justify_content='space-between'
)
form_items = [
Box([Label(value='Age of the captain'), IntSlider(min=40, max=60)], layout=form_item_layout),
Box([Label(value='Egg style'),
Dropdown(options=['Scrambled', 'Sunny side up', 'Over easy'])], layout=form_item_layout),
Box([Label(value='Ship size'),
FloatText()], layout=form_item_layout),
Box([Label(value='Information'),
Textarea()], layout=form_item_layout)
]
form = Box(form_items, layout=Layout(
display='flex',
flex_flow='column',
border='solid 2px',
align_items='stretch',
width='50%'
))
form
from ipywidgets import Layout, Button, Box
item_layout = Layout(height='100px', min_width='40px')
items = [Button(layout=item_layout, description=str(i), button_style='warning') for i in range(40)]
box_layout = Layout(overflow_x='scroll',
border='3px solid black',
width='500px',
height='',
flex_flow='row',
display='flex')
carousel = Box(children=items, layout=box_layout)
VBox([Label('Scroll horizontally:'), carousel])
def makeplot(title,display_trend,marker,amplitude,step_size,periods,noise_scale,offset,trend):
pass
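# The notebook leaves makeplot as a stub. A minimal sketch of a body the widgets
# below could drive, overriding the stub above (an assumption, not the original
# author's implementation):
def makeplot(title, display_trend, marker, amplitude, step_size, periods,
             noise_scale, offset, trend):
    t = np.arange(0, periods * 2 * np.pi, step_size)
    y = offset + amplitude * np.sin(t) + noise_scale * np.random.randn(len(t))
    plt.plot(t, y, marker)
    if display_trend:
        plt.plot(t, offset + trend * t / t[-1])
    plt.title(title)
    plt.show()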
def interact_hookup(f, controls):
from ipywidgets import Output
out = Output()
def observer(change):
out.clear_output()
kwargs = {k:v.value for k,v in controls.items()}
with out:
f(**kwargs)
for k,w in controls.items():
w.observe(observer, 'value')
observer(None)
return out
w = dict(
title=widgets.Text(value='Hello World', placeholder='Type something', description='Title:', disabled=False),
display_trend=widgets.ToggleButton(value=False, description='Display Trend', icon='check'),
marker=widgets.RadioButtons(options=['x', 'o', '.'], value='x', description='Marker:'),
amplitude=widgets.FloatSlider(value=1, min=-5, max=5, description='Amplitude:'),
step_size=widgets.FloatSlider(value=0.1, min=0.01, max=0.1, step=0.01, description='Step size:'),
periods=widgets.FloatSlider(value=5, min=1, max=20, description='Periods:'),
noise_scale=widgets.FloatSlider(value=0.1, min=0.01, max=2, description='Noise:'),
offset=widgets.FloatSlider(value=0, min=-5, max=5, description='Offset:'),
trend=widgets.FloatSlider(value=1, min=-5, max=5, description='Trend:'),
)
output = interact_hookup(makeplot, w)
UI = VBox([
HBox([
VBox([
w['title'],
w['display_trend'],
w['marker'],
]),
VBox([
w['amplitude'],
w['step_size'],
w['periods'],
w['noise_scale'],
w['offset'],
w['trend'],
])
]),
output
])
display(UI)
import numpy as np
from ipywidgets import HBox,VBox,Button,FloatSlider,FloatProgress,interactive
# set up the widgets with precalculated values
# these sliders and prog bars are visible and are updated below in the softmax function
sliders = {'1':[2.5,0.31], '2':[-1.,0.009], '3':[3.2,0.633], '4':[0.5,0.043]}
sld = {key: FloatSlider(min=-5.0, max=+5.0, value=sliders[key][0], step=0.05, description=f'$z^L_{key}$') for key in sliders}
prb = {key: FloatProgress(value=sliders[key][1], min=0, max=1.0, step=0.01, description=f'$a^L_{key}$', bar_style='info', orientation='horizontal') for key in sliders}
# build and display the widget grid in pairs of sliders and prog bars
lstD = [HBox([sld[key], prb[key]]) for key in sld]
display(VBox(lstD))
# function is invoked if any of the sliders change
# and the result is used to change the progress bar
def softmax(**lstZ):
sum = 0
for key in lstZ:
sum += np.exp(lstZ[key])
for key in lstZ:
prb[key].value = np.exp(lstZ[key])/sum
# `interactive` does not display/show the widgets, already done above.
w = interactive(softmax, **sld )
def update_progress(progress, bar_length=20):
from IPython.display import clear_output
if isinstance(progress, int):
progress = float(progress)
if not isinstance(progress, float):
progress = 0
if progress < 0:
progress = 0
if progress >= 1:
progress = 1
block = int(round(bar_length * progress))
clear_output(wait = True)
text = "Progress: [{0}] {1:.1f}%".format( "#" * block + "-" * (bar_length - block), progress * 100)
print(text)
import time
print('before')
#Replace this with a real computation
number_of_elements = 10
for i in range(number_of_elements):
time.sleep(0.1)
# progress must be a float between 0 and 1
update_progress((i+1) / number_of_elements,bar_length=40)
print('after')
import pyradi.ryutils as ryutils
import time
print('before')
#Replace this with a real computation
number_of_elements = 10
for i in range(number_of_elements):
time.sleep(0.1)
# progress must be a float between 0 and 1
ryutils.update_progress((i+1) / number_of_elements,bar_length=40)
print('after')
import nbformat
nb = nbformat.read('01-IPythonHintsAndTips.ipynb', as_version=4)
nb.cells[0:5]
import markdown
class MD(str):
def _repr_html_(self):
return markdown.markdown(self)
import math
a = 2
MD("""
Dynamic demonstration
--------------
This is a mixture of markdown **and** html:<br>
The square root of {0} <font color="green">used to be</font> somewhere near {1}""".format(a, math.sqrt(a)))
from IPython.display import display, HTML
for x in range(3):
display(HTML("<p><i>Length</i> <b>" + str(x) + "</b>"))
class ListTable(list):
    """Overridden list class which takes a 2-dimensional list of
the form [[1,2,3],[4,5,6]], and renders an HTML Table in
    IPython Notebook.
    """
def _repr_html_(self):
html = ["<table>"]
for row in self:
html.append("<tr>")
for col in row:
html.append("<td>{0}</td>".format(col))
html.append("</tr>")
html.append("</table>")
return ''.join(html)
import random
table = ListTable()
table.append(['x', 'y', 'x-y', '(x-y)**2'])
for i in range(7):
x = random.uniform(0, 10)
y = random.uniform(0, 10)
table.append([x, y, x-y, (x-y)**2])
table
def poly2latex(p):
terms = ['%.2g' % p.coef[0]]
if len(p) > 1:
term = 'x'
c = p.coef[1]
if c!=1:
term = ('%.2g ' % c) + term
terms.append(term)
if len(p) > 2:
for i in range(2, len(p)):
term = 'x^%d' % i
c = p.coef[i]
if c!=1:
term = ('%.2g ' % c) + term
terms.append(term)
px = '$P(x)=%s$' % '+'.join(terms)
dom = r', domain: $[%.2g,\ %.2g]$' % tuple(p.domain)
return px+dom
import numpy as np
p = np.polynomial.Polynomial([1,2,3], [-10, 10])
from IPython.display import Latex
Latex(poly2latex(p))
ip = get_ipython()
latex_formatter = ip.display_formatter.formatters['text/latex']
latex_formatter.for_type_by_name('numpy.polynomial.polynomial',
'Polynomial', poly2latex)
p2 = np.polynomial.Polynomial([-20, 71, -15, 1])
p2
htmlContent = ''
def header(text):
raw_html = '<h1>' + str(text) + '</h1>'
return raw_html
def box(text):
raw_html = '<div style="border:1px dotted black;padding:2em;">'+str(text)+'</div>'
return raw_html
def addContent(raw_html):
global htmlContent
htmlContent += raw_html
# Example
addContent( header("This is an autogenerated header") )
addContent( box("This is some text in a box") )
from IPython.core.display import HTML
HTML(htmlContent)
class C:
def method(self):
pass
C.method is C.method
class C:
@classmethod
def method(cls):
pass
print(C.method is C.method)
print(id(C.method)==id(C.method))
a = C.method
b = C.method
print(id(a)==id(b))
C.__dict__
print(type(C.method))
print(type(C.__dict__['method']))
class Foo:
_num_instances = 0
def __init__(self):
self._num_instances += 1
# self.__class__._num_instances += 1
f1 = Foo()
f2 = Foo()
print(Foo._num_instances)
try:
import pyradi.ryutils as ryutils
print(ryutils.VersionInformation('matplotlib,numpy,pyradi,scipy,pandas'))
except:
print("pyradi.ryutils not found")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define transforms
Step2: Prepare train and test sets, loaders
Step3: Display a batch of images
Step4: Define the model
Step5: <div class="alert alert-info"><strong>Why <tt>(54x54x16)</tt>?</strong><br>
Step6: Looking at the trainable parameters
Step7: Train the model
Step8: Save the trained model
Step9: Evaluate model performance
Step10: Download a pretrained model
Step11: <div class="alert alert-info">This model uses <a href='https
Step12: Modify the classifier
Step13: Define loss function & optimizer
Step14: Train the model
Step15: Run a new image through the model
|
<ASSISTANT_TASK:>
Python Code:
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models # add models to the list
from torchvision.utils import make_grid
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# ignore harmless warnings
import warnings
warnings.filterwarnings("ignore")
train_transform = transforms.Compose([
transforms.RandomRotation(10), # rotate +/- 10 degrees
transforms.RandomHorizontalFlip(), # reverse 50% of images
transforms.Resize(224), # resize shortest side to 224 pixels
transforms.CenterCrop(224), # crop longest side to 224 pixels at center
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])
])
test_transform = transforms.Compose([
transforms.Resize(224),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])
])
root = '../Data/CATS_DOGS'
train_data = datasets.ImageFolder(os.path.join(root, 'train'), transform=train_transform)
test_data = datasets.ImageFolder(os.path.join(root, 'test'), transform=test_transform)
torch.manual_seed(42)
train_loader = DataLoader(train_data, batch_size=10, shuffle=True)
test_loader = DataLoader(test_data, batch_size=10, shuffle=True)
class_names = train_data.classes
print(class_names)
print(f'Training images available: {len(train_data)}')
print(f'Testing images available: {len(test_data)}')
# Grab the first batch of 10 images
for images,labels in train_loader:
break
# Print the labels
print('Label:', labels.numpy())
print('Class:', *np.array([class_names[i] for i in labels]))
im = make_grid(images, nrow=5) # the default nrow is 8
# Inverse normalize the images
inv_normalize = transforms.Normalize(
mean=[-0.485/0.229, -0.456/0.224, -0.406/0.225],
std=[1/0.229, 1/0.224, 1/0.225]
)
im_inv = inv_normalize(im)
# Print the images
plt.figure(figsize=(12,4))
plt.imshow(np.transpose(im_inv.numpy(), (1, 2, 0)));
class ConvolutionalNetwork(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 6, 3, 1)
self.conv2 = nn.Conv2d(6, 16, 3, 1)
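        # 224x224 input -> conv1 -> 222 -> pool/2 -> 111 -> conv2 -> 109 -> pool/2 -> 54,
        # with 16 output channels, hence the 54*54*16 flattened size below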
self.fc1 = nn.Linear(54*54*16, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 2)
def forward(self, X):
X = F.relu(self.conv1(X))
X = F.max_pool2d(X, 2, 2)
X = F.relu(self.conv2(X))
X = F.max_pool2d(X, 2, 2)
X = X.view(-1, 54*54*16)
X = F.relu(self.fc1(X))
X = F.relu(self.fc2(X))
X = self.fc3(X)
return F.log_softmax(X, dim=1)
torch.manual_seed(101)
CNNmodel = ConvolutionalNetwork()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(CNNmodel.parameters(), lr=0.001)
CNNmodel
def count_parameters(model):
params = [p.numel() for p in model.parameters() if p.requires_grad]
for item in params:
print(f'{item:>8}')
print(f'________\n{sum(params):>8}')
count_parameters(CNNmodel)
import time
start_time = time.time()
epochs = 3
max_trn_batch = 800
max_tst_batch = 300
train_losses = []
test_losses = []
train_correct = []
test_correct = []
for i in range(epochs):
trn_corr = 0
tst_corr = 0
# Run the training batches
for b, (X_train, y_train) in enumerate(train_loader):
# Limit the number of batches
if b == max_trn_batch:
break
b+=1
# Apply the model
y_pred = CNNmodel(X_train)
loss = criterion(y_pred, y_train)
# Tally the number of correct predictions
predicted = torch.max(y_pred.data, 1)[1]
batch_corr = (predicted == y_train).sum()
trn_corr += batch_corr
# Update parameters
optimizer.zero_grad()
loss.backward()
optimizer.step()
# Print interim results
if b%200 == 0:
print(f'epoch: {i:2} batch: {b:4} [{10*b:6}/8000] loss: {loss.item():10.8f} \
accuracy: {trn_corr.item()*100/(10*b):7.3f}%')
    train_losses.append(loss.item())  # store the Python scalar, not the graph-attached tensor
train_correct.append(trn_corr)
# Run the testing batches
with torch.no_grad():
for b, (X_test, y_test) in enumerate(test_loader):
# Limit the number of batches
if b == max_tst_batch:
break
# Apply the model
y_val = CNNmodel(X_test)
# Tally the number of correct predictions
predicted = torch.max(y_val.data, 1)[1]
tst_corr += (predicted == y_test).sum()
loss = criterion(y_val, y_test)
    test_losses.append(loss.item())
test_correct.append(tst_corr)
print(f'\nDuration: {time.time() - start_time:.0f} seconds') # print the time elapsed
torch.save(CNNmodel.state_dict(), 'CustomImageCNNModel.pt')
plt.plot(train_losses, label='training loss')
plt.plot(test_losses, label='validation loss')
plt.title('Loss at the end of each epoch')
plt.legend();
plt.plot([t/80 for t in train_correct], label='training accuracy')
plt.plot([t/30 for t in test_correct], label='validation accuracy')
plt.title('Accuracy at the end of each epoch')
plt.legend();
print(test_correct)
print(f'Test accuracy: {test_correct[-1].item()*100/3000:.3f}%')
AlexNetmodel = models.alexnet(pretrained=True)
AlexNetmodel
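# Freeze the pre-trained feature extractor so only the new classifier layers will train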
for param in AlexNetmodel.parameters():
param.requires_grad = False
torch.manual_seed(42)
AlexNetmodel.classifier = nn.Sequential(nn.Linear(9216, 1024),
nn.ReLU(),
nn.Dropout(0.4),
nn.Linear(1024, 2),
nn.LogSoftmax(dim=1))
AlexNetmodel
# These are the TRAINABLE parameters:
count_parameters(AlexNetmodel)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(AlexNetmodel.classifier.parameters(), lr=0.001)
import time
start_time = time.time()
epochs = 1
max_trn_batch = 800
max_tst_batch = 300
train_losses = []
test_losses = []
train_correct = []
test_correct = []
for i in range(epochs):
trn_corr = 0
tst_corr = 0
# Run the training batches
for b, (X_train, y_train) in enumerate(train_loader):
if b == max_trn_batch:
break
b+=1
# Apply the model
y_pred = AlexNetmodel(X_train)
loss = criterion(y_pred, y_train)
# Tally the number of correct predictions
predicted = torch.max(y_pred.data, 1)[1]
batch_corr = (predicted == y_train).sum()
trn_corr += batch_corr
# Update parameters
optimizer.zero_grad()
loss.backward()
optimizer.step()
# Print interim results
if b%200 == 0:
print(f'epoch: {i:2} batch: {b:4} [{10*b:6}/8000] loss: {loss.item():10.8f} \
accuracy: {trn_corr.item()*100/(10*b):7.3f}%')
        train_losses.append(loss.item())  # store the Python scalar, not the graph-attached tensor
train_correct.append(trn_corr)
# Run the testing batches
with torch.no_grad():
for b, (X_test, y_test) in enumerate(test_loader):
if b == max_tst_batch:
break
# Apply the model
y_val = AlexNetmodel(X_test)
# Tally the number of correct predictions
predicted = torch.max(y_val.data, 1)[1]
tst_corr += (predicted == y_test).sum()
loss = criterion(y_val, y_test)
            test_losses.append(loss.item())
test_correct.append(tst_corr)
print(f'\nDuration: {time.time() - start_time:.0f} seconds') # print the time elapsed
print(test_correct)
print(f'Test accuracy: {test_correct[-1].item()*100/3000:.3f}%')
x = 2019
im = inv_normalize(test_data[x][0])
plt.imshow(np.transpose(im.numpy(), (1, 2, 0)));
test_data[x][0].shape
# CNN Model Prediction:
CNNmodel.eval()
with torch.no_grad():
new_pred = CNNmodel(test_data[x][0].view(1,3,224,224)).argmax()
print(f'Predicted value: {new_pred.item()} {class_names[new_pred.item()]}')
# AlexNet Model Prediction:
AlexNetmodel.eval()
with torch.no_grad():
new_pred = AlexNetmodel(test_data[x][0].view(1,3,224,224)).argmax()
print(f'Predicted value: {new_pred.item()} {class_names[new_pred.item()]}')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hello, World
Step2: That works, but we can also first store the content in a variable, and then print out the variable.
Step3: Hello, Images
Step4: Next, let's locate a dataset to display. Start by going to the Earth Engine Public Data Catalog (https
Step5: Type in the term SRTM in the search box, click the search button, and then select the dataset SRTM Digital Elevation Data Version 4 from the list of results. This will bring up a data description page for the SRTM Digital Elevation Data 90m dataset. The data description page provides a short description of the dataset and links to the data provider, but the key piece of information that we need for working with the dataset in Earth Engine is the Image ID, which for this dataset is CGIAR/SRTM90_V4. Let's use the Image ID to store a reference to this image dataset
Step6: And now, we can print out information about the dataset, using the .getInfo() method.
Step7: What is returned by the .getInfo() command is a Python dictionary. If needed, we could parse out this information and make use of it in our analysis.
Step8: Ok, we can see the outlines of the continents, but there is not a lot of contrast between different elevation areas. So let's improve upon that by adding some visualization parameters.
Step9: By default, the .getThumbUrl() method returns the entire extent of the image, which in this case is global. We can also specify a region, to show a smaller area.
Step10: Load and Filter an Image Collection
Step11: Playing with Image Bands
Step12: And by changing the bands displayed, we can also display a false color image.
Step13: Play with Reducing Image Collections
Step14: ImageCollection.mosaic Reducer
Step15: ImageCollection.median Reducer
Step16: ImageCollection.min Reducer
Step17: ImageCollection.max Reducer
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
print("Hello, world!")
string = "Hello, world!"
print(string)
import ee
ee.Initialize()
Image('http://www.google.com/earth/outreach/images/tutorials_eeintro_05_data_catalog.png')
srtm = ee.Image("CGIAR/SRTM90_V4")
info = srtm.getInfo()
print(info)
from IPython.display import Image
Image(url=srtm.getThumbUrl())
Image(url=srtm.getThumbUrl({'min':0, 'max':3000}))
point = ee.Geometry.Point(-122.0918, 37.422)
region_bay_area = point.buffer(50000).bounds().getInfo()['coordinates']
Image(url=srtm.getThumbUrl({'min':0, 'max':1000, 'region':region_bay_area}))
# Create a reference to the image collection
l8 = ee.ImageCollection('LANDSAT/LC8_L1T_TOA')
# Filter the collection down to a two week period
filtered = l8.filterDate('2013-05-01', '2013-05-15');
# Use the mosaic reducer, to select the most recent pixel in areas of overlap
l8_image = filtered.mosaic()
# Define a region roughly covering California
point = ee.Geometry.Point(-118, 37)
region_california = point.buffer(500000).bounds().getInfo()['coordinates']
# And finally display the image.
Image(url=l8_image.getThumbUrl({'region':region_california}))
Image(url=l8_image.getThumbUrl({
'region':region_california,
'bands':'B4,B3,B2',
'min':0,
'max':0.3
}))
Image(url=l8_image.getThumbUrl({
'region':region_california,
'bands':'B5,B4,B3',
'min':0,
'max':0.3
}))
filtered = l8.filterDate('2013-01-01', '2014-01-01')
l8_image = filtered.mosaic()
Image(url=l8_image.getThumbUrl({
'region':region_california,
'bands':'B4,B3,B2',
'min':0,
'max':0.3
}))
l8_image = filtered.median()
Image(url=l8_image.getThumbUrl({
'region':region_california,
'bands':'B4,B3,B2',
'min':0,
'max':0.3
}))
l8_image = filtered.min()
Image(url=l8_image.getThumbUrl({
'region':region_california,
'bands':'B4,B3,B2',
'min':0,
'max':0.3
}))
l8_image = filtered.max()
Image(url=l8_image.getThumbUrl({
'region':region_california,
'bands':'B4,B3,B2',
'min':0,
'max':0.3
}))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Narrowband FM Demodulator
|
<ASSISTANT_TASK:>
Python Code:
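# Assumed imports (not shown in this excerpt): the sdr and ss helpers come from
# Mark Wickert's scikit-dsp-comm package, and psd/xlabel/ylim/mean were probably
# pulled in via %pylab inline in the original notebook.
from matplotlib.pylab import psd, xlabel, ylim, mean
from scipy import signal
from IPython.display import Audio
import sk_dsp_comm.sigsys as ss
import sk_dsp_comm.rtlsdr_helper as sdr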
# From the docstring
#x = sdr.capture(Tc, fo=88700000.0, fs=2400000.0, gain=40, device_index=0)
x = sdr.capture(Tc=5,fo=162.4e6,fs=2.4e6,gain=40,device_index=0)
sdr.complex2wav('capture_162475.wav',2400000,x)
fs, x = sdr.wav2complex('capture_162475.wav')
psd(x,2**10,2400);
def NBFM_demod(x,fs=2.4e6,file_name='test.wav',B1=50e3,N1=10,B2=5e3,N2=5):
    """Narrowband FM Demodulator"""
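    # Design the channel-select lowpass filter (cutoff B1 Hz) ahead of the first decimation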
b = signal.firwin(64,2*B1/float(fs))
# Filter and decimate (should be polyphase)
y = signal.lfilter(b,1,x)
z = ss.downsample(y,N1)
# Apply complex baseband discriminator
z_bb = sdr.discrim(z)
z_bb -= mean(z_bb)
# Design 2nd decimation lowpass filter
bb = signal.firwin(64,2*B2/(float(fs)/N1))
# Filter and decimate
zz_bb = signal.lfilter(bb,1,z_bb)
# Decimate by N2
z_out = ss.downsample(zz_bb,N2)
# Save to wave file
ss.to_wav(file_name, 48000, z_out)
print('Done!')
return z_bb, z_out
z_bb, z_demod = NBFM_demod(x,file_name='NOAA_cos_demod.wav')
psd(z_demod,2**10,2400/50);
xlabel(r'Frequency (kHz)');
ylim([-80,-20])
Audio('NOAA_cos_demod.wav')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2D trajectory interpolation
Step2: Use these arrays to create interpolated functions $x(t)$ and $y(t)$. Then use those functions to create the following arrays
Step3: Make a parametric plot of ${x(t),y(t)}$ that shows the interpolated values and the original points
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from scipy.interpolate import interp1d
traj = np.load('trajectory.npz')
x = traj['x']
y = traj['y']
t = traj['t']
assert isinstance(x, np.ndarray) and len(x)==40
assert isinstance(y, np.ndarray) and len(y)==40
assert isinstance(t, np.ndarray) and len(t)==40
newt = np.linspace(t[0], t[-1], 200)
cont_x = interp1d(t, x, kind='cubic')
cont_y = interp1d(t, y, kind='cubic')
newx = cont_x(newt)
newy = cont_y(newt)
assert newt[0]==t.min()
assert newt[-1]==t.max()
assert len(newt)==200
assert len(newx)==200
assert len(newy)==200
# parametric plot: y(t) against x(t), with the original samples overlaid
plt.plot(newx, newy, label='interpolated')
plt.plot(x, y, marker='o', linestyle='', label='original points')
plt.xlabel('x(t)')
plt.ylabel('y(t)')
plt.legend();
assert True # leave this to grade the trajectory plot
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Translating between currencies requires a number of different choices
Step2: Every translation is based on countries and years. By default, the Translator assumes you want the USD value of a currency in a year based on market exchange rates using GDP deflators.
Step3: You can further translate 20 of Austria's 2010 Euros into 2015 US Dollars as follows
Step4: Additional Options
Step5: Similarly, you can use Purchasing Power Parity rather than Market Exchange Rates
|
<ASSISTANT_TASK:>
Python Code:
from salamanca.currency import Translator
xltr = Translator()
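# 20 units of Austria's 2010 currency expressed in 2010 USD (the defaults: market exchange rates, GDP deflators)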
xltr.exchange(20, iso='AUT', yr=2010)
xltr.exchange(20, fromiso='AUT', toiso='USA', yr=2010) # equivalent to the above defaults
xltr.exchange(20, fromiso='AUT', toiso='USA',
fromyr=2010, toyr=2015)
xltr.exchange(20, fromiso='AUT', toiso='USA',
fromyr=2010, toyr=2015,
inflation_method='cpi')
xltr.exchange(20, fromiso='AUT', toiso='USA',
fromyr=2010, toyr=2015,
units='PPP')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Perhaps we should file a bug that SMARTS doesn't show stereochemistry here.
Step2: Make some example stereochemistries
Step3: Make RGroup decomposition!
Step4: We need to call Process after all molecules are added, to optimize the R-groups.
Step5: The RGroupDecomposition code is quite compatible with the python pandas integration.
Step6: The first two (0 and 1) are different due to the stereochemistry difference.
Step7: Let's try the same with stereochemistry in the core.
|
<ASSISTANT_TASK:>
Python Code:
from rdkit import Chem
from rdkit.Chem.Draw import IPythonConsole
IPythonConsole.ipython_useSVG=True
from rdkit.Chem import rdRGroupDecomposition
from IPython.display import HTML
from rdkit import rdBase
rdBase.DisableLog("rdApp.debug")
import pandas as pd
from rdkit.Chem import PandasTools
m = Chem.MolFromSmarts("C1CCO[C@@](*)(*)1")
m
el = "NOPS"
mols = []
for e in el:
smi = "C1CCO[C@@H](%s)1"%e
print(smi)
m = Chem.MolFromSmiles(smi)
mols.append(m)
smi = "C1CCO[C@H](%s)1"%e
print(smi)
m = Chem.MolFromSmiles(smi)
mols.append(m)
for e2 in el:
if e != e2:
smi = "C1CCO[C@@](%s)(%s)1"%(e,e2)
m = Chem.MolFromSmiles(smi)
if m:
print(smi)
mols.append(m)
from rdkit.Chem import Draw
Draw.MolsToGridImage(mols)
hmols = [Chem.AddHs(m) for m in mols]
Draw.MolsToGridImage(hmols)
core = Chem.MolFromSmarts("C1CCO[C@@](*)(*)1")
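# this core carries a chiral tag, so the decomposition can distinguish the two stereoisomers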
rgroups = rdRGroupDecomposition.RGroupDecomposition(core)
for i,m in enumerate(mols):
rgroups.Add(m)
if i == 10:
break
rgroups.Process()
groups = rgroups.GetRGroupsAsColumns()
frame = pd.DataFrame(groups)
PandasTools.ChangeMoleculeRendering(frame)
HTML(frame.to_html())
core = Chem.MolFromSmarts("C1CCOC1")
rgroups = rdRGroupDecomposition.RGroupDecomposition(core)
for m in mols:
rgroups.Add(m)
rgroups.Process()
frame = pd.DataFrame(rgroups.GetRGroupsAsColumns())
PandasTools.ChangeMoleculeRendering(frame)
HTML(frame.to_html())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: This gives us a nice way to move from our preference $x_i$ to a probability of switching styles. Here $\beta$ is inversely related to noise. For large $\beta$, the noise is small and we basically map $x > 0$ to a 100% probability of switching, and $x<0$ to a 0% probability of switching. As $\beta$ gets smaller, the probabilities get less and less distinct.
Step3: Now we'll create a function which will return an instance of the HipsterStep class with the appropriate settings
Step4: Exploring this data
Step5: To run the simulation for a number of steps, we execute sim.step(Nsteps), giving us a matrix of identities for each individual at each timestep.
Step6: Now we can simply go right ahead and visualize this data using an Image Element type, defining the dimensions and bounds of the space.
Step7: Now that you know how to run the simulation and access the data, have a go at exploring the effects of different parameters on the population dynamics or apply some custom analyses to this data. Here are two quick examples of what you can do
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import holoviews as hv
hv.notebook_extension(bokeh=True, width=90)
%%output backend='matplotlib'
%%opts NdOverlay [aspect=1.5 figure_size=200 legend_position='top_left']
x = np.linspace(-1, 1, 1000)
curves = hv.NdOverlay(key_dimensions=['$\\beta$'])
for beta in [0.1, 0.5, 1, 5]:
curves[beta] = hv.Curve(zip(x, 0.5 * (1 + np.tanh(beta * x))), kdims=['$x$'],
vdims=['$\\phi(x;\\beta)$'])
curves
class HipsterStep(object):
    """Class to implement hipster evolution
Parameters
----------
initial_style : length-N array
values > 0 indicate one style, while values <= 0 indicate the other.
is_hipster : length-N array
True or False, indicating whether each person is a hipster
influence_matrix : N x N array
Array of non-negative values. influence_matrix[i, j] indicates
how much influence person j has on person i
delay_matrix : N x N array
Array of positive integers. delay_matrix[i, j] indicates the
        number of days delay between person j's influence on person i.
    """
def __init__(self, initial_style, is_hipster,
influence_matrix, delay_matrix,
beta=1, rseed=None):
self.initial_style = initial_style
self.is_hipster = is_hipster
self.influence_matrix = influence_matrix
self.delay_matrix = delay_matrix
self.rng = np.random.RandomState(rseed)
self.beta = beta
# make s array consisting of -1 and 1
self.s = -1 + 2 * (np.atleast_2d(initial_style) > 0)
N = self.s.shape[1]
# make eps array consisting of -1 and 1
self.eps = -1 + 2 * (np.asarray(is_hipster) > 0)
# create influence_matrix and delay_matrix
self.J = np.asarray(influence_matrix, dtype=float)
self.tau = np.asarray(delay_matrix, dtype=int)
# validate all the inputs
assert self.s.ndim == 2
assert self.s.shape[1] == N
assert self.eps.shape == (N,)
assert self.J.shape == (N, N)
assert np.all(self.J >= 0)
assert np.all(self.tau > 0)
@staticmethod
def phi(x, beta):
return 0.5 * (1 + np.tanh(beta * x))
def step_once(self):
N = self.s.shape[1]
# iref[i, j] gives the index for the j^th individual's
# time-delayed influence on the i^th individual
iref = np.maximum(0, self.s.shape[0] - self.tau)
# sref[i, j] gives the previous state of the j^th individual
# which affects the current state of the i^th individual
sref = self.s[iref, np.arange(N)]
# m[i] is the mean of weighted influences of other individuals
m = (self.J * sref).sum(1) / self.J.sum(1)
# From m, we use the sigmoid function to compute a transition probability
transition_prob = self.phi(-self.eps * m * self.s[-1], beta=self.beta)
# Now choose steps stochastically based on this probability
new_s = np.where(transition_prob > self.rng.rand(N), -1, 1) * self.s[-1]
# Add this to the results, and return
self.s = np.vstack([self.s, new_s])
return self.s
def step(self, N):
for i in range(N):
self.step_once()
return self.s
def get_sim(Npeople=500, hipster_frac=0.8, initial_state_frac=0.5, delay=20, log10_beta=0.5, rseed=42):
rng = np.random.RandomState(rseed)
initial_state = (rng.rand(1, Npeople) > initial_state_frac)
is_hipster = (rng.rand(Npeople) > hipster_frac)
influence_matrix = abs(rng.randn(Npeople, Npeople))
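    # zero the diagonal so no individual influences themselves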
influence_matrix.flat[::Npeople + 1] = 0
delay_matrix = 1 + rng.poisson(delay, size=(Npeople, Npeople))
return HipsterStep(initial_state, is_hipster, influence_matrix, delay_matrix=delay_matrix,
beta=10 ** log10_beta, rseed=rseed)
sim = get_sim(hipster_frac=0.8)
result = sim.step(200)
result
%%opts Image [width=600]
hv.Image(result.T, bounds=(0, 0, 100, 500),
kdims=['Time', 'individual'], vdims=['State'])
%%opts Curve [width=350] Image [width=350]
hipster_frac = hv.HoloMap(kdims=['Hipster Fraction'])
for i in np.linspace(0.1, 1, 10):
sim = get_sim(hipster_frac=i)
hipster_frac[i] = hv.Image(sim.step(200).T, (0, 0, 500, 500), group='Population Dynamics',
kdims=['Time', 'individual'], vdims=['Bearded'])
(hipster_frac + hipster_frac.reduce(individual=np.mean).to.curve('Time', 'Bearded'))
%%opts Overlay [width=600] Curve (color='black')
aggregated = hipster_frac.table().aggregate(['Time', 'Hipster Fraction'], np.mean, np.std)
aggregated.to.curve('Time') * aggregated.to.errorbars('Time')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Attribute Information
Step2: Bivariate Analysis - Inflammation
Step3: The Logistic Regression Model for Inflammation
Step4: Bivariate Analysis - Nephritis
Step5: The Logistic Regression Model for Nephritis
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
%pylab inline
pylab.style.use('ggplot')
import seaborn as sns
data_df = pd.read_csv('diagnosis.csv', sep='\t', decimal=',', header=None)
data_df.head()
data_df.columns = ['temp', 'nausea', 'lumber_pain', 'urine_pushing', 'micturiation_pain',
'burning', 'inflammation', 'nephritis']
for c in data_df.columns[1:]:
data_df.loc[:, c] = data_df.loc[:, c].map(lambda v : 1.0 if v == 'yes' else 0.0)
data_df.head()
feature_df = data_df.loc[:, data_df.columns[:-2]]
target_df = data_df.loc[:, data_df.columns[-2:]]
feature_df.head()
target_df.head()
sns.lmplot(data=data_df, x='temp', hue='inflammation', y='inflammation', fit_reg=False)
for fname in feature_df.columns[1:]:
pylab.figure()
sns.countplot(y=fname, hue="inflammation", data=data_df)
import statsmodels.formula.api as sm
inflammation_model = sm.logit(
formula='inflammation ~ lumber_pain + urine_pushing + micturiation_pain',
data=data_df
)
inflammation_result = inflammation_model.fit(method='lbfgs')
inflammation_result.summary()
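# A small interpretive aside (an addition, not in the original notebook):
# exponentiating the logit coefficients gives odds ratios, which are easier
# to read than the raw coefficients in the summary above.
np.exp(inflammation_result.params)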
sns.lmplot(data=data_df, x='temp', hue='nephritis', y='nephritis', fit_reg=False)
for fname in feature_df.columns[1:]:
pylab.figure()
sns.countplot(y=fname, hue="nephritis", data=data_df)
# Fit a logit model (not OLS) so this is actually the logistic regression
# described above; lbfgs is used, as for the inflammation model.
nephritis_model = sm.logit(
formula='nephritis ~ temp + nausea + urine_pushing + lumber_pain + burning',
data=data_df
)
nephritis_result = nephritis_model.fit(method='lbfgs')
nephritis_result.summary()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Connect To Neo4j
Step2: Once we've instantiated our Driver, we can use Session objects to execute queries against Neo4j. Here we'll use session.run() to execute a Cypher query. Cypher is the query language for graphs that we use with Neo4j (you can think of Cypher as SQL for graphs).
Step4: If we inspect the datamodel in Neo4j we can see that we have information about Tweets and specifically Users mentioned in tweets.
Step6: Next, we need some relationships to visualize. In this case we are interested in visualizing user interactions, specifically where users have mentioned users in Tweets.
Step7: Now we can visualize this mentions network using Graphistry. We'll specify the nodes and relationships for our graph. We'll also use the troll property to color the known Troll nodes red, setting them apart from other users in the graph.
Step9: After running the above Python cell you should see an interactive Graphistry visualization like this
Step11: Now that we've calculated PageRank for each User node we need to create a new pandas DataFrame for our user nodes by querying Neo4j
|
<ASSISTANT_TASK:>
Python Code:
# import required dependencies
from neo4j.v1 import GraphDatabase, basic_auth
from pandas import DataFrame
import graphistry
# To specify Graphistry account & server, use:
# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
# For more options, see https://github.com/graphistry/pygraphistry#configure
# instantiate Neo4j driver instance
# be sure to replace the connection string and password with your own
driver = GraphDatabase.driver("bolt://34.201.165.36:34532", auth=basic_auth("neo4j", "capitals-quality-loads"))
# neo4j-driver hello world
# execute a simple query to count the number of nodes in the database and print the result
with driver.session() as session:
results = session.run("MATCH (a) RETURN COUNT(a) AS num")
for record in results:
print(record)
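# A minimal sketch (an addition, not from the original notebook) of a
# parameterized Cypher query; "example_user" is a placeholder value.
with driver.session() as session:
    results = session.run("MATCH (u:User) WHERE u.user_key = $name RETURN u.user_key AS name",
                          name="example_user")
    for record in results:
        print(record["name"])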
# Create User DataFrame by querying Neo4j, converting the results into a pandas DataFrame
with driver.session() as session:
results = session.run("""
MATCH (u:User)
WITH u.user_key AS screen_name, CASE WHEN "Troll" IN labels(u) THEN 5 ELSE 0 END AS troll
RETURN screen_name, troll
""")
users = DataFrame(results.data())
# show the first 5 rows of the DataFrame
users[:5]
# Query for tweets mentioning a user and create a DataFrame adjacency list using screen_name
# where u1 posted a tweet(s) that mentions u2
# num is the number of time u1 mentioned u2 in the dataset
with driver.session() as session:
results = session.run("""
MATCH (u1:User)-[:POSTED]->(:Tweet)-[:MENTIONS]->(u2:User)
RETURN u1.user_key AS u1, u2.user_key AS u2, COUNT(*) AS num
""")
mentions = DataFrame(results.data())
mentions[:5]
viz = graphistry.bind(source="u1", destination="u2", node="screen_name", point_color="troll").nodes(users).edges(mentions)
viz.plot()
# run PageRank on the projected mentions graph and update nodes by adding a pagerank property score
with driver.session() as session:
session.run("""
CALL algo.pageRank("MATCH (t:User) RETURN id(t) AS id",
"MATCH (u1:User)-[:POSTED]->(:Tweet)-[:MENTIONS]->(u2:User)
RETURN id(u1) as source, id(u2) as target", {graph:'cypher', write: true})
""")
# create a new users DataFrame, now including PageRank score for each user
with driver.session() as session:
results = session.run("""
MATCH (u:User)
WITH u.user_key AS screen_name, u.pagerank AS pagerank, CASE WHEN "Troll" IN labels(u) THEN 5 ELSE 0 END AS troll
RETURN screen_name, pagerank, troll
""")
users = DataFrame(results.data())
users[:5]
# render the Graphistry visualization, binding node size to PageRank score
viz = graphistry.bind(source="u1", destination="u2", node="screen_name", point_size="pagerank", point_color="troll").nodes(users).edges(mentions)
viz.plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Alternatively, the dtypes property returns the dtype of every column in the DataFrame
Step2: Data types tell us something about how pandas is storing the data internally. float64 means that it's using a 64-bit floating point number; int64 means a similarly sized integer instead, and so on.
Step3: A DataFrame or Series index has its own dtype, too
Step4: Pandas also supports more exotic data types, such as categorical data and timeseries data. Because these data types are more rarely used, we will omit them until a much later section of this tutorial.
Step5: Replacing missing values is a common operation. Pandas provides a really handy method for this problem
Step6: Or we could fill each missing value with the first non-null value that appears sometime after the given record in the database. This is known as the backfill strategy.
|
<ASSISTANT_TASK:>
Python Code:
#$HIDE_INPUT$
import pandas as pd
reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0)
pd.set_option('max_rows', 5)
reviews.price.dtype
reviews.dtypes
reviews.points.astype('float64')
reviews.index.dtype
reviews[pd.isnull(reviews.country)]
reviews.region_2.fillna("Unknown")
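# A sketch of the backfill strategy described above (an addition to the original
# code): fill each missing region_2 with the first non-null value after it.
reviews.region_2.fillna(method='bfill')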
reviews.taster_twitter_handle.replace("@kerinokeefe", "@kerino")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Playing with the number of hidden units
Step2: With random forest
|
<ASSISTANT_TASK:>
Python Code:
import os
os.environ['THEANO_FLAGS']='mode=FAST_COMPILE,optimizer=None,device=cpu,floatX=float32'
import numpy as np
import sklearn.cross_validation as skcv
#x = np.linspace(0, 5*np.pi, num=10000, dtype=np.float32)
x = np.linspace(0, 4*np.pi, num=10000, dtype=np.float32)
y = np.cos(x)
train, test = skcv.train_test_split(np.arange(x.shape[0]))
print train.shape
print test.shape
import pylab as pl
%matplotlib inline
pl.plot(x, y)
X_train = x[train].reshape(-1, 1)
y_train = y[train]
print "x_train : ", X_train.min(), X_train.max()
print X_train.shape
print "y_train : ", y_train.min(), y_train.max()
print y_train.shape
assert X_train.dtype == np.float32
assert y_train.dtype == np.float32
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
model = Sequential()
model.add(Dense(1, 4, init='lecun_uniform'))
model.add(Activation('tanh'))
model.add(Dense(4, 1, init='lecun_uniform'))
model.add(Activation('tanh'))
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='mean_squared_error', optimizer=sgd)
print model.get_weights()
history = model.fit(scaler.transform(X_train), y_train, nb_epoch=10, batch_size=64, shuffle=True)
y_pred = model.predict(scaler.transform(x.reshape(-1, 1)))
model.get_weights()
pl.plot(x, y_pred, c='r', label='y_pred')
pl.plot(x, y, c='b', label='y')
pl.legend()
def train_plot_prediction(n_hidden):
model = Sequential()
model.add(Dense(1, n_hidden, init='lecun_uniform'))
model.add(Activation('tanh'))
model.add(Dense(n_hidden, 1, init='lecun_uniform'))
model.add(Activation('tanh'))
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='mean_squared_error', optimizer=sgd)
history = model.fit(scaler.transform(X_train), y_train, nb_epoch=5, batch_size=64, shuffle=True,
verbose=False)
y_pred = model.predict(scaler.transform(x.reshape(-1, 1)))
pl.figure(figsize=(10, 4))
pl.subplot(211)
pl.title('train loss')
pl.plot(history.epoch, history.history['loss'], label='loss')
pl.subplot(212)
pl.title('prediction vs ground truth')
pl.plot(x, y_pred, c='r', label='y_pred')
pl.plot(x, y, c='b', label='y')
pl.legend()
pl.tight_layout()
train_plot_prediction(1)
train_plot_prediction(2)
train_plot_prediction(3)
train_plot_prediction(4)
train_plot_prediction(5)
train_plot_prediction(10)
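# A small quantitative companion to the plots above (an addition, not in the
# original notebook): mean-squared error on the held-out points as the hidden
# layer grows, using the same old-style Keras API as the rest of this notebook.
X_test = x[test].reshape(-1, 1)
y_test = y[test]
for n_hidden in (2, 4, 10):
    model = Sequential()
    model.add(Dense(1, n_hidden, init='lecun_uniform'))
    model.add(Activation('tanh'))
    model.add(Dense(n_hidden, 1, init='lecun_uniform'))
    model.add(Activation('tanh'))
    model.compile(loss='mean_squared_error',
                  optimizer=SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True))
    model.fit(scaler.transform(X_train), y_train, nb_epoch=5, batch_size=64,
              shuffle=True, verbose=False)
    mse = np.mean((model.predict(scaler.transform(X_test)).ravel() - y_test) ** 2)
    print("hidden units: {}, test MSE: {:.4f}".format(n_hidden, mse))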
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor(n_estimators=10, max_depth=10).fit(scaler.transform(X_train), y_train)
y_pred_rf = rf.predict(scaler.transform(x.reshape(-1, 1)))
pl.figure(figsize=(10, 4))
pl.plot(x, y_pred_rf, c='r', label='y_pred')
pl.plot(x, y, c='b', label='y')
pl.legend()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1
Step2: Step 2
Step3: We can check (as it obviously must) that this matches our numbers if Nobj equals the total number of objects in our DataFrame
Step4: Step 3 Now for each bin we might want to draw numbers from a normal distribution with size = number of objects in that bin
Step5: So, what we need to do is loop through the bins, and keep appending X to some list
Step6: XVals is a list of lists. The 0th element of XVals is a list which has all of the Xs sampled
Step7: And we can find their means and std deviations
Step8: Randomness and Reproducibility
Step9: The next 50 will be different
|
<ASSISTANT_TASK:>
Python Code:
# Code to generate the toy example (let us not worry how this code works)
nums = np.arange(1000, 6000, 1000) \
+ np.round(np.random.RandomState(0).normal(0., 200., size=5,)).astype(np.int)
df = pd.DataFrame(dict(Numbers=nums, meanX=np.power(nums, 0.5)/5.,
stdX=np.power(nums, 0.1)))
df
df['frequencies'] = df.Numbers / df.Numbers.sum()
df
numObjectsPerBin = np.round(df.frequencies * 50000).astype(np.int)
print(numObjectsPerBin)
np.round(df.frequencies * df.Numbers.sum()).astype(np.int)
m, s = df.ix[0, ['meanX', 'stdX']] # Now the mean of the 0th bin is assigned to m, and std to s
X = np.random.normal(m, s, size=numObjectsPerBin.ix[0] )
print(X)
XVals = []
for i in range(len(df)):
m, s = df.ix[i, ['meanX', 'stdX']]
# We will convert the numpy array to list, but that may not be necessary
X = np.random.normal(m, s, size=numObjectsPerBin.ix[i]).tolist()
XVals.append(X)
XVals
np.array(map(len, XVals)) / np.float(sum(map(len, XVals)))
totalobjs = sum(map(len, XVals))
map(np.mean, XVals)
map(np.std, XVals)
seed = 1
rng = np.random.RandomState(seed)
rng.normal(0, 1, size= 50)
rng.normal(0, 1, size=50)
# But if you want to reproduce the first 50, you can do so by using the same seed
rng = np.random.RandomState(seed)
rng.normal(0,1, size=60)
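# A small check (an addition, not in the original): re-seeding reproduces the
# first 50 draws exactly, even when more values are requested the second time.
r1 = np.random.RandomState(1).normal(0, 1, size=50)
r2 = np.random.RandomState(1).normal(0, 1, size=60)
print(np.allclose(r1, r2[:50]))  # True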
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: $\langle \rho_{diffuse, cosmic}\rangle$
Step2: $\langle n_{e,cosmic}\rangle$
Step3: $\langle DM_{cosmic}\rangle$
Step4: $\langle DM_{halos}\rangle$ and $\langle DM_{IGM}\rangle$
|
<ASSISTANT_TASK:>
Python Code:
# imports
from importlib import reload
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline as IUS
from astropy import units as u
from frb.halos.models import ModifiedNFW
from frb.halos import models as frb_halos
from frb.halos import hmf as frb_hmf
from frb.dm import igm as frb_igm
from frb.figures import utils as ff_utils
from matplotlib import pyplot as plt
plt.rcParams['font.size'] = 17
help(frb_igm.f_diffuse)
# Define redshifts
zvals = np.linspace(0, 8)
# Get <n_e>
f_diffuse, rho_diffuse = frb_igm.f_diffuse(zvals, return_rho = True)
# Plot
fig, axs = plt.subplots(2,1, sharex=True, figsize = (8,7))
fig.tight_layout()
ax1 = axs[0]
ax1.plot(zvals, f_diffuse, lw=2)
ax1.set_ylabel(r'$\langle f_{diffuse, cosmic}\rangle$')
ax2 = axs[1]
ax2.plot(zvals, rho_diffuse.to('Msun*Mpc**-3'), lw=2)
ax2.set_yscale("log")
ax2.set_xlabel('z')
ax2.set_ylabel(r'$\langle \rho_{diffuse, cosmic}\rangle$ $M_\odot~Mpc^{-3}$')
plt.show()
help(frb_igm.ne_cosmic)
# Define redshifts
zvals = np.linspace(0, 8)
# Get <n_e>
avg_ne = frb_igm.ne_cosmic(zvals)
# Visualize
fig = plt.figure(figsize = (10, 6))
plt.plot(zvals, avg_ne, label=r'$\langle n_{e, cosmic}\rangle$', lw=2)
plt.yscale("log")
plt.legend(loc = "upper left")
plt.xlabel('z')
plt.ylabel(r'$\langle n_{e, cosmic}\rangle$ [$cm^{-3}$]')
plt.show()
help(frb_igm.average_DM)
DM_cosmic, zvals = frb_igm.average_DM(8, cumul=True)
# Visualize
fig = plt.figure(figsize = (10, 6))
plt.plot(zvals, DM_cosmic, lw=2)
plt.xlabel('z')
plt.ylabel(r'$\langle DM_{cosmic}\rangle$ $pc~cm^{-3}$')
plt.show()
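# A quick aside (an addition, not in the original notebook): reading the curve
# off near z = 1 recovers the familiar ~10^3 pc/cm^3 scale of the Macquart relation.
print(DM_cosmic[np.argmin(np.abs(zvals - 1.0))])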
help(frb_igm.average_DMhalos)
# evaluation
frb_igm.average_DMhalos(0.1)
# get cumulative DM_halos
dm, zvals = frb_igm.average_DMhalos(0.1, cumul = True)
dm
zvals
fhot_array = [0.2, 0.5, 0.75]
rmax_array = [0.5, 1.0 , 2.0]
# <DM_halos> for different f_hot
fig, axs = plt.subplots(2,1, sharex=True, figsize = (8,7))
fig.tight_layout()
ax1 = axs[0]
for f_hot in fhot_array:
DM_halos, zeval = frb_igm.average_DMhalos(3, f_hot = f_hot, cumul=True)
ax1.plot(zeval, DM_halos, label="{:0.1f}".format(f_hot))
ax1.legend(title="f_hot")
ax1.set_ylabel(r'$\langle DM_{halos}\rangle$ $pc~cm^{-3}$')
# <DM_halos> for different rmax
ax2 = axs[1]
for rmax in rmax_array:
DM_halos, zeval = frb_igm.average_DMhalos(3, rmax = rmax, cumul = True)
ax2.plot(zeval, DM_halos, label="{:0.1f}".format(rmax))
ax2.legend(title="rmax")
ax2.set_xlabel('z')
ax2.set_ylabel(r'$\langle DM_{halos}\rangle$ $pc~cm^{-3}$')
plt.show()
# Limits of calculation
frb_igm.average_DMhalos(3.1)
# Failure above redshift 5
frb_igm.average_DMhalos(5.1)
help(frb_igm.average_DMIGM)
# Sanity check. <DM_cosmic> - (<DM_halos> + <DM_IGM>) = 0
dm, zvals = frb_igm.average_DM(0.1, cumul= True)
dm_halos, _ = frb_igm.average_DMhalos(0.1, cumul = True)
dm_igm, _ = frb_igm.average_DMIGM(0.1, cumul = True)
plt.plot(zvals, dm - dm_halos - dm_igm)
plt.ylabel(r"DM $pc~cm^{-3}$")
plt.xlabel("z")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Representing Numbers (5 Points)
Step2: 4. Lists and Slicing (11 Points)
Step3: 5. Numpy (12 Points)
Step4: 6. Plotting (16 Points)
|
<ASSISTANT_TASK:>
Python Code:
#1.1
import math
print('{:.4}'.format(math.pi))
#1.2 - no for loop
print('{:5.4}'.format(math.sqrt(1)))
print('{:5.4}'.format(math.sqrt(2)))
print('{:5.4}'.format(math.sqrt(3)))
print('{:5.4}'.format(math.sqrt(4)))
print('{:5.4}'.format(math.sqrt(5)))
#1.2 - for loop
for i in range(1, 6):
print('{:5.4}'.format(math.sqrt(i)))
print('{0:} {0:}'.format('test'))
#3.1
z = -27
if z > 10:
print('greater than 10')
#3.2
if z < -25 and z > -35:
print('special')
#3.3
12 % 5
#3.4
if z % 2 == 0:
print('even')
#3.5
if z % 2 == 0:
print('even')
else:
print('odd')
if z > 0:
print ('positive')
elif z < 0:
print('negative')
if abs(z) > 10:
print('greater in magnitude than 10')
#4.1
x = list(range(2,20,2))
x
#4.2
x[-3]
#4.3
x[:3]
#4.4
x = list(range(1,11))
x[-1] = 100
x
#4.5
x = list(range(1,5))
for thing in x:
print(2**thing)
#5.1
import numpy as np
np.arange(1, 10, 0.25)
#5.2
np.linspace(1,9.75,36)
#5.3
np.arange(0,1.05, 0.05)
#5.4
np.linspace(0,1,21)
#5.5
x = np.arange(0,8)
print(3**x)
%matplotlib inline
import matplotlib.pyplot as plt
#6.1
x = np.linspace(-np.pi, 0, 100)
y = np.cos(x)
plt.plot(x,y)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
#6.2
x = np.linspace(-2, 2, 100)
y = np.exp(-x**2)
plt.plot(x,y)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
#6.3
plt.style.use('seaborn-darkgrid')
plt.figure(figsize=(4,3))
x = np.linspace(-1, 1, 100)
y1 = x
y2 = x**2
y3 = x**3
plt.plot(x,y1, label='x')
plt.plot(x,y2, label='x^2')
plt.plot(x,y3, label='x^3')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.title('Nice Title')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We can see what values are stored in each attribute like this
Step2: Methods
Step3: To use a method you need to add () to the end of the method
Step4: Real data
Step5: One of the nice things about creating a Class is that all of the methods within the Class know about each other.
Step6: As you modify your Class all of the methods within the Class know about the modifications.
Step12: Documentation
Step13: Now you can see the documentation on the module within the Notebooks by typing
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from astropy import units as u
class SpaceRock(object):
def __init__(self, name=None, ab_mag=None, albedo=None):
self.name = name
self.ab_mag = ab_mag
self.albedo = albedo
# Create some fake data:
my_name = "Geralt of Rivia"
my_ab_mag = 5.13
my_albedo = 0.131
# Initialize a SpaceRock object:
new_asteroid = SpaceRock(name=my_name, ab_mag=my_ab_mag, albedo=my_albedo)
new_asteroid.name
new_asteroid.albedo
class SpaceRock(object):
def __init__(self, name=None, ab_mag=None, albedo=None):
self.name = name
self.ab_mag = ab_mag
self.albedo = albedo
def diameter(self):
result = (1329.0 / np.sqrt(self.albedo)) * (10 ** (-0.2 * self.ab_mag))
return result * u.km
new_asteroid = SpaceRock(name=my_name, ab_mag=my_ab_mag, albedo=my_albedo)
new_asteroid.diameter()
from astropy.table import QTable
rock_table = QTable.read('MainBelt_small.csv', format='ascii.csv')
print(rock_table)
my_name = rock_table['Name']
my_ab_mag = rock_table['H']
my_albedo = rock_table['A']
rocks = SpaceRock(name=my_name, ab_mag=my_ab_mag, albedo=my_albedo)
rocks.diameter()
class SpaceRock(object):
def __init__(self, name=None, ab_mag=None, albedo=None):
self.name = name
self.ab_mag = ab_mag
self.albedo = albedo
def diameter(self):
result = (1329.0 / np.sqrt(self.albedo)) * (10 ** (-0.2 * self.ab_mag))
return result * u.km
def two_diameter(self):
result = 2.0 * self.diameter()
return result
rocks = SpaceRock(name=my_name, ab_mag=my_ab_mag, albedo=my_albedo)
rocks.diameter()
rocks.two_diameter()
class SpaceRock(object):
def __init__(self, name = None, ab_mag = None, albedo = None,
semi_major= None, ecc = None):
self.name = name
self.ab_mag = ab_mag
self.albedo = albedo
self.semi_major = semi_major
self.ecc = ecc
def diameter(self):
result = (1329.0 / np.sqrt(self.albedo)) * (10 ** (-0.2 * self.ab_mag))
return result * u.km
def two_diameter(self):
result = 2.0 * self.diameter()
return result
def find_perihelion(self):
result = self.semi_major * ( 1.0 - self.ecc )
return result * u.AU
my_name = rock_table['Name']
my_ab_mag = rock_table['H']
my_albedo = rock_table['A']
my_semi_major = rock_table['a']
my_ecc = rock_table['ecc']
more_rocks = (SpaceRock(name=my_name, ecc = my_ecc, semi_major=my_semi_major,
ab_mag=my_ab_mag, albedo=my_albedo))
more_rocks.diameter()
more_rocks.find_perihelion()
for idx,value in enumerate(more_rocks.find_perihelion()):
rock_name = more_rocks.name[idx]
my_string = "The Asteroid {0} has a perihelion distance of {1:.2f}".format(rock_name, value)
print(my_string)
class SpaceRock(object):
"""Container for Asteroids"""
def __init__(self, name = None, ab_mag = None, albedo = None,
semi_major= None, ecc = None):
"""
Parameters
----------
name : string
Name of the target
ab_mag : array-like
Absolute Magnitude of each Asteroid
albedo : array-like
Albedo of each Asteroid
semi_major : array-like
Semi Major Axis of each Asteroid in AU
ecc : array-like
Eccentricity of each Asteroid
"""
self.name = name
self.ab_mag = ab_mag
self.albedo = albedo
self.semi_major = semi_major
self.ecc = ecc
def diameter(self):
"""Determine the diameter (in km) of the Asteroids"""
result = (1329.0 / np.sqrt(self.albedo)) * (10 ** (-0.2 * self.ab_mag))
return result * u.km
def two_diameter(self):
"""Determine twice the diameter (in km) of the Asteroids"""
result = 2.0 * self.diameter()
return result
def find_perihelion(self):
"""Determine the perihelion distance of the Asteroids in AU"""
result = self.semi_major * ( 1.0 - self.ecc )
return result * u.AU
rocks = (SpaceRock(name=my_name, ecc = my_ecc, semi_major=my_semi_major,
ab_mag=my_ab_mag, albedo=my_albedo))
rocks?
rocks.diameter?
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generating Fractal From Random Points - The Chaos Game
Step2: Make A Fractal
Step3: Regular Polygons
Step4: Exploring Further
Step5: Randomness on Large Scales
Step6: Learn More
Step7: For Barnsley's Fern
|
<ASSISTANT_TASK:>
Python Code:
import pickle,glob
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%pylab inline
def placeStartpoint(npts,fixedpts):
#Start Point
#start = (0.5,0.5)
start = (np.random.random(),np.random.random())
if fixedpts == []: #generates a set of random vertices
for i in range(npts):
randx = np.random.random()
randy = np.random.random()
point = (randx,randy)
fixedpts.append(point)
return (start,fixedpts)
def choosePts(npts,fixedpts,frac):
#chooses a vertex at random
#further rules could be applied here
roll = floor(npts*np.random.random())
point = fixedpts[int(roll)]
return point
def placeItteratePts(npts,itt,start,fixedpts,frac):
ittpts = []
for i in range(itt):
point = choosePts(npts,fixedpts,frac) #chooses a vertex at random
# halfway = ((point[0]+start[0])*frac,(point[1]+start[1])*frac) #calculates the halfway point between the starting point and the vertex
halfway = ((point[0]-start[0])*(1.0 - frac)+start[0],(point[1]-start[1])*(1.0 - frac)+start[1])
ittpts.append(halfway)
start = halfway #sets the starting point to the new point
return ittpts
def plotFractal(start,fixedpts,ittpts):
# set axes range
plt.xlim(-0.05,1.05)
plt.ylim(-0.05,1.05)
plt.axes().set_aspect('equal')
#plots the vertices
plt.scatter(transpose(fixedpts)[0],transpose(fixedpts)[1],alpha=0.8, c='black', edgecolors='none', s=30)
#plots the starting point
plt.scatter(start[0],start[1],alpha=0.8, c='red', edgecolors='none', s=30)
#plots the iterated points
plt.scatter(transpose(ittpts)[0],transpose(ittpts)[1],alpha=0.5, c='blue', edgecolors='none', s=2)
return
def GenerateFractal(npts,frac,itt,reg=False):
#Error Control
if npts < 1 or frac >= 1.0 or frac <= 0.0 or type(npts) is not int or type(frac) is not float or type(itt) is not int:
print "number of points must be a positive integer, compression fraction must be a positive float less than 1.0, itt must be a positive integer"
return
if frac > 0.5:
print "Warning: compression fractions over 1/2 do not lead to fractals"
#Initialize Vertices
if not reg:
fixedpts = [] #Random Vertices
else:
if npts == 3:
fixedpts = [(0.0,0.0),(1.0,0.0),(0.5,0.5*sqrt(3.0))] #Equilateral Triangle (npts = 3)
elif npts == 4:
fixedpts = [(0.0,0.0),(1.0,0.0),(1.0,1.0),(0.0,1.0)] #Square
elif npts == 5:
fixedpts = [(0.0,2./(1+sqrt(5.))),(0.5-2./(5+sqrt(5.)),0.0),(0.5,1.0),(0.5+2./(5+sqrt(5.)),0.0),(1.0,2./(1+sqrt(5.)))] #Regular Pentagon
elif npts == 6:
fixedpts = [(0.0,0.5),(1./4,0.5+.25*sqrt(3.)),(3./4,0.5+.25*sqrt(3.)),(1.0,0.5),(3./4,0.5-.25*sqrt(3.)),(1./4,0.5-.25*sqrt(3.))] #Regular Hexagon
elif npts == 2:
fixedpts = [(0.0,0.0),(1.0,1.0)] #Line
elif npts == 1:
fixedpts = [(0.5,0.5)] #Line
elif npts == 8:
fixedpts = [(0.0,0.0),(1.0,0.0),(1.0,1.0),(0.0,1.0),(0.5,0.0),(1.0,0.5),(0.5,1.0),(0.0,0.5)] #Carpet
else:
print "No regular polygon stored with that many verticies, switching to default with randomly assigned verticies"
fixedpts = [] #Random Vertices
#Compression Fraction
# frac = 1.0/2.0 #Sierpinski's Triangle (npts = 3)
# frac = 1.0/2.0 #Sierpinski's "Square" (filled square, npts = 4)
# frac = 1.0/3.0 #Sierpinski's Pentagon (npts = 5)
# frac = 3.0/8.0 #Sierpinski's Hexagon (npts = 6)
if len(fixedpts) != npts and len(fixedpts) != 0:
print "The number of vertices doesn't match the length of the list of vertices. If you want the vertices generated at random, set fixedpts to []"
return
if len(fixedpts) != 0:
print "Fractal Dimension = {}".format(-log(npts)/log(frac))
(start, fixedpts) = placeStartpoint(npts,fixedpts)
ittpts = placeItteratePts(npts,itt,start,fixedpts,frac)
plotFractal(start,fixedpts,ittpts)
return
# Call the GenerateFractal function with a number of vertices, a number of iterations, and the compression fraction
# The starting vertices are random by default. An optional input of True will set the vertices to those of a regular polygon.
GenerateFractal(3,.5,5000)
GenerateFractal(3,.5,50000,True)
GenerateFractal(5,1./3,50000,True)
GenerateFractal(6,3./8,50000,True)
GenerateFractal(8,1./3.,50000,True)
GenerateFractal(1,.5,50000,True)
GenerateFractal(2,.5,50000,True)
GenerateFractal(4,.5,50000,True)
GenerateFractal(10,.5,100)
GenerateFractal(10,.5,5000)
GenerateFractal(100,.5,5000)
GenerateFractal(100,.5,100000)
def makeFern(f,itt):
colname = ["percent","a","b","c","d","e","f"]
print pd.DataFrame(data=np.array(f), columns = colname)
x, y = 0.5, 0.0 # starting point (a tuple, so the assignment order is deterministic)
xypts=[]
if abs(sum(f[j][0] for j in range(len(f)))-1.0) > 1e-10: # reject probability vectors that do not sum to 1
print "Probabilities must sum to 1"
return
for i in range(itt):
rand = (np.random.random())
cond = 0.0
for j in range(len(f)):
if (cond <= rand) and (rand <= (cond+f[j][0])):
# update x and y simultaneously so the new y is computed from the old x
x, y = f[j][1]*x+f[j][2]*y+f[j][5], f[j][3]*x+f[j][4]*y+f[j][6]
xypts.append((x,y))
cond = cond + f[j][0]
xmax,ymax = max(abs(transpose(xypts)[0])),max(abs(transpose(xypts)[1]))
plt.axes().set_aspect('equal')
color = transpose([[abs(r)/xmax for r in transpose(xypts)[0]],[abs(g)/ymax for g in transpose(xypts)[1]],[float(b)/itt for b in range(itt)]]) # float() keeps the blue channel from truncating to 0 under integer division
plt.scatter(transpose(xypts)[0],transpose(xypts)[1],alpha=0.5, facecolors=color, edgecolors='none', s=1)
f = ((0.01,0.0,0.0,0.0,0.16,0.0,0.0),
(0.85,0.85,0.08,-0.08,0.85,0.0,1.60),
(0.07,0.20,-0.26,0.23,0.22,0.0,1.60),
(0.07,-0.15,0.28,0.26,0.24,0.0,0.44))
makeFern(f,5000)
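# The same chaos-game engine draws other IFS attractors. A small extra example
# (these coefficients are the standard Sierpinski-triangle IFS, not part of the
# original notebook):
sierpinski = ((1./3, 0.5, 0.0, 0.0, 0.5, 0.0, 0.0),
              (1./3, 0.5, 0.0, 0.0, 0.5, 0.5, 0.0),
              (1./3, 0.5, 0.0, 0.0, 0.5, 0.25, 0.5))
makeFern(sierpinski, 5000)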
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Aogaeru load
Step2: LINUX_VMSTAT
Step3: convert the dataframe data into Json format
Step4: TODO
Step5: Access logs
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
from korg import korg
from korg.pattern import PatternRepo
import tarfile
from loganalyser import plot
pr = PatternRepo(['./korg_patterns/'], False)
lg = korg.LineGrokker('%{AOGAERU_LOAD}', pr)
df = pd.DataFrame()
# now grok the aogaeru load log
with open("./data/results.csv") as infile:
for line in infile:
#print line
#print lg.grok(line)
df = df.append(lg.grok(line), ignore_index=True)
df.timestamp = pd.to_datetime(df.timestamp) # convert to timestamp
df.head()
df.dtypes
pr = PatternRepo(['./korg_patterns/'], False)
lg = korg.LineGrokker('%{LINUX_VMSTAT}', pr)
vm = pd.DataFrame()
with tarfile.open('data/supercars-oscounters-13102912.tgz', mode="r:gz") as tar:
#infile = tar.extractfile('./vmstat.txt.%s' % testrun)
infile = tar.extractfile('./vmstat.txt.13102912')
# read first line start and interval
# 20131029 124500 interval 5 sec
for line in infile:
#print line
#print lg.grok(line)
vm = vm.append(lg.grok(line), ignore_index=True)
#vm.timestamp = pd.to_datetime(vm.timestamp) # convert to timestamp
# tz is timezone of data
rng = pd.date_range('10/29/2013 12:45:00', periods=400, freq='5S', tz="Europe/Berlin")
#rng = pd.date_range('10/29/2013 12:45:00', periods=400, freq='5S')
cpu = pd.DataFrame()
cpu['idle'] = vm['id'].astype(int)
cpu['system'] = vm['sy'].astype(int)
cpu['user'] = vm['us'].astype(int)
cpu['wait'] = vm['wa'].astype(int)
# we do not use index since we "to_json" next
#cpu['timestamp'] = rng
#cpu.set_index(['timestamp'], inplace=True)
#cpu.index = pd.to_datetime(rng, utc=True)
cpu['timestamp'] = pd.to_datetime(rng, utc=True) # timestamp in UTC !
cpu.head()
cpu.dtypes
#print cpu.to_csv(orient='records')
print cpu.to_json(orient='records', date_format='epoch')
#HTML(plot.set_styles(['stacked',]))
plot.set_styles(['stacked',])
#HTML(plot.draw_graph('stacked', {'data': cpu.to_json(orient='records')}))
plot.draw_graph('stacked', cpu.to_json(orient='records'))
pr = PatternRepo(['./korg_patterns/'], True)
lg = korg.LineGrokker('%{COMMONAPACHELOG}', pr)
req = pd.DataFrame()
# now grok the apache access log
with open("./data/access_log2") as infile:
for line in infile:
req = req.append(lg.grok(line), ignore_index=True)
req.head()
req.timestamp = pd.to_datetime(req.timestamp) # convert to timestamp
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pseudospectral collocation methods
Step2: Notice that $F$ is symmetric; this is true for any value of $m$. Also, $F$ is essentially unitary -- that is, it is possible to choose a normalization such that $F$ is unitary, but the normalization in common use means that $FF^* = mI$ (and $F^{-1}(F^{-1})^* = m^{-1} I$).
Step3: Variable coefficients
Step4: We see that the eigenvalues of $M$ are purely imaginary, regardless of the choice of $a(x)$. This is not surprising, since $M$ is the product of a real diagonal matrix $A$ and a matrix that is similar to $D$.
Step5: Exercise
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
from matplotlib import animation
from IPython.display import HTML
font = {'size' : 15}
matplotlib.rc('font', **font)
def F_matrix(m):
F = np.zeros((m,m),dtype=complex)
for j in range(m):
v = np.zeros(m)
v[j] = 1.
F[:,j] = np.fft.fft(v)
return F
print(F_matrix(4))
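# A quick numerical check (an addition to the original code) of the symmetry
# and near-unitarity properties noted in the text: F = F^T and F F^* = m I.
Fc = F_matrix(4)
print(np.allclose(Fc, Fc.T))                                # symmetric
print(np.allclose(np.dot(Fc, Fc.conj().T), 4 * np.eye(4)))  # F F^* = m I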
# Spatial grid
m=64 # Number of grid points in space
L = 2 * np.pi # Width of spatial domain
x = np.arange(-m/2,m/2)*(L/m) # Grid points
dx = x[1]-x[0] # Grid spacing
# Temporal grid
tmax=4.0 # Final time
N = 25 # number grid points in time
k = tmax/N # interval between output times
xi = np.fft.fftfreq(m)*m*2*np.pi/L # Wavenumber "grid"
# (this is the order in which numpy's FFT gives the frequencies)
# Initial data
u = np.sin(2*x)**2 * (x<-L/4)
uhat0 = np.fft.fft(u)
epsilon=0.01 # Diffusion coefficient
a = 1.0 # Advection coefficient
# Store solutions in a list for plotting later
frames = [u.copy()]
# Now we solve the problem
for n in range(1,N+1):
t = n*k
uhat = np.exp(-(1.j*xi*a + epsilon*xi**2)*t) * uhat0
u = np.real(np.fft.ifft(uhat))
frames.append(u.copy())
# Set up plotting
fig = plt.figure(figsize=(9,4)); axes = fig.add_subplot(111)
line, = axes.plot([],[],lw=3)
axes.set_xlim((x[0],x[-1])); axes.set_ylim((0.,1.))
plt.close()
def plot_frame(i):
line.set_data(x,frames[i])
axes.set_title('t='+str(i*k))
# Animate the solution
anim = matplotlib.animation.FuncAnimation(fig, plot_frame,
frames=len(frames),
interval=200)
HTML(anim.to_jshtml())
m = 32
F = F_matrix(m)
Finv = np.linalg.inv(F)
L = 2 * np.pi
xi = np.fft.fftfreq(m)*m*2*np.pi/L
D = np.diag(1.j*xi)
x = np.arange(-m/2,m/2)*(L/m)
M = np.dot(Finv,np.dot(D,F))
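# Sanity checks (an addition, not in the original notebook): the spectral
# differentiation matrix M is real up to roundoff, and its eigenvalues are
# purely imaginary, consistent with the discussion in the text.
print(np.max(np.abs(M.imag)))                     # ~1e-15
print(np.max(np.abs(np.linalg.eigvals(M).real)))  # ~roundoff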
def plot_spectrum(a,m=64):
F = F_matrix(m)
Finv = np.linalg.inv(F)
L = 2 * np.pi
xi = np.fft.fftfreq(m)*m*2*np.pi/L
D = np.diag(1.j*xi)
x = np.arange(-m/2,m/2)*(L/m)
A = np.diag(a(x))
M = -np.dot(A,np.dot(Finv,np.dot(D,F)))
lamda = np.linalg.eigvals(M)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8,4),
gridspec_kw={'width_ratios': [3, 1]})
ax1.plot(x,a(x)); ax1.set_xlim(x[0],x[-1])
ax1.set_xlabel('$x$'); ax1.set_ylabel('$a(x)$')
ax2.plot(np.real(lamda),np.imag(lamda),'ko')
ax2.axis('equal')
ax2.set_xlabel('$Re(\lambda)$')
ax2.set_ylabel('$Im(\lambda)$')
plt.tight_layout()
a = lambda x : np.ones(len(x))
plot_spectrum(a,m=32)
a = lambda x : 2 + np.sin(x)
plot_spectrum(a)
#a = lambda x : np.ones(len(x))
a = lambda x : 2 + np.sin(x)
m = 64
F = F_matrix(m)
Finv = np.linalg.inv(F)
L = 2 * np.pi
xi = np.fft.fftfreq(m)*m*2*np.pi/L
D = np.diag(1.j*xi)
x = np.arange(-m/2,m/2)*(L/m)
A = np.diag(a(x))
M = -np.dot(A,np.dot(Finv,np.dot(D,F)))
# Initial data
u = np.sin(2*x)**2 * (x<-L/4)
dx = x[1]-x[0]
dt = 2.0/m/np.max(np.abs(a(x)))/2.
T = 10.
N = int(np.round(T/dt))
frames = [u.copy()]
skip = N//100
# Start with an explicit Euler step
u_new = u + dt*np.dot(M,u)
# Now we solve the problem
for n in range(1,N+1):
t = n*dt
u_old = u.copy()
u = u_new.copy()
u_new = u_old + 2*dt*np.dot(M,u)
if ((n % skip) == 0):
frames.append(u_new.copy())
# Set up plotting
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8,8), sharex=True)
line1, = ax1.plot([],[],lw=3)
ax1.set_xlim((x[0],x[-1])); ax1.set_ylim((-0.1,1.1))
ax2.plot(x,a(x),lw=3); ax2.set_ylim(0,3.1)
plt.close()
def plot_frame(i):
line1.set_data(x,frames[i])
ax1.set_title('t='+str(i*skip*dt))
# Animate the solution
anim = matplotlib.animation.FuncAnimation(fig, plot_frame,
frames=len(frames),
interval=200)
HTML(anim.to_jshtml())
m = 64
F = F_matrix(m)
Finv = np.linalg.inv(F)
L = 2 * np.pi
xi = np.fft.fftfreq(m)*m*2*np.pi/L
D = np.diag(1.j*xi)
x = np.arange(-m/2,m/2)*(L/m)
# Initial data
u = np.sin(2*x)**2 * (x<-L/4) + 1.
dx = x[1]-x[0]
dt = 2.0/m/np.max(np.abs(a(x)))/2.
T = 5.
N = int(np.round(T/dt))
frames = [u.copy()]
skip = N//100
# Start with an explicit Euler step
A = np.diag(u)
M = -np.dot(A,np.dot(Finv,np.dot(D,F)))
u_new = u + dt*np.dot(M,u)
# Now we solve the problem
for n in range(1,N+1):
t = n*dt
u_old = u.copy()
u = u_new.copy()
A = np.diag(u)
M = -np.dot(A,np.dot(Finv,np.dot(D,F)))
u_new = u_old + 2*dt*np.dot(M,u)
if ((n % skip) == 0):
frames.append(u_new.copy())
# Set up plotting
fig, ax1 = plt.subplots(1, 1, figsize=(8,4))
line1, = ax1.plot([],[],lw=3)
ax1.set_xlim((x[0],x[-1])); ax1.set_ylim((0.,2.1))
plt.close()
def plot_frame(i):
line1.set_data(x,frames[i])
ax1.set_title('t='+str(i*skip*dt))
# Animate the solution
anim = matplotlib.animation.FuncAnimation(fig, plot_frame,
frames=len(frames),
interval=200)
HTML(anim.to_jshtml())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Print the value of the variable in markdown using {{a}} (e.g. via the Python-Markdown notebook extension)
Step2: Matplotlib notebook<a id='matnb'></a>
Step3: Animated plots with HTML<a id='ani'></a>
|
<ASSISTANT_TASK:>
Python Code:
a=3.765
import numpy as np
import elephant
import neo
import quantities as pq
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from IPython.display import HTML
import matplotlib.animation as animation
# Load block from ANDA data
block = np.load('/home/papen/git_repos/ANDA2017/data/data2.npy').item()
# Get spiketrain of first regular trial
sts = []
idx = block.annotations['all_trial_ids']
sts.append(block.filter(targdict={'trial_id': idx[0]}, objects=neo.Segment)[0].spiketrains)
# Generate binned time series of spike counts and apply PCA
binsize = 100*pq.ms
binned = elephant.conversion.BinnedSpikeTrain(sts[0], binsize=binsize).to_array()
pca = PCA(n_components=3)
pca.fit(binned.T)
PC = np.matmul(pca.components_,binned)[:3,:]
t = np.arange(len(PC[0,:]))*binsize
%matplotlib notebook
fig = plt.figure(figsize=(10,5))
ax = fig.add_subplot(121, projection='3d')
ax.plot(PC[0,:], PC[1,:], PC[2,:], '-k')
ax.set_xlabel('Comp. 1')
ax.set_ylabel('Comp. 2')
ax.set_zlabel('Comp. 3')
ax = fig.add_subplot(122)
for i in xrange(len(PC[:,0])):
ax.plot(t, PC[i,:], '-', label='PC {}'.format(i))
ax.set_xlabel('time [ms]')
ax.set_ylabel('PC')
plt.legend()
%matplotlib inline
def test(binned):
fig = plt.figure()
ax = fig.add_subplot(111)
line, = ax.plot(t, binned[0,:])
def animate(i, title_num=None):
line.set_ydata(binned[i,:]) # update the data
ax = plt.gca()
ax.set_title('Spike counts of unit {}'.format(i))
return line,
# Init only required for blitting to give a clean slate.
def init():
line.set_ydata(np.full(binned.shape[1], np.nan))  # NaN placeholder keeps x and y lengths equal for the blit base frame
return line,
ani = animation.FuncAnimation(fig, animate, np.arange(1, len(binned[:,0])), init_func=init,
interval=500, blit=True)
return ani, ax
ani, ax = test(binned)
ax.set_xlabel('time [ms]')
ax.set_ylabel('spike count')
HTML(ani.to_html5_video())
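# An aside (an addition; assumes ffmpeg is available, which to_html5_video()
# above also requires): the animation can be saved to disk instead of embedded.
# ani.save('spike_counts.mp4', fps=2)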
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Binary thresholding
Step2:
Step3: Multi-valued thresholds
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import entrofy
import matplotlib.pyplot as plt
%matplotlib nbagg
df = pd.read_csv('/home/bmcfee/data/vggish-likelihoods-a226b3-maxagg10.csv.gz', index_col=0)
df.head(5)
(df >= 0.5).describe().T.sort_values('freq')
df.median()
N_OUT = 23 * 100
mappers = {col: entrofy.mappers.ContinuousMapper(df[col],
prefix=col,
n_out=2,
boundaries=[0.0, 0.5, 1.0]) for col in df}
idx, score = entrofy.entrofy(df, N_OUT, mappers=mappers,
seed=20180205,
quantile=0.05,
n_trials=10)
df.loc[idx].head(10)
(df.loc[idx] >= 0.5).describe().T.sort_values('freq')
!pwd
idx.to_series().to_json('subsample_idx.json')
mappers = {col: entrofy.mappers.ContinuousMapper(df[col], n_out=4,
boundaries=[0.0, 0.25, 0.5, 0.75, 1.0]) for col in df}
idx, score = entrofy.entrofy(df, 1000, mappers=mappers, n_trials=100)
<END_TASK>
|