3,600
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import numpy as np
import random
%matplotlib inline
import matplotlib.pyplot as plt
n_features = 2
def get_data():
data_a = np.random.rand(10, n_features) + 1
data_b = np.random.rand(10, n_features)
plt.scatter(data_a[:, 0], data_a[:, 1], c='r', marker='x')
plt.scatter(data_b[:, 0], data_b[:, 1], c='g', marker='o')
plt.show()
return data_a, data_b
def get_data2():
data_a = np.asarray([[0.1, 0.9], [0.1, 0.8]])
data_b = np.asarray([[0.4,0.05], [0.45, 0.1]])
plt.scatter(data_a[:, 0], data_a[:, 1], c='r', marker='x')
plt.scatter(data_b[:, 0], data_b[:, 1], c='g', marker='o')
plt.xlim([0, 0.5])
plt.ylim([0, 1])
plt.axes().set_aspect('equal')
plt.show()
return data_a, data_b
data_a, data_b = get_data()
n_hidden = 10
with tf.name_scope("input"):
x1 = tf.placeholder(tf.float32, [None, n_features], name="x1")
x2 = tf.placeholder(tf.float32, [None, n_features], name="x2")
dropout_keep_prob = tf.placeholder(tf.float32, name='dropout_prob')
with tf.name_scope("hidden_layer"):
with tf.name_scope("weights"):
w1 = tf.Variable(tf.random_normal([n_features, n_hidden]), name="w1")
tf.summary.histogram("w1", w1)
b1 = tf.Variable(tf.random_normal([n_hidden]), name="b1")
tf.summary.histogram("b1", b1)
with tf.name_scope("output"):
h1 = tf.nn.dropout(tf.nn.relu(tf.matmul(x1,w1) + b1), keep_prob=dropout_keep_prob)
tf.summary.histogram("h1", h1)
h2 = tf.nn.dropout(tf.nn.relu(tf.matmul(x2, w1) + b1), keep_prob=dropout_keep_prob)
tf.summary.histogram("h2", h2)
with tf.name_scope("output_layer"):
with tf.name_scope("weights"):
w2 = tf.Variable(tf.random_normal([n_hidden, 1]), name="w2")
tf.summary.histogram("w2", w2)
b2 = tf.Variable(tf.random_normal([1]), name="b2")
tf.summary.histogram("b2", b2)
with tf.name_scope("output"):
s1 = tf.matmul(h1, w2) + b2
s2 = tf.matmul(h2, w2) + b2
with tf.name_scope("loss"):
s12 = s1 - s2
s12_flat = tf.reshape(s12, [-1])
pred = tf.sigmoid(s12)
label_p = tf.sigmoid(-tf.ones_like(s12))
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=tf.zeros_like(s12_flat), logits=s12_flat + 1)
loss = tf.reduce_mean(cross_entropy)
tf.summary.scalar("loss", loss)
with tf.name_scope("train_op"):
train_op = tf.train.AdamOptimizer(0.001).minimize(loss)
sess = tf.InteractiveSession()
summary_op = tf.summary.merge_all()
writer = tf.summary.FileWriter("tb_files", sess.graph)
init = tf.global_variables_initializer()
sess.run(init)
for epoch in range(0, 10000):
loss_val, _ = sess.run([loss, train_op], feed_dict={x1:data_a, x2:data_b, dropout_keep_prob:0.5})
if epoch % 100 == 0 :
summary_result = sess.run(summary_op, feed_dict={x1:data_a, x2:data_b, dropout_keep_prob:1})
writer.add_summary(summary_result, epoch)
# print("Epoch {}: Loss {}".format(epoch, loss_val))
grid_size = 10
data_test = []
for y in np.linspace(0., 1., num=grid_size):
for x in np.linspace(0., 1., num=grid_size):
data_test.append([x, y])
def visualize_results(data_test):
plt.figure()
scores_test = sess.run(s1, feed_dict={x1:data_test, dropout_keep_prob:1})
scores_img = np.reshape(scores_test, [grid_size, grid_size])
plt.imshow(scores_img, origin='lower')
plt.colorbar()
visualize_results(data_test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's fabricate some data. We'll call get_data() to generate two datasets
Step2: Now, let's define our ranking model. It'll take in two items (x1 and x2), and return a score (s1 and s2) for each item.
Step3: When defining the model, let's organize it into separate scopes. That way, the TensorBoard visualization will look very clean.
Step4: The loss function will involve comparing s1 and s2.
Step5: Start the session and prepare peripheral ops.
Step6: Train the model with the training data.
Step7: Visualize the results on a grid by accumulating a list of points to test.
Step8: Run the model on all the test points and visualize the utility scores of each point by a color.
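The pairwise comparison in Step 4 can be sketched in plain NumPy: the model's two scores are subtracted, squashed through a sigmoid to get P(item 1 outranks item 2), and penalized with cross-entropy against a target of 1. This is a minimal illustration with hypothetical names, not the exact TensorFlow graph above.

```python
import numpy as np

def pairwise_rank_loss(s1, s2):
    """Cross-entropy on score differences, assuming item 1 should
    always outrank item 2 (illustrative helper, not from the notebook)."""
    s12 = s1 - s2
    # sigmoid of the difference = P(item 1 ranked above item 2)
    pred = 1.0 / (1.0 + np.exp(-s12))
    # the target probability is 1 for every pair, so the loss is -log(pred)
    return float(np.mean(-np.log(pred)))

# pairs where item 1 clearly wins give a small loss ...
low = pairwise_rank_loss(np.array([5.0, 4.0]), np.array([1.0, 0.0]))
# ... and pairs where item 2 wins give a large loss
high = pairwise_rank_loss(np.array([0.0, 1.0]), np.array([4.0, 5.0]))
```

Training then simply pushes the scores of the "better" dataset above the scores of the "worse" one.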
|
3,601
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import theano
theano.config.floatX = 'float64'
import matplotlib.pyplot as plt
import numpy as np
import pymc3 as pm
import pandas as pd
data = pd.read_csv('../data/radon.csv')
county_names = data.county.unique()
county_idx = data['county_code'].values
n_counties = len(data.county.unique())
import theano.tensor as tt
log_radon_t = tt.vector()
log_radon_t.tag.test_value = np.zeros(1)
floor_t = tt.vector()
floor_t.tag.test_value = np.zeros(1)
county_idx_t = tt.ivector()
county_idx_t.tag.test_value = np.zeros(1, dtype='int32')
minibatch_tensors = [log_radon_t, floor_t, county_idx_t]
with pm.Model() as hierarchical_model:
# Hyperpriors for group nodes
mu_a = pm.Normal('mu_alpha', mu=0., sd=100**2)
sigma_a = pm.Uniform('sigma_alpha', lower=0, upper=100)
mu_b = pm.Normal('mu_beta', mu=0., sd=100**2)
sigma_b = pm.Uniform('sigma_beta', lower=0, upper=100)
# Intercept for each county, distributed around group mean mu_a
# Above we just set mu and sd to a fixed value while here we
# plug in a common group distribution for all a and b (which are
# vectors of length n_counties).
a = pm.Normal('alpha', mu=mu_a, sd=sigma_a, shape=n_counties)
# Slope for each county, distributed around group mean mu_b
b = pm.Normal('beta', mu=mu_b, sd=sigma_b, shape=n_counties)
# Model error
eps = pm.Uniform('eps', lower=0, upper=100)
# Model prediction of radon level
# a[county_idx] translates to a[0, 0, 0, 1, 1, ...],
# we thus link multiple household measures of a county
# to its coefficients.
radon_est = a[county_idx_t] + b[county_idx_t] * floor_t
# Data likelihood
radon_like = pm.Normal('radon_like', mu=radon_est, sd=eps, observed=log_radon_t)
minibatch_RVs = [radon_like]
def minibatch_gen(data):
rng = np.random.RandomState(0)
while True:
ixs = rng.randint(len(data), size=100)
yield data.log_radon.values[ixs],\
data.floor.values[ixs],\
data.county_code.values.astype('int32')[ixs]
minibatches = minibatch_gen(data)
total_size = len(data)
means, sds, elbos = pm.variational.advi_minibatch(
model=hierarchical_model, n=40000, minibatch_tensors=minibatch_tensors,
minibatch_RVs=minibatch_RVs, minibatches=minibatches,
total_size=total_size, learning_rate=1e-2, epsilon=1.0
)
import matplotlib.pyplot as plt
import seaborn as sns
plt.plot(elbos)
plt.ylim(-5000, 0);
# Inference button (TM)!
with pm.Model():
# Hyperpriors for group nodes
mu_a = pm.Normal('mu_alpha', mu=0., sd=100**2)
sigma_a = pm.Uniform('sigma_alpha', lower=0, upper=100)
mu_b = pm.Normal('mu_beta', mu=0., sd=100**2)
sigma_b = pm.Uniform('sigma_beta', lower=0, upper=100)
# Intercept for each county, distributed around group mean mu_a
# Above we just set mu and sd to a fixed value while here we
# plug in a common group distribution for all a and b (which are
# vectors of length n_counties).
a = pm.Normal('alpha', mu=mu_a, sd=sigma_a, shape=n_counties)
# Slope for each county, distributed around group mean mu_b
b = pm.Normal('beta', mu=mu_b, sd=sigma_b, shape=n_counties)
# Model error
eps = pm.Uniform('eps', lower=0, upper=100)
# Model prediction of radon level
# a[county_idx] translates to a[0, 0, 0, 1, 1, ...],
# we thus link multiple household measures of a county
# to its coefficients.
radon_est = a[county_idx] + b[county_idx] * data.floor.values
# Data likelihood
radon_like = pm.Normal(
'radon_like', mu=radon_est, sd=eps, observed=data.log_radon.values)
#start = pm.find_MAP()
step = pm.NUTS(scaling=means)
hierarchical_trace = pm.sample(2000, step, start=means, progressbar=False)
from scipy import stats
import seaborn as sns
varnames = means.keys()
fig, axs = plt.subplots(nrows=len(varnames), figsize=(12, 18))
for var, ax in zip(varnames, axs):
mu_arr = means[var]
sigma_arr = sds[var]
ax.set_title(var)
for i, (mu, sigma) in enumerate(zip(mu_arr.flatten(), sigma_arr.flatten())):
sd3 = (-4*sigma + mu, 4*sigma + mu)
x = np.linspace(sd3[0], sd3[1], 300)
y = stats.norm(mu, sigma).pdf(x)
ax.plot(x, y)
if hierarchical_trace[var].ndim > 1:
t = hierarchical_trace[var][i]
else:
t = hierarchical_trace[var]
sns.distplot(t, kde=False, norm_hist=True, ax=ax)
fig.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here, 'log_radon_t' is the dependent variable, while 'floor_t' and 'county_idx_t' are the independent variables.
Step2: The random variable 'radon_like', associated with 'log_radon_t', should be passed to the ADVI function to mark it as the observations in the likelihood term.
Step3: On the other hand, 'minibatches' should include the three variables above.
Step4: Then, run ADVI with mini-batch.
Step5: Check the trace of ELBO and compare the result with MCMC.
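The mini-batch pattern used above boils down to an endless generator of random row indices; each draw selects the rows fed into one ADVI step. A stripped-down sketch (names are illustrative, and 919 stands in for the radon dataset's row count):

```python
import numpy as np

def minibatch_indices(n_rows, batch_size=100, seed=0):
    """Endless generator of random row-index batches, mirroring the
    minibatch_gen pattern above (illustrative, not the notebook's code)."""
    rng = np.random.RandomState(seed)
    while True:
        # sample batch_size row indices with replacement
        yield rng.randint(n_rows, size=batch_size)

gen = minibatch_indices(919, batch_size=100)
batch = next(gen)  # one batch of row indices
```

Each yielded index array would then be used to slice the observed columns (log_radon, floor, county_code) before feeding them to the tensors.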
|
3,602
|
<ASSISTANT_TASK:>
Python Code:
#the seed information
df_seeds = pd.read_csv('../input/WNCAATourneySeeds_SampleTourney2018.csv')
#tour information
df_tour = pd.read_csv('../input/WRegularSeasonCompactResults_PrelimData2018.csv')
df_seeds['seed_int'] = df_seeds['Seed'].apply( lambda x : int(x[1:3]) )
df_winseeds = df_seeds.loc[:, ['TeamID', 'Season', 'seed_int']].rename(columns={'TeamID':'WTeamID', 'seed_int':'WSeed'})
df_lossseeds = df_seeds.loc[:, ['TeamID', 'Season', 'seed_int']].rename(columns={'TeamID':'LTeamID', 'seed_int':'LSeed'})
df_dummy = pd.merge(left=df_tour, right=df_winseeds, how='left', on=['Season', 'WTeamID'])
df_concat = pd.merge(left=df_dummy, right=df_lossseeds, on=['Season', 'LTeamID'])
df_concat['DiffSeed'] = df_concat[['LSeed', 'WSeed']].apply(lambda x : 0 if x[0] == x[1] else 1, axis = 1)
#prepares sample submission
df_sample_sub = pd.read_csv('../input/WSampleSubmissionStage2.csv')
df_sample_sub['Season'] = df_sample_sub['ID'].apply(lambda x : int(x.split('_')[0]) )
df_sample_sub['TeamID1'] = df_sample_sub['ID'].apply(lambda x : int(x.split('_')[1]) )
df_sample_sub['TeamID2'] = df_sample_sub['ID'].apply(lambda x : int(x.split('_')[2]) )
winners = df_concat.rename( columns = { 'WTeamID' : 'TeamID1',
'LTeamID' : 'TeamID2',
'WScore' : 'Team1_Score',
'LScore' : 'Team2_Score'}).drop(['WSeed', 'LSeed', 'WLoc'], axis = 1)
winners['Result'] = 1.0
losers = df_concat.rename( columns = { 'WTeamID' : 'TeamID2',
'LTeamID' : 'TeamID1',
'WScore' : 'Team2_Score',
'LScore' : 'Team1_Score'}).drop(['WSeed', 'LSeed', 'WLoc'], axis = 1)
losers['Result'] = 0.0
train = pd.concat( [winners, losers], axis = 0).reset_index(drop = True)
train['Score_Ratio'] = train['Team1_Score'] / train['Team2_Score']
train['Score_Total'] = train['Team1_Score'] + train['Team2_Score']
train['Score_Pct'] = train['Team1_Score'] / train['Score_Total']
df_sample_sub['Season'].unique()
train_test_inner = pd.merge( train.loc[ train['Season'].isin([2018]), : ].reset_index(drop = True),
df_sample_sub.drop(['ID', 'Pred'], axis = 1),
on = ['Season', 'TeamID1', 'TeamID2'], how = 'inner' )
train_test_inner.head()
team1d_num_ot = train_test_inner.groupby(['Season', 'TeamID1'])['NumOT'].median().reset_index()\
.set_index('Season').rename(columns = {'NumOT' : 'NumOT1'})
team2d_num_ot = train_test_inner.groupby(['Season', 'TeamID2'])['NumOT'].median().reset_index()\
.set_index('Season').rename(columns = {'NumOT' : 'NumOT2'})
num_ot = team1d_num_ot.join(team2d_num_ot).reset_index()
#sum the two teams' median OT counts and round to the nearest integer
num_ot['NumOT'] = num_ot[['NumOT1', 'NumOT2']].apply(lambda x : round( x.sum() ), axis = 1 )
num_ot.head()
team1d_score_spread = train_test_inner.groupby(['Season', 'TeamID1'])[['Score_Ratio', 'Score_Pct']].median().reset_index()\
.set_index('Season').rename(columns = {'Score_Ratio' : 'Score_Ratio1', 'Score_Pct' : 'Score_Pct1'})
team2d_score_spread = train_test_inner.groupby(['Season', 'TeamID2'])[['Score_Ratio', 'Score_Pct']].median().reset_index()\
.set_index('Season').rename(columns = {'Score_Ratio' : 'Score_Ratio2', 'Score_Pct' : 'Score_Pct2'})
score_spread = team1d_score_spread.join(team2d_score_spread).reset_index()
#geometric mean of score ratio of team 1 and inverse of team 2
score_spread['Score_Ratio'] = score_spread[['Score_Ratio1', 'Score_Ratio2']].apply(lambda x : ( x[0] * ( x[1] ** -1.0) ), axis = 1 ) ** 0.5
#harmonic mean of score pct
score_spread['Score_Pct'] = score_spread[['Score_Pct1', 'Score_Pct2']].apply(lambda x : 0.5*( x[0] ** -1.0 ) + 0.5*( 1.0 - x[1] ) ** -1.0, axis = 1 ) ** -1.0
score_spread.head()
X_train = train_test_inner.loc[:, ['Season', 'NumOT', 'Score_Ratio', 'Score_Pct']]
train_labels = train_test_inner['Result']
train_test_outer = pd.merge( train.loc[ train['Season'].isin([2014, 2015, 2016, 2017]), : ].reset_index(drop = True),
df_sample_sub.drop(['ID', 'Pred'], axis = 1),
on = ['Season', 'TeamID1', 'TeamID2'], how = 'outer' )
train_test_outer = train_test_outer.loc[ train_test_outer['Result'].isnull(),
['TeamID1', 'TeamID2', 'Season']]
train_test_missing = pd.merge( pd.merge( score_spread.loc[:, ['TeamID1', 'TeamID2', 'Season', 'Score_Ratio', 'Score_Pct']],
train_test_outer, on = ['TeamID1', 'TeamID2', 'Season']),
num_ot.loc[:, ['TeamID1', 'TeamID2', 'Season', 'NumOT']],
on = ['TeamID1', 'TeamID2', 'Season'])
X_test = train_test_missing.loc[:, ['Season', 'NumOT', 'Score_Ratio', 'Score_Pct']]
n = X_train.shape[0]
train_test_merge = pd.concat( [X_train, X_test], axis = 0 ).reset_index(drop = True)
train_test_merge = pd.concat( [pd.get_dummies( train_test_merge['Season'].astype(object) ),
train_test_merge.drop('Season', axis = 1) ], axis = 1 )
train_test_merge = pd.concat( [pd.get_dummies( train_test_merge['NumOT'].astype(object) ),
train_test_merge.drop('NumOT', axis = 1) ], axis = 1 )
X_train = train_test_merge.loc[:(n - 1), :].reset_index(drop = True)
X_test = train_test_merge.loc[n:, :].reset_index(drop = True)
x_max = X_train.max()
x_min = X_train.min()
X_train = ( X_train - x_min ) / ( x_max - x_min + 1e-14)
X_test = ( X_test - x_min ) / ( x_max - x_min + 1e-14)
train_labels.value_counts()
X_train.head()
from sklearn.linear_model import LogisticRegressionCV
model = LogisticRegressionCV(cv=80,scoring="neg_log_loss",random_state=1
#,penalty="l1"
#,Cs= Cs_#list(np.arange(1e-7,1e-9,-0.5e-9)) # [0.5,0.1,0.01,0.001] #list(np.power(1, np.arange(-10, 10)))
#,max_iter=1000, tol=1e-11
#,solver="liblinear"
#,n_jobs=4
)
model.fit(X_train, train_labels)
#---
Cs = model.Cs_
list(np.power(10.0, np.arange(-10, 10)))
dir(model)
sco = model.scores_[1].mean(axis=0)
#---
import matplotlib.pyplot as plt
plt.plot(Cs
#np.log10(Cs)
,sco)
# plt.ylabel('some numbers')
plt.show()
sco.min()
Cs_= list(np.arange(1.1e-9 - 5e-11
,1.051e-9
,0.2e-13))
len(Cs_)
Cs_= list(np.arange(1e-11
,9.04e-11#1.0508e-9
,0.2e-12))
len(Cs_)
#Cs_= list(np.arange(5.6e-13 - ( (0.01e-13)*1)
# ,5.61e-13 - ( (0.01e-13)*1)#1.0508e-9
# ,0.2e-15))
#len(Cs_)
Cs_= list(np.arange(1e-11
,5.5e-11#1.0508e-9
,0.2e-12))
len(Cs_)
Cs_= list(np.arange(1e-14
,5.5e-11#1.0508e-9
,0.2e-12))
len(Cs_)#awsome
#Cs_= list(np.arange(1.5e-11
# ,2.53e-11#1.0508e-9
# ,0.2e-13)) #+[3.761e-11]
#len(Cs_)
#X_train.dtypes
Cs_= list(np.arange(1e-15
,0.51e-10 #1.0508e-9
,0.1e-12))
len(Cs_)#new again
Cs_= list(np.arange(9e-14
,10.1e-13 #1.0508e-9
,0.1e-14))
len(Cs_)#new again cont. lowerlevel
Cs_= list(np.linspace(1e-19
,0.610e-11 #1.0508e-9
,500))
len(Cs_)#new again cont. lowerlevel
Cs_= list(np.logspace(-12.5715
,-12.5714 #1.0508e-9
,500))
len(Cs_)#new again cont. lowerlevel
#LogisticRegressionCV(Cs=10, class_weight=None, cv=107, dual=False,
# fit_intercept=True, intercept_scaling=1.0, max_iter=100,
# multi_class='ovr', n_jobs=1, penalty='l2', random_state=2,
# refit=True, scoring='neg_log_loss', solver='lbfgs', tol=0.0001,
# verbose=0) #-0.7
from sklearn.linear_model import LogisticRegressionCV
model = LogisticRegressionCV(cv=80,scoring="neg_log_loss",random_state=1
#,penalty="l1"
,Cs= Cs_#list(np.arange(1e-7,1e-9,-0.5e-9)) # [0.5,0.1,0.01,0.001] #list(np.power(1, np.arange(-10, 10)))
,max_iter=1000, tol=1e-11
#,solver="liblinear"
,n_jobs=4)
model.fit(X_train, train_labels)
#---
Cs = model.Cs_
list(np.power(10.0, np.arange(-10, 10)))
dir(model)
sco = model.scores_[1].mean(axis=0)
#---
import matplotlib.pyplot as plt
plt.plot(#Cs
np.log10(Cs)
,sco)
# plt.ylabel('some numbers')
index_min = np.argmin(sco)
Cs_[index_min]
plt.axvline(x=np.log10(Cs_[index_min]))
plt.show()
index_min = np.argmin(sco)
Cs_[index_min] #3.761e-11
#list(np.power(10.0, np.arange(-10, 10)))
#list(np.arange(0.5,1e-4,-0.05))
print(sco.max())
#-0.6931471779248422
print(sco.min() < -0.693270048530996)
print(sco.min()+0.693270048530996)
sco.min()
import matplotlib.pyplot as plt
plt.plot(model.scores_[1])
# plt.ylabel('some numbers')
plt.show()
train_test_inner['Pred1'] = model.predict_proba(X_train)[:,1]
train_test_missing['Pred1'] = model.predict_proba(X_test)[:,1]
sub = pd.merge(df_sample_sub,
pd.concat( [train_test_missing.loc[:, ['Season', 'TeamID1', 'TeamID2', 'Pred1']],
train_test_inner.loc[:, ['Season', 'TeamID1', 'TeamID2', 'Pred1']] ],
axis = 0).reset_index(drop = True),
on = ['Season', 'TeamID1', 'TeamID2'], how = 'outer')
team1_probs = sub.groupby('TeamID1')['Pred1'].apply(lambda x : (x ** -1.0).mean() ** -1.0 ).fillna(0.5).to_dict()
team2_probs = sub.groupby('TeamID2')['Pred1'].apply(lambda x : (x ** -1.0).mean() ** -1.0 ).fillna(0.5).to_dict()
sub['Pred'] = sub[['TeamID1', 'TeamID2','Pred1']]\
.apply(lambda x : team1_probs.get(x[0]) * ( 1 - team2_probs.get(x[1]) ) if np.isnan(x[2]) else x[2],
axis = 1)
sub = sub.drop_duplicates(subset=["ID"], keep='first')
sub[['ID', 'Pred']].to_csv('sub.csv', index = False)
sub[['ID', 'Pred']].head(20)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we separate the winners from the losers and organize our dataset
Step2: Now we match the detailed results to the merge dataset above
Step3: Here we get our submission info
Step4: Training Data Creation
Step5: We will only consider years relevant to our test submission
Step6: Now let's look at TeamID2, i.e. just the second team's info.
Step7: From the inner join, we will create data per team id to estimate the parameters we are missing that are independent of the year. Essentially, we are trying to estimate the average behavior of the team across the year.
Step8: Here we look at the comparable statistics. For the TeamID2 column, we would consider the inverse of the ratio, and 1 minus the score attempt percentage.
Step9: Now let's create a model based solely on the inner group and predict those probabilities.
Step10: We scale our data for our classifier, and make sure our categorical variables are properly processed.
Step11: Here we store our probabilities
Step12: We merge our predictions
Step13: We get the 'average' probability of success for each team
Step14: Any missing value for the prediction will be imputed with the product of the probabilities calculated above. We assume these are independent events.
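The independence assumption in Step 14 reduces the imputation to a one-line product: team 1's average win probability times team 2's average loss probability. A minimal sketch with hypothetical inputs:

```python
def impute_pred(p_team1_wins, p_team2_wins):
    """Impute P(team 1 beats team 2) as the product of team 1's average
    win probability and team 2's average loss probability, treating the
    two as independent events (the simplifying assumption above)."""
    return p_team1_wins * (1.0 - p_team2_wins)

# hypothetical averages: team 1 strong, team 2 middling
p = impute_pred(0.8, 0.3)
```

Because both factors lie in [0, 1], the imputed value is always a valid probability, though independence is a rough approximation for head-to-head matchups.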
|
3,603
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy import optimize, special
from matplotlib import pyplot as plt
from astropy.io import fits
%matplotlib inline
# open the data file and load data into a list of points
infile = open("./samplevals_PA.txt", 'r')
v_arr = [ ]
for line in iter(infile):
line = line.split()
try:
float(line[0])
v_arr.append(float(line[0]))
except ValueError:
continue
infile.close()
# get a first look at the distribution to make guesses
plt.hist(v_arr)
# define the pdfs and likelihood functions
def Rice_dist(x, alpha, beta):
"""the pdf of the Rice distribution for a single value"""
return (1/alpha)*np.exp((x+beta)/(-alpha))*special.iv(0, 2*np.sqrt(x*beta)/alpha)
def Rice_dist_n(x, alpha, beta):
"""the pdf of the Rice distribution for an array"""
condlist = [ x>0 ]
choicelist = [ Rice_dist(x, alpha, beta) ]
return np.select(condlist, choicelist, default=0.0)
def Rice_dist_gen(x, alpha, beta):
"""pdf of Rice distribution that works for single values and array types"""
print('hi')
def gaussian_1param(x, mu):
"""gaussian pdf with variance equal to the mean"""
return np.exp(-(x-mu)**2/mu)/np.sqrt(2*np.pi*mu)
def neg_likelihood(params, value_array, function):
"""the opposite of the likelihood function for a set of independent values for a given
function"""
l = -1
for x in value_array:
l *= function(x, *params)
return l
# perform the optimization on both functions
guess = (2, 3)
opt_rice = optimize.fmin(neg_likelihood, guess, args=(v_arr, Rice_dist))
print(opt_rice)
guess = (np.mean(v_arr),)
opt_gauss = optimize.fmin(neg_likelihood, guess, args=(v_arr, gaussian_1param))
print(opt_gauss)
# plot the Rice distribution with optimal values against normed histogram
r = np.arange(0., 20., 0.1)
plt.plot(r, Rice_dist(r, *opt_rice), label='rice')
plt.plot(r, gaussian_1param(r, *opt_gauss), label='gauss')
plt.hist(v_arr, normed=True, label='data')
plt.legend(loc='center left', bbox_to_anchor=(1., 0.5))
plt.title("comparison of fits to data")
# define a mesh grid for parameter space plot
alpha_range = np.linspace(0.5, 2.5, 100)
beta_range = np.linspace(2.5, 6.5, 100)
alpha_arr, beta_arr = np.meshgrid(alpha_range, beta_range)
# positive likelihood values for Rice distribution!
Rice_arr = -neg_likelihood((alpha_arr, beta_arr), v_arr, Rice_dist_n)
# plot the posterior density function
ext = [alpha_range.min(), alpha_range.max(), beta_range.min(), beta_range.max()]
plt.imshow(Rice_arr, extent=ext, origin='lower')
plt.title("posterior density function for Rice distribution")
plt.xlabel('alpha')
plt.ylabel('beta')
# find the ratio of likelihood functions
ratio = neg_likelihood(opt_rice, v_arr, Rice_dist_n) / neg_likelihood(opt_gauss, v_arr, gaussian_1param)
print(ratio)
# read in the data and close files
model_fits = fits.open("./data/hw6prob2_model.fits")
psf_fits = fits.open("hw6prob2_psf.fits")
print(model_fits.info())
print(psf_fits.info())
model_data = model_fits[0].data
psf_data = psf_fits[0].data
model_fits.close()
psf_fits.close()
plt.imshow(psf_data)
cbar = plt.colorbar()
cbar.solids.set_edgecolors('face')
model_data_intgrl = np.sum(model_data, axis=0)
f = plt.figure()
plt.imshow(model_data_intgrl)
cbar = plt.colorbar()
cbar.solids.set_edgecolors('face')
# define FFT functions
def cool_turkey_fft(arr, N=0, s=1, **kwargs): # inverse=False
"""performs a 1-dimensional fast Fourier transform
on arr using the Cooley–Tukey algorithm.
return: transformed array, ndarray
keyword arguments: inverse=False
performs inverse FFT"""
if N == 0:
N = len(arr)
sign = 1 # sign that goes into exponential, + implies not doing inverse transform
# iter(kwargs)
for key, value in kwargs.items():
if key == 'inverse' and value:
sign = -1
s = int(s)
ARR = np.zeros(N, dtype=complex)
if N == 1:
ARR[0] = arr[0]
else:
N2 = int(N/2)
ARR[0:N2] = cool_turkey_fft(arr[0::2*s], N2, s, **kwargs)
ARR[N2:] = cool_turkey_fft(arr[1::2*s], N2, s, **kwargs)
for k in range(0, N2):
orig = ARR[k]
ARR[k] = orig + np.exp(-sign*2*np.pi*(1j)*k/N)*ARR[k+N2]
ARR[k+N2] = orig - np.exp(-sign*2*np.pi*(1j)*k/N)*ARR[k+N2]
return ARR
def ifft(arr, fft_method, *args, **kwargs): # =cool_turkey_fft
"""performs inverse of 1d fast Fourier transform"""
kwargs['inverse'] = True
ARR = fft_method(arr, *args, **kwargs)
return ARR / len(ARR)
def fft_2d(arr_2d, fft_1d, *args, **kwargs): # =cool_turkey_fft
"""performs a fast Fourier transform in 2 dimensions"""
# check type of array
# check dimensions
nx, ny = arr_2d.shape
N = nx
ARR_2d = np.zeros((N,N), dtype=np.complex64)
for i in range(0, N):
ARR_2d[i,:] = fft_1d(arr_2d[i,:], *args, **kwargs)
for j in range(0, N):
ARR_2d[:,j] = fft_1d(ARR_2d[:,j], *args, **kwargs)
return ARR_2d
def zero_pad_symm2d(arr, shape):
"""pads array with 0s, placing original values in the center symmetrically
returns ndarray of given shape"""
# check new shape big enough to include old shape
sh0 = arr.shape
ARR = np.zeros(shape)
ARR[int((shape[0]-sh0[0])/2):int((shape[0]+sh0[0])/2), int((shape[1]-sh0[1])/2):int((shape[1]+sh0[1])/2)] = arr
return ARR
# do the padding
size_full = model_data_intgrl.shape[0] + 2*psf_data.shape[0]
psf_data_padded = zero_pad_symm2d(psf_data, (size_full,size_full))
model_data_padded = zero_pad_symm2d(model_data_intgrl, (size_full,size_full))
# FFT the 2D data
psf_fft = fft_2d(psf_data_padded, cool_turkey_fft)
model_fft = fft_2d(model_data_padded, cool_turkey_fft)
# convolve model with PSF
convoluted_data_fft = psf_fft * model_fft
# inverse FFT to get back to real space
convoluted_data_space = fft_2d(convoluted_data_fft, cool_turkey_fft, inverse=True)
# shift back
convoluted_data_space = np.fft.fftshift(convoluted_data_space)
# plot the result, looks good!
f = plt.figure()
plt.imshow(np.real(convoluted_data_space[64:192, 64:192]))
plt.title("integrated model data convolved with PSF")
cbar = plt.colorbar()
cbar.solids.set_edgecolors('face')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h2> Problem 1
Step7: To estimate the values of $(\alpha,\beta)$, we maximize the posterior function $p(\alpha,\beta\mid{D})$ with respect to $\alpha$ and $\beta$. From Bayes' rule, and assuming the prior $p(\alpha,\beta)$ is uniform, this is equivalent to maximizing the likelihood function since
Step8: The optimal values for the Rice fitting are
Step9: <h4> posterior density function for Rice distribution </h4>
Step10: To compare the distribution models to see which is a better fit, we compute the ratio of the probabilities of the models
Step11: We can see the major factor after calculating is
Step16: Integrating the model along the 0th (slow) dimension. For a discrete array, this amounts to summing across each value in the 0th dimension for each coordinate in the other two dimensions. In most cases there would be a multiplicative factor of the bin width to represent $\delta x$, but here we are doing it in units of pixels with value $1$
Step17: <h3> Performing the convolution </h3>
Step18: The result looks good, we can see the small points become wider blurs, but the overall picture looks the same!
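The padded-FFT convolution walked through above can be verified on a tiny example using `numpy.fft` in place of the hand-written Cooley–Tukey routine: transform the image and the (centred) kernel, multiply, inverse-transform, and shift back. The 8x8 arrays below are hypothetical stand-ins for the model and PSF data.

```python
import numpy as np

def fft_convolve_2d(img, kernel):
    """Circularly convolve two equal-shape 2-D arrays by multiplying
    their FFTs; numpy.fft stands in for the cool_turkey_fft routine."""
    out = np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel))
    # fftshift moves the centred kernel's offset back to the middle
    return np.real(np.fft.fftshift(out))

img = np.zeros((8, 8)); img[4, 4] = 1.0        # a single point source
psf = np.zeros((8, 8)); psf[3:6, 3:6] = 1 / 9.  # small centred box blur
blurred = fft_convolve_2d(img, psf)
```

As expected, the point source spreads into the shape of the PSF while the total flux is conserved, which is exactly the behaviour seen in the plotted result.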
|
3,604
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'sandbox-1', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
3,605
|
<ASSISTANT_TASK:>
Python Code:
import random as rnd
import math
def approximate_pi(n):
k = 0
for _ in range(n):
x = 2 * rnd.random() - 1
y = 2 * rnd.random() - 1
r = x * x + y * y
if r <= 1:
k += 1
return 4 * k / n
def std_and_mean(L):
N = len(L)
mean = sum(L) / N
ss = 0
for x in L:
ss += (x - mean) ** 2
ss /= (N - 1)
std = math.sqrt(ss)
return mean, std
def confidence_interval(k, n):
L = []
for _ in range(k):
L.append(approximate_pi(n))
𝜇, 𝜎 = std_and_mean(L)
return 𝜇 - 3 * 𝜎, 𝜇, 𝜇 + 3 * 𝜎
%%time
n = 100
while n <= 10000000:
lower, pi, upper = confidence_interval(100, n)
print('%9d: %6f < 𝜋 < %6f, 𝜋 ≈ %6f, error: %6f' % (n, lower, upper, pi, abs(pi - math.pi)))
n *= 10
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The unit circle $U$ is defined as the set
Step2: Given a list $L = [x_1, \cdots, x_n]$, the function $\texttt{std_and_mean}(L)$ computes the pair $(\mu, \sigma)$, where $\sigma$ is the sample standard deviation of $L$,
Step3: The method $\texttt{confidence_interval}(k, n)$ runs $k$ approximations of $\pi$ using $n$ trials in each approximation run.
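The sampling loop in $\texttt{approximate_pi}$ can also be written as a vectorized NumPy sketch (illustrative only; the name `approximate_pi_vectorized` and the fixed seed are assumptions, not part of the notebook):

```python
import numpy as np

def approximate_pi_vectorized(n, seed=0):
    # Draw n points uniformly from the square [-1, 1] x [-1, 1].
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-1.0, 1.0, size=(n, 2))
    # A point lies in the unit circle when x^2 + y^2 <= 1.
    inside = (pts ** 2).sum(axis=1) <= 1.0
    # Area ratio circle/square is pi/4, so pi is about 4 * hits / n.
    return 4.0 * inside.mean()

print(approximate_pi_vectorized(1_000_000))  # close to 3.1416
```

With a fixed seed the estimate is reproducible, which also makes the confidence-interval experiment easier to debug.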
|
3,606
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides[:24*10].plot(x='dteday', y='cnt')
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1.0 / (1.0 + np.exp(-x)) #
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(self.weights_hidden_to_output, error)
# TODO: Backpropagated error terms - Replace these values with your calculations.
output_error_term = error # f'(h) == 1 for output unit
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:, None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:, None]
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
import sys
### Set the hyperparameters here ###
iterations = 800
learning_rate = 0.8
hidden_nodes = 16
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions[0]))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare the data
Step2: Checking out the data
Step3: Dummy variables
Step4: Scaling target variables
Step5: Splitting the data into training, testing, and validation sets
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Step8: Unit tests
Step9: Training the network
Step10: Check out your predictions
|
3,607
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import time
import datetime
%pylab inline
gtd=pd.read_excel("data/gtd_12to15_0616dist.xlsx")
headers = pd.read_excel("data/GDELT Metadata.xlsx").columns.values
gdelt = pd.read_csv("data/20150108.export.txt", delimiter="\t", names=headers, parse_dates=["Day"])
gtd = gtd.dropna(subset=['latitude', 'longitude', 'iday'])
gdelt = gdelt.dropna(subset=["ActionGeo_Lat", "ActionGeo_Long", "Day"])
gtd.iyear.value_counts()
gdelt.Year.value_counts()
set(gtd.iyear).intersection(set(gdelt.Year))
#limit query to 2014-2015 because both datasets have full data for these years
gtdf = gtd[(gtd.iyear==2015) | (gtd.iyear==2014)]
gdeltf=gdelt[(gdelt.Year==2015) | (gdelt.Year==2014)]#.sample(10000) #sampling a subset of the data for fast testing
print (gtdf.shape, gdeltf.shape)
gtdf.shape[0] * gdeltf.shape[0]
#python implementation of the sim function above with 2 matching rules for lat/lng and date
#arguments are pandas Series (rows) from each database
def sim(target, match):
sim = 0
latlng_sim = sim_latlng(target, match)
sim += (W_LATLNG * latlng_sim)
date_sim = sim_date(target, match)
sim += (W_DATE * date_sim)
return sim
#define lat/lng sim function
#expect 2 1x2 numpy arrays
def sim_latlng(target, match):
target_latlng = np.array(target[["latitude", "longitude"]].values)
match_latlng = np.array(match[["ActionGeo_Lat", "ActionGeo_Long"]].values)
sim = euclid_sim(target_latlng, match_latlng)
return sim
#try to match on date
#compares two timestamps expressed as seconds since the epoch
def sim_date(target, match):
target_date_parts = target[["iyear", "imonth", "iday"]].values
target_date = datetime.datetime(target_date_parts[0], target_date_parts[1], target_date_parts[2])
target_seconds = (target_date - EPOCH).total_seconds()
match_date = match["Day"]
match_seconds = (match_date - EPOCH).total_seconds()
sim = euclid_sim(target_seconds, match_seconds)
return sim
def euclid_sim(a,b):
dist = numpy.linalg.norm(a-b)
prob = 1 / (1 + dist)
return prob
#utiltiy method to print match result
def print_table(gtd, gdelt, score):
gtd_vals=[]
gdelt_vals=[]
#latlng
gtd_vals.append(gtd[["latitude", "longitude"]].values)
gdelt_vals.append(gdelt[["ActionGeo_Lat", "ActionGeo_Long"]].values)
#date
target_date_parts = gtd[["iyear", "imonth", "iday"]].values
gtd_vals.append(datetime.datetime(target_date_parts[0], target_date_parts[1], target_date_parts[2]))
gdelt_vals.append(gdelt["Day"]) # "Day" is already parsed to a datetime by read_csv
#ids
gtd_vals.append(gtd.eventid)
gdelt_vals.append(gdelt.GlobalEventID)
t_dict = {"0cols": ["lat/lng", "date", "id"], "1GTD": gtd_vals, "GDELT": gdelt_vals, "Score": score}
df = pd.DataFrame(t_dict)
return df
%%time
#weights should sum to 1
W_LATLNG=0.5
W_DATE=0.5
EPOCH = datetime.datetime.utcfromtimestamp(0)
# threshold (beta)
THRESHOLD = 0.4
#choose what tuples of GTD you want to match against
targets = gtdf#.sample(500)
matches = {}
for _, target in targets.iterrows():
scores = []
for _, row in gdeltf.iterrows():
score = sim(target, row)
scores.append(score)
i = np.nanargmax(scores)
high_score = scores[i]
#if above thershold, consider match
if high_score >= THRESHOLD:
match = gdeltf.iloc[i]
matches[target.eventid] = (high_score, match.GlobalEventID)
else:
matches[target.eventid] = (None, None)
sorted_matches = sorted(matches.items(), key=lambda x: x[1][0] or 0, reverse=True)
top200 = sorted_matches[0:200]
gtdids = [m[0] for m in top200]
scores = [m[1][0] for m in top200]
gdeltids = [m[1][1] for m in top200]
report = pd.DataFrame({"GTD": gtdids, "Score": scores, "GDELT": gdeltids})
report [["GTD", "Score", "GDELT"]].head(200)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data
Step2: Cleaning
Step3: Common Years
Step4: The years in GDLET are distributed strangly
Step5: Filter for 2015
Step6: Rule-Based Matching
Step7: Results
Step8: Report
|
3,608
|
<ASSISTANT_TASK:>
Python Code:
import torch
import numpy
import inspect # this should raise the "we'll do gross things with Python internals" flag
# Based on the original implementation from PyTorch,
# so portions are copyright the PyTorch contributors; in particular, Simon Wang worked on it a lot.
# Errors are probably my doing.
class SpectralNormWeight(torch.nn.Module):
def __init__(self, shape, dim=0, eps=1e-12, n_power_iterations=1):
super().__init__()
self.n_power_iterations = n_power_iterations
self.eps = eps
self.dim = dim
self.shape = shape
self.permuted_shape = (shape[dim],) + shape[:dim] + shape[dim+1:]
h = shape[dim]
w = numpy.prod(self.permuted_shape[1:])
self.weight_mat = torch.nn.Parameter(torch.randn(h, w))
self.register_buffer('u', torch.nn.functional.normalize(torch.randn(h), dim=0, eps=self.eps))
self.register_buffer('v', torch.nn.functional.normalize(torch.randn(w), dim=0, eps=self.eps))
def forward(self):
u = self.u
v = self.v
if self.training:
with torch.no_grad():
for _ in range(self.n_power_iterations):
# Spectral norm of weight equals to `u^T W v`, where `u` and `v`
# are the first left and right singular vectors.
# This power iteration produces approximations of `u` and `v`.
v = torch.nn.functional.normalize(torch.mv(self.weight_mat.t(), u), dim=0, eps=self.eps, out=v)
u = torch.nn.functional.normalize(torch.mv(self.weight_mat, v), dim=0, eps=self.eps, out=u)
# See above on why we need to clone
u = u.clone(memory_format=torch.contiguous_format)
v = v.clone(memory_format=torch.contiguous_format)
sigma = torch.dot(u, torch.mv(self.weight_mat, v))
weight = (self.weight_mat / sigma).view(self.permuted_shape)
if self.dim != 0:
weight = weight.transpose(0, self.dim)
return weight
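The power iteration inside `forward` can be sketched standalone in NumPy to see that `u^T W v` converges to the largest singular value (this is an illustrative check, not part of the original module):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 3))

# Power iteration on W^T W / W W^T via alternating matrix-vector products.
u = rng.standard_normal(5)
v = rng.standard_normal(3)
for _ in range(200):
    v = W.T @ u
    v /= np.linalg.norm(v) + 1e-12
    u = W @ v
    u /= np.linalg.norm(u) + 1e-12

sigma = u @ W @ v  # approximated spectral norm
true_sigma = np.linalg.svd(W, compute_uv=False)[0]
print(abs(sigma - true_sigma) < 1e-6)  # converges to the top singular value
```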
class ComputedParameter(torch.nn.Module):
def __init__(self, m):
super().__init__()
self.m = m
self.needs_update = True
self.cache = None
self.param_versions = None # should we also treat buffers?
def require_update(self, *args): # dummy args for use as hook
self.needs_update = True
def check_param_versions(self):
if self.param_versions is None:
self.require_update()
return
for p, v in zip(self.parameters(), self.param_versions):
if p._version != v:
self.require_update()
return
def tensor(self):
if self.needs_update:
self.cache = self.m()
self.cache.register_hook(self.require_update)
self.param_versions = [p._version for p in self.parameters()]
return self.cache
@classmethod
def __torch_function__(cls, func, types, args=(), kwargs=None):
if kwargs is None:
kwargs = {}
args = tuple(a.tensor() if isinstance(a, ComputedParameter) else a for a in args)
return func(*args, **kwargs)
def __hash__(self):
return super().__hash__()
def __eq__(self, other):
if isinstance(other, torch.nn.Module):
return super().__eq__(other)
return torch.eq(self, other)
# this is overly simplistic and should also take care of signatures, docstrings, class methods and properties
for name, member in inspect.getmembers(torch.Tensor):
if not hasattr(ComputedParameter, name):
if inspect.ismethoddescriptor(member):
def get_proxy(name):
def new_fn(self, *args, **kwargs):
return getattr(self.tensor(), name)(*args, **kwargs)
return new_fn
setattr(ComputedParameter, name, get_proxy(name))
def replace_setattr():
# make old_setattr local..
old_setattr = torch.nn.Module.__setattr__
def new_setattr(self, n, v):
oldval = getattr(self, n, 1)
if isinstance(v, ComputedParameter) and (oldval is None or isinstance(oldval, torch.nn.Parameter)):
delattr(self, n)
old_setattr(self, n, v)
torch.nn.Module.__setattr__ = new_setattr
replace_setattr()
l = torch.nn.Linear(3, 4)
l.weight = ComputedParameter(SpectralNormWeight(l.weight.shape))
l
target = torch.randn(3, 4)
target /= torch.svd(target).S[0]
opt = torch.optim.SGD(l.parameters(), 1e-1)
for i in range(1000):
inp = torch.randn(20, 3)
t = inp @ target
p = l(inp)
loss = torch.nn.functional.mse_loss(p, t)
opt.zero_grad()
loss.backward()
opt.step()
if i % 100 == 0:
print(loss.item())
l.weight - target.t()
l.bias
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Spectral normalization and the PyTorch implementation
Step2: But how to apply this to our weight?
Step3: Unsurprisingly, as we don't subclass Tensor, we don't have all the methods. Typing them up would be a lot of work, but happily Python lets us patch them in programmatically.
Step4: We also want to be able to replace parameters with our neat new ComputedParameters, so we monkey-patch Module's __setattr__ routine.
Step5: Done!
Step6: Our computed parameters show up in the module structure
Step7: Let's try this on an admittedly silly and trivial example, to fit a spectrally normalized target
Step8: It works
|
3,609
|
<ASSISTANT_TASK:>
Python Code:
X = np.array([[-1.0, -1.0], [-1.2, -1.4], [1, -0.5], [-3.4, -2.2], [1.1, 1.2], [-2.1, -0.2]])
y = np.array([1, 1, 1, 2, 2, 2])
x_new = [0, 0]
plt.scatter(X[y==1,0], X[y==1,1], s=100, c='r')
plt.scatter(X[y==2,0], X[y==2,1], s=100, c='b')
plt.scatter(x_new[0], x_new[1], s=100, c='g')
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier
clf1 = LogisticRegression(random_state=1)
clf2 = SVC(random_state=1, probability=True)
clf3 = GaussianNB()
eclf = VotingClassifier(estimators=[('lr', clf1), ('ksvc', clf2), ('gnb', clf3)], voting='soft', weights=[2, 1, 1])
probas = [c.fit(X, y).predict_proba([x_new]) for c in (clf1, clf2, clf3, eclf)]
class1_1 = [pr[0, 0] for pr in probas]
class2_1 = [pr[0, 1] for pr in probas]
ind = np.arange(4)
width = 0.35 # bar width
p1 = plt.bar(ind, np.hstack(([class1_1[:-1], [0]])), width, align="center", color='green')
p2 = plt.bar(ind + width, np.hstack(([class2_1[:-1], [0]])), width, align="center", color='lightgreen')
p3 = plt.bar(ind, [0, 0, 0, class1_1[-1]], width, align="center", color='blue')
p4 = plt.bar(ind + width, [0, 0, 0, class2_1[-1]], width, align="center", color='steelblue')
plt.xticks(ind + 0.5 * width, ['LogisticRegression\nweight 2',
'Kernel SVC\nweight 1',
'GaussianNB\nweight 1',
'VotingClassifier'])
plt.ylim([0, 1.1])
plt.title('Class probabilities for sample 1 by different classifiers')
plt.legend([p1[0], p2[0]], ['class 1', 'class 2'], loc='upper left')
plt.show()
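The weighted soft vote plotted above can be reproduced by hand: the ensemble probability is simply the weighted average of the members' predicted class probabilities. A small sketch with made-up per-classifier probabilities (the numbers are illustrative, only the weights `[2, 1, 1]` match the `VotingClassifier` above):

```python
import numpy as np

# Hypothetical class probabilities from three classifiers for one sample.
probas = np.array([[0.9, 0.1],   # LogisticRegression (weight 2)
                   [0.6, 0.4],   # kernel SVC         (weight 1)
                   [0.7, 0.3]])  # GaussianNB         (weight 1)
weights = np.array([2.0, 1.0, 1.0])

# Soft voting: weighted mean over classifiers, then argmax over classes.
soft_vote = (weights[:, None] * probas).sum(axis=0) / weights.sum()
print(soft_vote)           # [0.775 0.225]
print(soft_vote.argmax())  # predicted class index -> 0
```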
from itertools import product
x_min, x_max = -4, 2
y_min, y_max = -3, 2
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.025), np.arange(y_min, y_max, 0.025))
f, axarr = plt.subplots(2, 2)
for idx, clf, tt in zip(product([0, 1], [0, 1]),
[clf1, clf2, clf3, eclf],
['LogisticRegression', 'Kernel SVC', 'GaussianNB', 'VotingClassifier']):
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
axarr[idx[0], idx[1]].contourf(xx, yy, Z, alpha=0.2, cmap=mpl.cm.jet)
axarr[idx[0], idx[1]].scatter(X[:, 0], X[:, 1], c=y, alpha=0.5, s=50, cmap=mpl.cm.jet)
axarr[idx[0], idx[1]].set_title(tt)
plt.tight_layout()
plt.show()
from itertools import product
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier
iris = load_iris()
X, y = iris.data[:, [0, 2]], iris.target
model1 = DecisionTreeClassifier(max_depth=4).fit(X, y)
model2 = LogisticRegression().fit(X, y)
model3 = SVC(probability=True).fit(X, y)
model4 = VotingClassifier(estimators=[('dt', model1), ('lr', model2), ('svc', model3)],
voting='soft', weights=[1, 2, 3]).fit(X, y)
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.025), np.arange(y_min, y_max, 0.025))
f, axarr = plt.subplots(2, 2)
for idx, clf, tt in zip(product([0, 1], [0, 1]),
[model1, model2, model3, model4],
['Decision Tree', 'Logistic Regression', 'Kernel SVM', 'Soft Voting']):
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
axarr[idx[0], idx[1]].contourf(xx, yy, Z, alpha=0.2, cmap=mpl.cm.jet)
axarr[idx[0], idx[1]].scatter(X[:, 0], X[:, 1], c=y, alpha=1, s=50, cmap=mpl.cm.jet)
axarr[idx[0], idx[1]].set_title(tt)
plt.tight_layout()
plt.show()
def total_error(p, N):
te = 0.0
for k in range(int(np.ceil(N/2)), N + 1):
        te += sp.special.comb(N, k) * p**k * (1-p)**(N-k)  # scipy.misc.comb was removed; use scipy.special.comb
return te
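The same majority-vote probability can be checked standalone without SciPy, using `math.comb` (a reimplementation for illustration, not the plotting code's function): with individual accuracy p = 0.6 and N = 10 voters, the ensemble is right noticeably more often than any single model.

```python
import math

def majority_correct(p, N):
    # Probability that at least ceil(N/2) of N independent models are right.
    return sum(math.comb(N, k) * p**k * (1 - p)**(N - k)
               for k in range(math.ceil(N / 2), N + 1))

print(round(majority_correct(0.6, 10), 4))  # 0.8338, well above 0.6
```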
x = np.linspace(0, 1, 100)
plt.plot(x, x, 'g:', lw=3, label="individual model")
plt.plot(x, total_error(x, 10), 'b-', label="voting model (N=10)")
plt.plot(x, total_error(x, 100), 'r-', label="voting model (N=100)")
plt.xlabel("performance of individual model")
plt.ylabel("performance of voting model")
plt.legend(loc=0)
plt.show()
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier
iris = load_iris()
X, y = iris.data[:, [0, 2]], iris.target
model1 = DecisionTreeClassifier().fit(X, y)
model2 = BaggingClassifier(DecisionTreeClassifier(), bootstrap_features=True, random_state=0).fit(X, y)
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1), np.arange(y_min, y_max, 0.1))
plt.figure(figsize=(8,12))
plt.subplot(211)
Z1 = model1.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.contourf(xx, yy, Z1, alpha=0.6, cmap=mpl.cm.jet)
plt.scatter(X[:, 0], X[:, 1], c=y, alpha=1, s=50, cmap=mpl.cm.jet)
plt.subplot(212)
Z2 = model2.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.contourf(xx, yy, Z2, alpha=0.6, cmap=mpl.cm.jet)
plt.scatter(X[:, 0], X[:, 1], c=y, alpha=1, s=50, cmap=mpl.cm.jet)
plt.tight_layout()
plt.show()
from sklearn import clone
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.tree import DecisionTreeClassifier
iris = load_iris()
n_classes = 3
n_estimators = 30
plot_colors = "ryb"
cmap = plt.cm.RdYlBu
plot_step = 0.02
RANDOM_SEED = 13
models = [DecisionTreeClassifier(max_depth=4),
RandomForestClassifier(max_depth=4, n_estimators=n_estimators),
ExtraTreesClassifier(max_depth=4, n_estimators=n_estimators)]
plot_idx = 1
plt.figure(figsize=(12, 12))
for pair in ([0, 1], [0, 2], [2, 3]):
for model in models:
X = iris.data[:, pair]
y = iris.target
idx = np.arange(X.shape[0])
np.random.seed(RANDOM_SEED)
np.random.shuffle(idx)
X = X[idx]
y = y[idx]
mean = X.mean(axis=0)
std = X.std(axis=0)
X = (X - mean) / std
clf = clone(model)
clf = model.fit(X, y)
plt.subplot(3, 3, plot_idx)
model_title = str(type(model)).split(".")[-1][:-2][:-len("Classifier")]
if plot_idx <= len(models):
plt.title(model_title)
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
np.arange(y_min, y_max, plot_step))
if isinstance(model, DecisionTreeClassifier):
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=cmap)
else:
estimator_alpha = 1.0 / len(model.estimators_)
for tree in model.estimators_:
Z = tree.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha=estimator_alpha, cmap=cmap)
for i, c in zip(range(n_classes), plot_colors):
idx = np.where(y == i)
plt.scatter(X[idx, 0], X[idx, 1], c=c, label=iris.target_names[i], cmap=cmap)
plot_idx += 1
plt.tight_layout()
plt.show()
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
X, y = make_classification(n_samples=1000, n_features=10, n_informative=3, n_redundant=0, n_repeated=0,
n_classes=2, random_state=0, shuffle=False)
forest = ExtraTreesClassifier(n_estimators=250, random_state=0)
forest.fit(X, y)
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_], axis=0)
indices = np.argsort(importances)[::-1]
print("Feature ranking:")
for f in range(X.shape[1]):
print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))
plt.title("Feature importances")
plt.bar(range(X.shape[1]), importances[indices], color="r", yerr=std[indices], align="center")
plt.xticks(range(X.shape[1]), indices)
plt.xlim([-1, X.shape[1]])
plt.show()
from sklearn.datasets import fetch_olivetti_faces
from sklearn.ensemble import ExtraTreesClassifier
data = fetch_olivetti_faces()
X = data.images.reshape((len(data.images), -1))
y = data.target
mask = y < 5 # Limit to 5 classes
X = X[mask]
y = y[mask]
forest = ExtraTreesClassifier(n_estimators=1000, max_features=128, random_state=0)
forest.fit(X, y)
importances = forest.feature_importances_
importances = importances.reshape(data.images[0].shape)
plt.figure(figsize=(8, 8))
plt.imshow(importances, cmap=plt.cm.bone_r)
plt.grid(False)
plt.title("Pixel importances with forests of trees")
plt.show()
from sklearn.datasets import fetch_olivetti_faces
from sklearn.utils.validation import check_random_state
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.linear_model import LinearRegression
data = fetch_olivetti_faces()
targets = data.target
data = data.images.reshape((len(data.images), -1))
train = data[targets < 30]
test = data[targets >= 30]
n_faces = 5
rng = check_random_state(4)
face_ids = rng.randint(test.shape[0], size=(n_faces, ))
test = test[face_ids, :]
n_pixels = data.shape[1]
X_train = train[:, :int(np.ceil(0.5 * n_pixels))] # Upper half of the faces
y_train = train[:, int(np.floor(0.5 * n_pixels)):] # Lower half of the faces
X_test = test[:, :int(np.ceil(0.5 * n_pixels))]
y_test = test[:, int(np.floor(0.5 * n_pixels)):]
ESTIMATORS = {
"Linear regression": LinearRegression(),
"Extra trees": ExtraTreesRegressor(n_estimators=10, max_features=32, random_state=0),
}
y_test_predict = dict()
for name, estimator in ESTIMATORS.items():
estimator.fit(X_train, y_train)
y_test_predict[name] = estimator.predict(X_test)
image_shape = (64, 64)
n_cols = 1 + len(ESTIMATORS)
plt.figure(figsize=(3*n_cols, 3*n_faces))
plt.suptitle("Face completion with multi-output estimators", size=16)
for i in range(n_faces):
true_face = np.hstack((X_test[i], y_test[i]))
if i:
sub = plt.subplot(n_faces, n_cols, i * n_cols + 1)
else:
sub = plt.subplot(n_faces, n_cols, i * n_cols + 1, title="true faces")
sub.axis("off")
sub.imshow(true_face.reshape(image_shape), cmap=plt.cm.gray, interpolation="nearest")
for j, est in enumerate(ESTIMATORS):
completed_face = np.hstack((X_test[i], y_test_predict[est][i]))
if i:
sub = plt.subplot(n_faces, n_cols, i * n_cols + 2 + j)
else:
sub = plt.subplot(n_faces, n_cols, i * n_cols + 2 + j, title=est)
sub.axis("off")
sub.imshow(completed_face.reshape(image_shape), cmap=plt.cm.gray, interpolation="nearest");
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_gaussian_quantiles
# Construct dataset
X1, y1 = make_gaussian_quantiles(cov=2.,
n_samples=200, n_features=2,
n_classes=2, random_state=1)
X2, y2 = make_gaussian_quantiles(mean=(3, 3), cov=1.5,
n_samples=300, n_features=2,
n_classes=2, random_state=1)
X = np.concatenate((X1, X2))
y = np.concatenate((y1, - y2 + 1))
# Create and fit an AdaBoosted decision tree
bdt = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
algorithm="SAMME",
n_estimators=200)
bdt.fit(X, y)
plot_colors = "br"
plot_step = 0.02
class_names = "AB"
plt.figure(figsize=(12,6))
plt.subplot(121)
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
np.arange(y_min, y_max, plot_step))
Z = bdt.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.Paired)
plt.axis("tight")
for i, n, c in zip(range(2), class_names, plot_colors):
idx = np.where(y == i)
plt.scatter(X[idx, 0], X[idx, 1],
c=c, cmap=plt.cm.Paired,
label="Class %s" % n)
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.legend(loc='upper right')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Decision Boundary')
twoclass_output = bdt.decision_function(X)
plot_range = (twoclass_output.min(), twoclass_output.max())
plt.subplot(122)
for i, n, c in zip(range(2), class_names, plot_colors):
plt.hist(twoclass_output[y == i],
bins=10,
range=plot_range,
facecolor=c,
label='Class %s' % n,
alpha=.5)
x1, x2, y1, y2 = plt.axis()
plt.axis((x1, x2, y1, y2 * 1.2))
plt.legend(loc='upper right')
plt.ylabel('Samples')
plt.xlabel('Score')
plt.title('Decision Scores')
plt.tight_layout()
plt.subplots_adjust(wspace=0.35)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The following experiment also confirms why a majority-voting model performs better than the individual models.
Step2: Bagging
Step3: Random Forest
Step4: One advantage of random forests is that the importance of each feature (feature importance) can be computed.
Step5: Example
Step6: AdaBoost
|
3,610
|
<ASSISTANT_TASK:>
Python Code:
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%pylab inline
import matplotlib.pylab as plt
import numpy as np
from distutils.version import StrictVersion
import sklearn
print(sklearn.__version__)
assert StrictVersion(sklearn.__version__ ) >= StrictVersion('0.18.1')
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)
assert StrictVersion(tf.__version__) >= StrictVersion('1.1.0')
import keras
print(keras.__version__)
assert StrictVersion(keras.__version__) >= StrictVersion('2.0.0')
# !curl -O https://raw.githubusercontent.com/DJCordhose/speed-limit-signs/master/data/speed-limit-signs.zip
!rm -r speed-limit-signs
# https://docs.python.org/3/library/zipfile.html
from zipfile import ZipFile
zip = ZipFile(r'speed-limit-signs.zip')
zip.extractall('.')
!ls -lh
from keras.preprocessing.image import ImageDataGenerator
# ImageDataGenerator?
train_dir = 'speed-limit-signs'
generated_train_dir = 'generated'
datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=15,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.4,
horizontal_flip=False,
vertical_flip=False,
fill_mode='nearest')
# train_dir = 'speed-limit-signs/0'
train_dir = 'speed-limit-signs/5'
number_of_transformations = 10
!rm -r generated
!mkdir generated
import os
from keras.preprocessing import image
import matplotlib
import matplotlib.pyplot as plt
datagen.flow_from_directory(
train_dir,
target_size=(64, 64),
batch_size=32,
save_to_dir=generated_train_dir
)
fnames = [os.path.join(train_dir, fname) for fname in os.listdir(train_dir)]
img_path = fnames[0] # We pick one image to "augment"
img = image.load_img(img_path, target_size=(64, 64)) # Read the image and resize it
x = image.img_to_array(img) # Convert it to a Numpy array with shape (64, 64, 3)
x = x.reshape((1,) + x.shape) # Reshape it to (1, 64, 64, 3)
# plt.figure(figsize=(15, 15))
# The .flow() command below generates batches of randomly transformed images.
# It will loop indefinitely, so we need to `break` the loop at some point!
i = 1
for batch in datagen.flow(x):
plt.figure(i)
# pillow image: http://pillow.readthedocs.io/en/3.4.x/reference/Image.html
img = image.array_to_img(batch[0])
fname = 'generated/augmented_{index}.png'.format(index=i)
img.save(fname)
plt.imshow(img)
# plt.subplot(8, 8, i) # A grid of 8 rows x 8 columns
plt.axis('off')
if i % number_of_transformations == 0:
break
i += 1
plt.show()
!ls -lh generated/
def traverse(root_dir,
category_callback=lambda category_dir, category: print(category_dir, category),
image_callback=lambda path, directory, basename: print(path, directory, basename)):
directories = [d for d in os.listdir(root_dir) if os.path.isdir(os.path.join(root_dir, d))]
for d in directories:
        label_dir = os.path.join(root_dir, d)
category_callback(label_dir, d)
file_names = [f for f in os.listdir(label_dir) if f.endswith(".ppm")]
for f in file_names:
path = os.path.join(label_dir, f)
basename = os.path.splitext(f)[0]
image_callback(path, d, basename)
# this lets us walk through all the folders containing signs
ROOT_PATH = "./"
data_dir = os.path.join(ROOT_PATH, "speed-limit-signs")
# traverse(data_dir)
# or transform all the images in the folders
# careful: on Azure Notebooks this can take forever
def augment_image(input_path, output_dir, output_name, number_of_transformations=10, plot=False):
img = image.load_img(input_path, target_size=(64, 64))
x = image.img_to_array(img)
x = x.reshape((1,) + x.shape) # Reshape it to (1, 64, 64, 3)
i = 1
for batch in datagen.flow(x):
if plot:
plt.figure(i)
# pillow image: http://pillow.readthedocs.io/en/3.4.x/reference/Image.html
img = image.array_to_img(batch[0])
if plot:
plt.imshow(img)
# plt.subplot(8, 8, i) # A grid of 8 rows x 8 columns
plt.axis('off')
else:
output_path = '{dir}/{name}_{index}.png'.format(index=i, dir=output_dir, name=output_name)
img.save(output_path)
if i % number_of_transformations == 0:
break
i += 1
if plot:
plt.show()
generated_train_dir = 'augmented-signs'
!rm -r augmented-signs
!mkdir augmented-signs
traverse(data_dir,
category_callback=lambda category_dir, category: os.makedirs(os.path.join(generated_train_dir, category)) if not os.path.exists(os.path.join(generated_train_dir, category)) else None,
image_callback=lambda path, category, basename: augment_image(path, os.path.join(generated_train_dir, category), basename)
)
!ls -lh augmented-signs
!ls -lh augmented-signs/0
!rm augmented-signs.zip
!zip -r augmented-signs.zip augmented-signs >/dev/null
!ls -lh
!curl --upload-file augmented-signs.zip https://transfer.sh/augmented-signs.zip
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the original dataset
Step2: Exploring Keras' ImageDataGenerator
Step3: Hands-On
|
3,611
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import pymc3 as pm
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
from IPython.display import Image
from matplotlib import gridspec
%matplotlib inline
plt.style.use('seaborn-white')
color = '#87ceeb'
%load_ext watermark
%watermark -p pandas,numpy,pymc3,matplotlib,seaborn
df = pd.read_csv('data/TherapeuticTouchData.csv', dtype={'s':'category'})
df.info()
df.head()
df_proportions = df.groupby('s')['y'].apply(lambda x: x.sum()/len(x))
ax = sns.distplot(df_proportions, bins=8, kde=False, color='gray')
ax.set(xlabel='Proportion Correct', ylabel='# Practitioners')
sns.despine(ax=ax);
Image('images/fig9_7.png', width=200)
practitioner_idx = df.s.cat.codes.values
practitioner_codes = df.s.cat.categories
n_practitioners = practitioner_codes.size
with pm.Model() as hierarchical_model:
omega = pm.Beta('omega', 1., 1.)
kappa_minus2 = pm.Gamma('kappa_minus2', 0.01, 0.01)
kappa = pm.Deterministic('kappa', kappa_minus2 + 2)
theta = pm.Beta('theta', alpha=omega*(kappa-2)+1, beta=(1-omega)*(kappa-2)+1, shape=n_practitioners)
y = pm.Bernoulli('y', theta[practitioner_idx], observed=df.y)
pm.model_to_graphviz(hierarchical_model)
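The `omega`/`kappa` parameterization of the Beta prior used for `theta` above can be checked numerically: with mode `omega` and concentration `kappa`, `alpha = omega*(kappa-2)+1` and `beta = (1-omega)*(kappa-2)+1`, and the Beta mode `(alpha-1)/(alpha+beta-2)` recovers `omega`. A quick standalone check (the numbers are arbitrary):

```python
# Sanity check of the mode/concentration reparameterization of the Beta.
omega, kappa = 0.25, 20.0
alpha = omega * (kappa - 2) + 1
beta = (1 - omega) * (kappa - 2) + 1
mode = (alpha - 1) / (alpha + beta - 2)
print(abs(mode - omega) < 1e-12)  # True: the mode equals omega
```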
with hierarchical_model:
trace = pm.sample(5000, cores=4, nuts_kwargs={'target_accept': 0.95})
pm.traceplot(trace, ['omega','kappa', 'theta']);
pm.summary(trace)
# Note that theta is indexed starting with 0 and not 1, as is the case in Kruschke (2015).
plt.figure(figsize=(10,12))
# Define gridspec
gs = gridspec.GridSpec(4, 6)
ax1 = plt.subplot(gs[0,:3])
ax2 = plt.subplot(gs[0,3:])
ax3 = plt.subplot(gs[1,:2])
ax4 = plt.subplot(gs[1,2:4])
ax5 = plt.subplot(gs[1,4:6])
ax6 = plt.subplot(gs[2,:2])
ax7 = plt.subplot(gs[2,2:4])
ax8 = plt.subplot(gs[2,4:6])
ax9 = plt.subplot(gs[3,:2])
ax10 = plt.subplot(gs[3,2:4])
ax11 = plt.subplot(gs[3,4:6])
# thetas and theta pairs to plot
thetas = (0, 13, 27)
theta_pairs = ((0,13),(0,27),(13,27))
font_d = {'size':14}
# kappa & omega posterior plots
for var, ax in zip(['kappa', 'omega'], [ax1, ax2]):
pm.plot_posterior(trace[var], point_estimate='mode', ax=ax, color=color, round_to=2)
ax.set_xlabel('$\{}$'.format(var), fontdict={'size':20, 'weight':'bold'})
ax1.set(xlim=(0,500))
# theta posterior plots
for var, ax in zip(thetas,[ax3, ax7, ax11]):
pm.plot_posterior(trace['theta'][:,var], point_estimate='mode', ax=ax, color=color)
ax.set_xlabel('theta[{}]'.format(var), fontdict=font_d)
# theta scatter plots
for var, ax in zip(theta_pairs,[ax6, ax9, ax10]):
ax.scatter(trace['theta'][::10,var[0]], trace['theta'][::10,var[1]], alpha=0.75, color=color, facecolor='none')
ax.plot([0, 1], [0, 1], ':k', transform=ax.transAxes, alpha=0.5)
ax.set_xlabel('theta[{}]'.format(var[0]), fontdict=font_d)
ax.set_ylabel('theta[{}]'.format(var[1]), fontdict=font_d)
ax.set(xlim=(0,1), ylim=(0,1), aspect='equal')
# theta posterior differences plots
for var, ax in zip(theta_pairs,[ax4, ax5, ax8]):
pm.plot_posterior(trace['theta'][:,var[0]]-trace['theta'][:,var[1]], point_estimate='mode', ax=ax, color=color)
ax.set_xlabel('theta[{}] - theta[{}]'.format(*var), fontdict=font_d)
plt.tight_layout()
with pm.Model() as unpooled_model:
theta = pm.Beta('theta', 1, 1, shape=n_practitioners)
y = pm.Bernoulli('y', theta[practitioner_idx], observed=df.y)
pm.model_to_graphviz(unpooled_model)
with unpooled_model:
unpooled_trace = pm.sample(5000, cores=4)
df_shrinkage = (pd.concat([pm.summary(unpooled_trace).iloc[:,0],
pm.summary(trace).iloc[3:,0]],
axis=1)
.reset_index())
df_shrinkage.columns = ['theta', 'unpooled', 'hierarchical']
df_shrinkage = pd.melt(df_shrinkage, 'theta', ['unpooled', 'hierarchical'], var_name='Model')
df_shrinkage.head()
plt.figure(figsize=(10,9))
plt.scatter(1, pm.summary(trace).iloc[0,0], s=100, c='r', marker='x', zorder=999, label='Group mean')
sns.pointplot(x='Model', y='value', hue='theta', data=df_shrinkage);
df2 = pd.read_csv('data/BattingAverage.csv', usecols=[0,1,2,3], dtype={'PriPos':'category'})
df2.info()
df2['BatAv'] = df2.Hits.divide(df2.AtBats)
df2.head(10)
# Batting average by primary field positions calculated from the data
df2.groupby('PriPos')[['Hits','AtBats']].sum().pipe(lambda x: x.Hits/x.AtBats)
Image('images/fig9_13.png', width=300)
pripos_idx = df2.PriPos.cat.codes.values
pripos_codes = df2.PriPos.cat.categories
n_pripos = pripos_codes.size
# df2 contains one entry per player
n_players = df2.index.size
with pm.Model() as hierarchical_model2:
# Hyper parameters
omega = pm.Beta('omega', 1, 1)
kappa_minus2 = pm.Gamma('kappa_minus2', 0.01, 0.01)
kappa = pm.Deterministic('kappa', kappa_minus2 + 2)
# Parameters for categories (Primary field positions)
omega_c = pm.Beta('omega_c',
omega*(kappa-2)+1, (1-omega)*(kappa-2)+1,
shape = n_pripos)
kappa_c_minus2 = pm.Gamma('kappa_c_minus2',
0.01, 0.01,
shape = n_pripos)
kappa_c = pm.Deterministic('kappa_c', kappa_c_minus2 + 2)
# Parameter for individual players
theta = pm.Beta('theta',
omega_c[pripos_idx]*(kappa_c[pripos_idx]-2)+1,
(1-omega_c[pripos_idx])*(kappa_c[pripos_idx]-2)+1,
shape = n_players)
y2 = pm.Binomial('y2', n=df2.AtBats.values, p=theta, observed=df2.Hits)
pm.model_to_graphviz(hierarchical_model2)
with hierarchical_model2:
trace2 = pm.sample(3000, cores=4)
pm.traceplot(trace2, ['omega', 'kappa', 'omega_c', 'kappa_c']);
pm.plot_posterior(trace2['omega'], point_estimate='mode', color=color)
plt.title('Overall', fontdict={'fontsize':16, 'fontweight':'bold'})
plt.xlabel('omega', fontdict={'fontsize':14});
fig, axes = plt.subplots(3,3, figsize=(14,8))
for i, ax in enumerate(axes.T.flatten()):
pm.plot_posterior(trace2['omega_c'][:,i], ax=ax, point_estimate='mode', color=color)
ax.set_title(pripos_codes[i], fontdict={'fontsize':16, 'fontweight':'bold'})
ax.set_xlabel('omega_c__{}'.format(i), fontdict={'fontsize':14})
ax.set_xlim(0.10,0.30)
plt.tight_layout(h_pad=3)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 9.2.4 - Example
Step2: Figure 9.9
Step3: Model (Kruschke, 2015)
Step4: Figure 9.10 - Marginal posterior distributions
Step5: Shrinkage
Step6: Here we concatenate the trace results (thetas) from both models into a dataframe. Next we shape the data into a format that we can use with Seaborn's pointplot.
Step7: The below plot shows that the theta estimates on practitioner level are pulled towards the group mean of the hierarchical model.
Step8: 9.5.1 - Example
Step9: The DataFrame contains records for 948 players in the 2012 regular season of Major League Baseball.
Step10: Model (Kruschke, 2015)
Step11: Figure 9.17
Step12: Posterior distributions of the omega_c parameters after sampling.
|
3,612
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides[:24*10].plot(x='dteday', y='cnt')
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
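Storing `mean` and `std` per column makes the scaling invertible, which is what lets the predictions be converted back to ride counts later. A minimal sketch of the round trip on illustrative data (not the actual bike-sharing columns):

```python
import numpy as np

x = np.array([10.0, 12.0, 14.0, 18.0])
mean, std = x.mean(), x.std()  # note: pandas .std() defaults to ddof=1, numpy to ddof=0
scaled = (x - mean) / std
restored = scaled * std + mean

print(np.allclose(restored, x))  # True: standardization is invertible
```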
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
self.activation_function = lambda x: 1 / (1 + np.exp(-x))
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
#### Implement the forward pass here ####
### Forward pass ###
# Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# Output layer: regression, so the output activation is the identity f(x) = x
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# Output error
output_errors = (targets - final_outputs) # Output layer error is the difference between desired target and actual output.
# Backpropagated error
hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors) # errors propagated to the hidden layer
hidden_grad = hidden_outputs * (1.0 - hidden_outputs) # hidden layer gradients (sigmoid derivative)
# Update the weights
self.weights_hidden_to_output += self.lr * np.dot(output_errors, hidden_outputs.T) # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * np.dot(hidden_grad * hidden_errors, inputs.T) # update input-to-hidden weights with gradient descent step
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# Hidden layer
hidden_inputs = np.dot(inputs.T, self.weights_input_to_hidden.T
)# signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs)# signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output.T) # signals into final output layer
final_outputs = final_inputs# signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
import sys
### Set the hyperparameters here ###
epochs = 600
learning_rate = 0.001
hidden_nodes = 24
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.loc[batch].values,
train_targets.loc[batch]['cnt']): # .loc replaces the deprecated .ix indexer
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
# plt.ylim(ymax=0.5)
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions, label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday']) # .loc replaces the deprecated .ix indexer
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare the data
Step2: Checking out the data
Step3: Dummy variables
Step4: Scaling target variables
Step5: Splitting the data into training, testing, and validation sets
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Step8: Training the network
Step9: Check out your predictions
Step10: Thinking about your results
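The chronological split in Step 6 can be sketched on toy data — the `cnt` column and the 20-row holdout below are placeholders standing in for the notebook's real feature frame and test window:

```python
import numpy as np
import pandas as pd

# Toy hourly series standing in for the bike-sharing data.
data = pd.DataFrame({'cnt': np.arange(100, dtype=float)})

# Time series data is never shuffled: train on the past and
# hold out the most recent slice for validation.
train_data = data[:-20]
val_data = data[-20:]
```

The key property is that every training row precedes every validation row.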
|
3,613
|
<ASSISTANT_TASK:>
Python Code:
# Downloading data - you get this for free :-)
import requests
import os
def download_book(url):
"""Download book given a url to a book in .txt format and return it as a string."""
text_request = requests.get(url)
text = text_request.text
return text
book_urls = dict()
book_urls['HuckFinn'] = 'http://www.gutenberg.org/cache/epub/7100/pg7100.txt'
book_urls['Macbeth'] = 'http://www.gutenberg.org/cache/epub/1533/pg1533.txt'
book_urls['AnnaKarenina'] = 'http://www.gutenberg.org/files/1399/1399-0.txt'
if not os.path.isdir('../Data/books/'):
os.mkdir('../Data/books/')
for name, url in book_urls.items():
text = download_book(url)
with open('../Data/books/'+name+'.txt', 'w', encoding='utf-8') as outfile:
outfile.write(text)
from nltk.tokenize import sent_tokenize, word_tokenize
text = 'Python is a programming language. It was created by Guido van Rossum.'
for sent in sent_tokenize(text):
print('SENTENCE', sent)
tokens = word_tokenize(sent)
print('TOKENS', tokens)
import os
basename = os.path.basename('../Data/books/HuckFinn.txt')
book = os.path.splitext(basename)[0] # strip('.txt') removes characters from both ends, not the suffix
print(book)
import operator
token2freq = {'a': 1000, 'the': 100, 'cow' : 5}
for token, freq in sorted(token2freq.items(),
key=operator.itemgetter(1),
reverse=True):
print(token, freq)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Assignment 3b
Step2: Encoding issues with txt files
Step3: 2.b) Store the function in the Python module utils.py. Import it in analyze.py.
Step4: Please note that Exercises 3 and 4, respectively, are designed to be difficult. You will have to combine what you have learnt so far to complete these exercises.
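For Step 2, a minimal sketch of reading the downloaded books with an explicit encoding — `load_text` is a hypothetical helper name, not part of the assignment's utils.py:

```python
def load_text(path, encoding='utf-8'):
    # An explicit encoding avoids the platform-default pitfalls that
    # raise UnicodeDecodeError on Project Gutenberg texts; undecodable
    # bytes are replaced rather than crashing the whole analysis.
    with open(path, encoding=encoding, errors='replace') as infile:
        return infile.read()
```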
|
3,614
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
# we can import the CSV data as a numpy rec array
from matplotlib.pylab import csv2rec
trends = csv2rec('trends.csv')
plot(trends.week_start, trends.spring_break, label='spring break')
plot(trends.week_start, trends.textbooks, label='textbooks')
plot(trends.week_start, trends.norad, label='norad')
plot(trends.week_start, trends.skiing, label='skiing')
legend()
# create vector of year and month numbers
dates = trends.week_start
yrs = zeros_like(dates)
wks = zeros_like(dates)
for i in range(len(dates)):
yrs[i] = dates[i].year
wks[i] = dates[i].isocalendar()[1]
# For each year, list week numbers corresponding to maximum search values
trend = trends.global_warming
for yr in range(2004,2016):
idx = flatnonzero(yrs==yr)
print(yr, wks[flatnonzero(trend[idx] == max(trend[idx]))])
# study scatter about median values
def std_median(datums):
return sqrt( sum( (datums - median(datums))**2 ) )
print("spring break: ", std_median(trends.spring_break))
print("textbooks: ", std_median(trends.textbooks))
print("skiing:", std_median(trends.skiing))
print("norad:", std_median(trends.norad))
print("global warming:", std_median(trends.global_warming))
result = np.correlate(trends.norad,trends.spring_break,mode='full')
gap = arange(result.size) - result.size//2 # define the lag axis before plotting
plot(gap,result)
print(gap[flatnonzero(result==max(result))])
result = np.correlate(trends.textbooks,trends.spring_break, mode='full')
gap = arange(result.size) - result.size//2
plot(gap,result)
print(gap[flatnonzero(result==max(result))])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Use the "trends.csv" file and csv2rec() to import the data and reproduce this plot
Step2: 2. Determine in which week of each year (for all five search trends including "global_warming") that search term reached its peak. What trends can you spot with any of the terms?
Step3: 3. Which term has the largest scatter about its median value? Which term has the smallest scatter? The scatter around the median value can be found using
Step4: 4. Determine the time lag, in weeks, that maximizes the cross-correlation between "skiing" and "spring break". Try this also for "norad" and "spring break".
|
3,615
|
<ASSISTANT_TASK:>
Python Code:
model simple()
S1 -> S2; k1*S1
k1 = 0.1
S1 = 10
end
simple.simulate(0, 50, 100)
simple.plot()
model advanced()
# Create two compartments
compartment compA=1, compB=0.5 # B is half the volume of A
species A in compA, B in compB
# Use the label `J0` for the reaction
J0: A -> B; k*A
# C is defined by an assignment rule
species C
C := sin(2*time/3.14) # a sine wave
k = 0.1
A = 10
# Event: half-way through the simulation,
# add a bolus of A
at time>=5: A = A+10
end
advanced.simulate(0, 10, 100)
advanced.plot()
model simple()
S1 -> S2; k1*S1
k1 = 0.1
S1 = 10
end
# Models
model1 = model "simple"
# Simulations
sim1 = simulate uniform(0, 50, 1000)
// Tasks
task1 = run sim1 on model1
// Outputs
plot "COMBINE Archive Plot" time vs S1, S2
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id="ex2"></a>
Step2: <a id="ex3"></a>
|
3,616
|
<ASSISTANT_TASK:>
Python Code:
cc1 = block.FECCyclic('1011')
# Generate 16 distinct codewords
codewords = zeros((16,7),dtype=int)
x = zeros((16,4))
for i in range(0,16):
xbin = block.binary(i,4)
xbin = array(list(xbin)).astype(int)
x[i,:] = xbin
x = reshape(x,size(x)).astype(int)
codewords = cc1.cyclic_encoder(x)
print(reshape(codewords,(16,7)))
# introduce 1 bit error into each code word and decode
codewords = reshape(codewords,(16,7))
for i in range(16):
error_pos = i % 7 # cycle the single bit error through all n = 7 positions
codewords[i,error_pos] = (codewords[i,error_pos] +1) % 2
codewords = reshape(codewords,size(codewords))
decoded_blocks = cc1.cyclic_decoder(codewords)
print(reshape(decoded_blocks,(16,4)))
cc1 = block.FECCyclic('101001')
N_blocks_per_frame = 2000
N_bits_per_frame = N_blocks_per_frame*cc1.k
EbN0 = 6
total_bit_errors = 0
total_bit_count = 0
while total_bit_errors < 100:
# Create random 0/1 bits
x = randint(0,2,N_bits_per_frame)
y = cc1.cyclic_encoder(x)
# Add channel noise to bits and scale to +/- 1
yn = dc.cpx_awgn(2*y-1,EbN0-10*log10(cc1.n/cc1.k),1) # Channel SNR is dB less
# Scale back to 0 and 1
yn = ((sign(yn.real)+1)/2).astype(int)
z = cc1.cyclic_decoder(yn)
# Count bit errors
bit_count, bit_errors = dc.bit_errors(x,z)
total_bit_errors += bit_errors
total_bit_count += bit_count
print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' %\
(total_bit_count, total_bit_errors,\
total_bit_errors/total_bit_count))
print('*****************************************************')
print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' %\
(total_bit_count, total_bit_errors,\
total_bit_errors/total_bit_count))
SNRdB = arange(0,12,.1)
#SNRdB = arange(9.4,9.6,0.1)
Pb_uc = block.block_single_error_Pb_bound(3,SNRdB,False)
Pb_c_3 = block.block_single_error_Pb_bound(3,SNRdB)
Pb_c_4 = block.block_single_error_Pb_bound(4,SNRdB)
Pb_c_5 = block.block_single_error_Pb_bound(5,SNRdB)
figure(figsize=(5,5))
semilogy(SNRdB,Pb_uc,'k-')
semilogy(SNRdB,Pb_c_3,'c--')
semilogy(SNRdB,Pb_c_4,'m--')
semilogy(SNRdB,Pb_c_5,'g--')
semilogy([4,5,6,7,8,9],[1.44e-2,5.45e-3,2.37e-3,6.63e-4,1.33e-4,1.31e-5],'cs')
semilogy([5,6,7,8],[4.86e-3,1.16e-3,2.32e-4,2.73e-5],'ms')
semilogy([5,6,7,8],[4.31e-3,9.42e-4,1.38e-4,1.15e-5],'gs')
axis([0,12,1e-10,1e0])
title('Cyclic code BEP')
xlabel(r'$E_b/N_0$ (dB)')
ylabel(r'Bit Error Probability')
legend(('Uncoded BPSK','(7,4), hard',\
'(15,11), hard', '(31,26), hard',\
'(7,4) sim', '(15,11) sim', \
'(31,26) sim'),loc='lower left')
grid();
hh1 = block.FECHamming(3)
print('k = ' + str(hh1.k))
print('n = ' + str(hh1.n))
print('H = \n' + str(hh1.H))
print('G = \n' + str(hh1.G))
hh1 = block.FECHamming(5)
N_blocks_per_frame = 20000
N_bits_per_frame = N_blocks_per_frame*hh1.k
EbN0 = 8
total_bit_errors = 0
total_bit_count = 0
while total_bit_errors < 100:
# Create random 0/1 bits
x = randint(0,2,N_bits_per_frame)
y = hh1.hamm_encoder(x)
# Add channel noise to bits and scale to +/- 1
yn = dc.cpx_awgn(2*y-1,EbN0-10*log10(hh1.n/hh1.k),1) # Channel SNR is dB less
# Scale back to 0 and 1
yn = ((sign(yn.real)+1)/2).astype(int)
z = hh1.hamm_decoder(yn)
# Count bit errors
bit_count, bit_errors = dc.bit_errors(x,z)
total_bit_errors += bit_errors
total_bit_count += bit_count
print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' %\
(total_bit_count, total_bit_errors,\
total_bit_errors/total_bit_count))
print('*****************************************************')
print('Bits Received = %d, Bit errors = %d, BEP = %1.2e' %\
(total_bit_count, total_bit_errors,\
total_bit_errors/total_bit_count))
SNRdB = arange(0,12,.1)
Pb_uc = block.block_single_error_Pb_bound(3,SNRdB,False)
Pb_c_3 = block.block_single_error_Pb_bound(3,SNRdB)
Pb_c_4 = block.block_single_error_Pb_bound(4,SNRdB)
Pb_c_5 = block.block_single_error_Pb_bound(5,SNRdB)
figure(figsize=(5,5))
semilogy(SNRdB,Pb_uc,'k-')
semilogy(SNRdB,Pb_c_3,'c--')
semilogy(SNRdB,Pb_c_4,'m--')
semilogy(SNRdB,Pb_c_5,'g--')
semilogy([5,6,7,8,9,10],[6.64e-3,2.32e-3,5.25e-4,1.16e-4,1.46e-5,1.19e-6],'cs')
semilogy([5,6,7,8,9],[4.68e-3,1.19e-3,2.48e-4,3.6e-5,1.76e-6],'ms')
semilogy([5,6,7,8,9],[4.42e-3,1.11e-3,1.41e-4,1.43e-5,6.73e-7],'gs')
axis([0,12,1e-10,1e0])
title('Hamming code BEP')
xlabel(r'$E_b/N_0$ (dB)')
ylabel(r'Bit Error Probability')
legend(('Uncoded BPSK','(7,4), hard',\
'(15,11), hard', '(31,26), hard',\
'(7,4) sim', '(15,11) sim', \
'(31,26) sim'),loc='lower left')
grid();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: After the cyclic code object cc1 is created, the cc1.cyclic_encoder method can be used to encode source data bits. In the following example, we generate 16 distinct source symbols to get 16 distinct channel symbol codewords using the cyclic_encoder method. The cyclic_encoder method takes an array of source bits as a parameter. The array of source bits must have a length that is a multiple of $k$. Otherwise, the method will throw an error.
Step2: Now, a bit error is introduced into each of the codewords. Then, the codewords with the error are decoded using the cyclic_decoder method. The cyclic_decoder method takes an array of codewords of length $n$ as a parameter and returns an array of source bits. Even with 1 error introduced into each codeword, all of the original source bits are still decoded properly.
Step3: The following example generates many random source symbols. It then encodes the symbols using the cyclic encoder. It then simulates a channel by adding noise. It then implements hard decisions on each of the incoming bits and puts the received noisy bits into the cyclic decoder. Source bits are then returned and errors are counted until 100 bit errors are received. Once 100 bit errors are received, the bit error probability is calculated. This code can be run at a variety of SNRs and with various code rates.
Step4: There is a function in the fec_block module called block_single_error_Pb_bound that can be used to generate the theoretical bit error probability bounds for single error correction block codes. Measured bit error probabilities from the previous example were recorded to compare to the bounds.
Step5: These plots show that the simulated bit error probability is very close to the theoretical bit error probabilities.
Step6: $k$ and $n$ are calculated form the number of parity checks $j$ and can be accessed by hh1.k and hh1.n. The $j$ x $n$ parity-check matrix $H$ and the $k$ x $n$ generator matrix $G$ can be accessed by hh1.H and hh1.G. These are exactly as described previously.
Step7: The fec_hamming class has an encoder method called hamm_encoder. This method works the same way as the cyclic encoder. It takes an array of source bits with a length that is a multiple of $k$ and returns an array of codewords. This class has another method called hamm_decoder which can decode an array of codewords. The array of codewords must have a length that is a multiple of $n$. The following example generates random source bits, encodes them using a Hamming encoder, simulates transmitting them over a channel, uses hard decisions after the receiver to get a received array of codewords, and decodes the codewords using the Hamming decoder. It runs until it counts 100 bit errors and then calculates the bit error probability. This can be used to simulate Hamming codes with different rates (different numbers of parity checks) at different SNRs.
Step8: The fec_block.block_single_error_Pb_bound function can also be used to generate the bit error probability bounds for Hamming codes. The following example generates theoretical bit error probability bounds for Hamming codes and compares them with simulated bit error probabilities from the previous examples.
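Since both encoders require the source-bit array length to be a multiple of $k$, a minimal padding helper can be sketched as follows — `pad_to_multiple` is hypothetical and not part of scikit-dsp-comm:

```python
import numpy as np

def pad_to_multiple(bits, k):
    # Zero-pad a bit array so its length is a multiple of k, as the
    # cyclic and Hamming encoders require; also return the pad count
    # so the trailing bits can be dropped again after decoding.
    n_pad = (-len(bits)) % k
    return np.concatenate([bits, np.zeros(n_pad, dtype=int)]), n_pad
```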
|
3,617
|
<ASSISTANT_TASK:>
Python Code:
#!conda install -y numpy pandas matplotlib seaborn statsmodels
%matplotlib inline
import seaborn as sns
import pandas as pd
sns.set(style="ticks")
df = sns.load_dataset("anscombe")
type(df)
df.head()
df[df.dataset == 'I']
groups = ['I', 'II', 'III', 'IV']
for group in groups:
print(group)
print(df[df.dataset == group].describe())
print()
for g in groups:
print(df[df.dataset == g]['x'].corr(df[df.dataset == g]['y']))
sns.lmplot(x="x", y="y", col="dataset", hue="dataset", data=df,
col_wrap=2, ci=None, palette="muted", size=4,
scatter_kws={"s": 50, "alpha": 1}, fit_reg=False)
sns.lmplot(x="x", y="y", col="dataset", hue="dataset", data=df,
col_wrap=2, ci=None, palette="muted", size=4)
sns.lmplot(x="x", y="y", col="dataset", hue="dataset", data=df,
col_wrap=2, ci=95, palette="muted", size=4)
sns.lmplot(x="x", y="y", data=df[df.dataset == 'II'],
order=2, ci=95, scatter_kws={"s": 80});
sns.lmplot(x="x", y="y", data=df[df.dataset == 'III'],
robust=True, ci=None, scatter_kws={"s": 80});
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the Anscombe's quartet dataset
Step2: And df is... a pandas dataframe
Step3: that we can print, plot, ...
Step4: Print just first dataset
Step5: Basic statistical parameters
Step6: Let's compare the correlation coefficient for each dataset
Step7: Plot
Step8: Linear regression
Step9: It's the same line for all datasets
Step10: Key message
Step11: In the presence of outliers, it can be useful to fit a robust regression, which uses a different loss function to downweight relatively large residuals
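The per-dataset correlation loop in Step 6 can also be written with a pandas groupby; this sketch uses a toy frame with the same `dataset`/`x`/`y` column layout as Anscombe's quartet, not the real data:

```python
import pandas as pd

# Toy stand-in for the Anscombe frame: same column layout.
df = pd.DataFrame({
    'dataset': ['I', 'I', 'I', 'II', 'II', 'II'],
    'x': [1.0, 2.0, 3.0, 1.0, 2.0, 3.0],
    'y': [1.0, 2.0, 3.0, 3.0, 2.0, 1.0],
})

# One correlation coefficient per dataset, without an explicit loop.
corr_by_group = df.groupby('dataset')[['x', 'y']].apply(lambda g: g['x'].corr(g['y']))
```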
|
3,618
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Eric Larson <larson.eric.d@gmail.com>
# Denis Engemannn <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
from numpy.random import randn
import matplotlib.pyplot as plt
import mne
from mne.stats import (spatio_temporal_cluster_test, f_threshold_mway_rm,
f_mway_rm, summarize_clusters_stc)
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
subjects_dir = data_path + '/subjects'
src_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif'
tmin = -0.2
tmax = 0.3 # Use a lower tmax to reduce multiple comparisons
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads')
# we'll load all four conditions that make up the 'two ways' of our ANOVA
event_id = dict(l_aud=1, r_aud=2, l_vis=3, r_vis=4)
reject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
# Equalize trial counts to eliminate bias (which would otherwise be
# introduced by the abs() performed below)
epochs.equalize_event_counts(event_id)
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE, sLORETA, or eLORETA)
inverse_operator = read_inverse_operator(fname_inv)
# we'll only use one hemisphere to speed up this example
# instead of a second vertex array we'll pass an empty array
sample_vertices = [inverse_operator['src'][0]['vertno'], np.array([], int)]
# Let's average and compute inverse, then resample to speed things up
conditions = []
for cond in ['l_aud', 'r_aud', 'l_vis', 'r_vis']: # order is important
evoked = epochs[cond].average()
evoked.resample(50, npad='auto')
condition = apply_inverse(evoked, inverse_operator, lambda2, method)
# Let's only deal with t > 0, cropping to reduce multiple comparisons
condition.crop(0, None)
conditions.append(condition)
tmin = conditions[0].tmin
tstep = conditions[0].tstep
n_vertices_sample, n_times = conditions[0].lh_data.shape
n_subjects = 7
print('Simulating data for %d subjects.' % n_subjects)
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X = randn(n_vertices_sample, n_times, n_subjects, 4) * 10
for ii, condition in enumerate(conditions):
X[:, :, :, ii] += condition.lh_data[:, :, np.newaxis]
# Read the source space we are morphing to (just left hemisphere)
src = mne.read_source_spaces(src_fname)
fsave_vertices = [src[0]['vertno'], []]
morph_mat = mne.compute_source_morph(
src=inverse_operator['src'], subject_to='fsaverage',
spacing=fsave_vertices, subjects_dir=subjects_dir, smooth=20).morph_mat
morph_mat = morph_mat[:, :n_vertices_sample] # just left hemi from src
n_vertices_fsave = morph_mat.shape[0]
# We have to change the shape for the dot() to work properly
X = X.reshape(n_vertices_sample, n_times * n_subjects * 4)
print('Morphing data.')
X = morph_mat.dot(X) # morph_mat is a sparse matrix
X = X.reshape(n_vertices_fsave, n_times, n_subjects, 4)
X = np.transpose(X, [2, 1, 0, 3]) # shape: subjects x time x vertices x conditions
X = [np.squeeze(x) for x in np.split(X, 4, axis=-1)]
factor_levels = [2, 2]
effects = 'A:B'
# Tell the ANOVA not to compute p-values which we don't need for clustering
return_pvals = False
# a few more convenient bindings
n_times = X[0].shape[1]
n_conditions = 4
def stat_fun(*args):
# get f-values only.
return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels,
effects=effects, return_pvals=return_pvals)[0]
# as we only have one hemisphere we only need half the connectivity
print('Computing connectivity.')
connectivity = mne.spatial_src_connectivity(src[:1])
# Now let's actually do the clustering. Please relax, on a small
# notebook and one single thread only this will take a couple of minutes ...
pthresh = 0.0005
f_thresh = f_threshold_mway_rm(n_subjects, factor_levels, effects, pthresh)
# To speed things up a bit we will ...
n_permutations = 128 # ... run fewer permutations (reduces sensitivity)
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu = \
spatio_temporal_cluster_test(X, connectivity=connectivity, n_jobs=1,
threshold=f_thresh, stat_fun=stat_fun,
n_permutations=n_permutations,
buffer_size=None)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
print('Visualizing clusters.')
# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration
subjects_dir = op.join(data_path, 'subjects')
# The brighter the color, the stronger the interaction between
# stimulus modality and stimulus location
brain = stc_all_cluster_vis.plot(subjects_dir=subjects_dir, views='lat',
time_label='Duration significant (ms)',
clim=dict(kind='value', lims=[0, 1, 40]))
brain.save_image('cluster-lh.png')
brain.show_view('medial')
inds_t, inds_v = [(clusters[cluster_ind]) for ii, cluster_ind in
enumerate(good_cluster_inds)][0] # first cluster
times = np.arange(X[0].shape[1]) * tstep * 1e3
plt.figure()
colors = ['y', 'b', 'g', 'purple']
event_ids = ['l_aud', 'r_aud', 'l_vis', 'r_vis']
for ii, (condition, color, eve_id) in enumerate(zip(X, colors, event_ids)):
# extract time course at cluster vertices
condition = condition[:, :, inds_v]
# normally we would normalize values across subjects but
# here we use data from the same subject so we're good to just
# create average time series across subjects and vertices.
mean_tc = condition.mean(axis=2).mean(axis=0)
std_tc = condition.std(axis=2).std(axis=0)
plt.plot(times, mean_tc.T, color=color, label=eve_id)
plt.fill_between(times, mean_tc + std_tc, mean_tc - std_tc, color='gray',
alpha=0.5, label='')
ymin, ymax = mean_tc.min() - 5, mean_tc.max() + 5
plt.xlabel('Time (ms)')
plt.ylabel('Activation (F-values)')
plt.xlim(times[[0, -1]])
plt.ylim(ymin, ymax)
plt.fill_betweenx((ymin, ymax), times[inds_t[0]],
times[inds_t[-1]], color='orange', alpha=0.3)
plt.legend()
plt.title('Interaction between stimulus-modality and location.')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Read epochs for all channels, removing a bad one
Step3: Transform to source space
Step4: Transform to common cortical space
Step5: It's a good idea to spatially smooth the data, and for visualization
Step6: Now we need to prepare the group matrix for the ANOVA statistic. To make the
Step7: Prepare function for arbitrary contrast
Step8: Finally we will pick the interaction effect by passing 'A
Step9: A stat_fun must deal with a variable number of input arguments.
Step10: Compute clustering statistic
Step11: Visualize the clusters
Step12: Finally, let's investigate interaction effect by reconstructing the time
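Step 9's point about variable arguments can be illustrated without MNE. The sketch below only mimics the array reshaping that the real `stat_fun` performs before calling `f_mway_rm`; the returned array is a placeholder, not F-values:

```python
import numpy as np

def stat_fun_demo(*args):
    # The permutation-cluster machinery calls the stat_fun with one
    # array per condition (subjects x observations).  Stacking the
    # tuple and swapping the first two axes yields the layout
    # (subjects, conditions, observations) that f_mway_rm expects.
    return np.swapaxes(np.asarray(args), 1, 0)

a = np.zeros((7, 5))  # condition 1: 7 subjects, 5 observations
b = np.ones((7, 5))   # condition 2
out = stat_fun_demo(a, b)
```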
|
3,619
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# use seaborn plotting defaults
import seaborn as sns; sns.set()
from sklearn.datasets import make_blobs # the samples_generator module was removed in newer scikit-learn
X, y = make_blobs(n_samples=50, centers=2,
random_state=0, cluster_std=0.60)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn');
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plt.plot([0.6], [2.1], 'x', color='red', markeredgewidth=2, markersize=10)
for m, b in [(1, 0.65), (0.5, 1.6), (-0.2, 2.9)]:
plt.plot(xfit, m * xfit + b, '-k')
plt.xlim(-1, 3.5);
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:
yfit = m * xfit + b
plt.plot(xfit, yfit, '-k')
plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none',
color='#AAAAAA', alpha=0.4)
plt.xlim(-1, 3.5);
from sklearn.svm import SVC # "Support vector classifier"
model = SVC(kernel='linear', C=1E10)
model.fit(X, y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Motivating Support Vector Machines
Step2: A linear discriminative classifier would attempt to draw a straight line separating the two sets of data, and thereby create a model for classification. For two dimensional data like that shown here, this is a task we could do by hand. But immediately we see a problem
Step3: These are three very different separators which, nevertheless, perfectly discriminate between these samples. Depending on which you choose, a new data point (e.g., the one marked by the "X" in this plot) will be assigned a different label! Evidently our simple intuition of "drawing a line between classes" is not enough, and we need to think a bit deeper.
Step4: In support vector machines, the line that maximizes this margin is the one we will choose as the optimal model. Support vector machines are an example of such a maximum margin estimator.
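The margin idea in Step 4 can be made concrete with plain numpy; `margin` is a hypothetical helper that computes the smallest perpendicular distance from a set of points to a candidate line y = m*x + b, which is the quantity the maximum-margin estimator maximizes:

```python
import numpy as np

def margin(points, m, b):
    # Perpendicular distance from each point (x, y) to the line
    # y = m*x + b; the margin of the line is the smallest distance.
    x, y = points[:, 0], points[:, 1]
    dist = np.abs(y - (m * x + b)) / np.sqrt(1 + m ** 2)
    return dist.min()
```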
|
3,620
|
<ASSISTANT_TASK:>
Python Code:
import ROOT
treename = "Events"
filename = "root://eospublic.cern.ch//eos/opendata/cms/derived-data/AOD2NanoAODOutreachTool/Run2012BC_DoubleMuParked_Muons.root"
df = ROOT.RDataFrame(treename, filename)
# Take only the first 1M events
df_range = df.Range(1000000)
# Filter the events: point 1 above maps to a cut on nMuon, point 2 to a
# comparison of the two entries of Muon_charge
df_2mu = df_range.Filter("nMuon == 2", "Events with exactly two muons")
df_oc = df_2mu.Filter("Muon_charge[0] != Muon_charge[1]", "Muons with opposite charge")
df_mass = df_oc.Define("Dimuon_mass", "ROOT::VecOps::InvariantMass(Muon_pt, Muon_eta, Muon_phi, Muon_mass)")
# These are the parameters of the histogram object constructor, passed
# as a tuple to the `Histo1D` operation in the order:
# name, title, nbins, low, up
nbins = 30000
low = 0.25
up = 300
histo_name = "Dimuon_mass"
histo_title = histo_name
h = df_mass.Histo1D((histo_name, histo_title, nbins, low, up), "Dimuon_mass")
report = df.Report() # book a cut-flow report of all the filters declared above
%%time
ROOT.gStyle.SetOptStat(0)
ROOT.gStyle.SetTextFont(42)
c = ROOT.TCanvas("c", "", 800, 700)
c.SetLogx()
c.SetLogy()
h.SetTitle("")
h.GetXaxis().SetTitle("m_{#mu#mu} (GeV)")
h.GetXaxis().SetTitleSize(0.04)
h.GetYaxis().SetTitle("N_{Events}")
h.GetYaxis().SetTitleSize(0.04)
h.Draw()
label = ROOT.TLatex()
label.SetNDC(True)
label.SetTextSize(0.040)
label.DrawLatex(0.100, 0.920, "#bf{CMS Open Data}")
label.SetTextSize(0.030)
label.DrawLatex(0.500, 0.920, "#sqrt{s} = 8 TeV, L_{int} = 11.6 fb^{-1}")
%jsroot on
c.Draw()
report.Print()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a ROOT dataframe in Python
Step2: Run only on a part of the dataset
Step3: Filter relevant events for this analysis
Step4: Perform complex operations in Python, efficiently!
Step5: Make a histogram of the newly created column
Step6: Book a Report of the dataframe filters
Step7: Start data processing
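As a cross-check of Step 4, the invariant mass that `ROOT::VecOps::InvariantMass` computes per muon pair can be reproduced in plain numpy from the collider coordinates (pT, eta, phi, m); this sketch is an illustration, not part of the analysis above:

```python
import numpy as np

def invariant_mass(pt1, eta1, phi1, m1, pt2, eta2, phi2, m2):
    # Build (E, px, py, pz) four-vectors from collider coordinates and
    # return the invariant mass of the pair.
    def four_vec(pt, eta, phi, m):
        px, py = pt * np.cos(phi), pt * np.sin(phi)
        pz = pt * np.sinh(eta)
        e = np.sqrt(px**2 + py**2 + pz**2 + m**2)
        return e, px, py, pz
    e1, px1, py1, pz1 = four_vec(pt1, eta1, phi1, m1)
    e2, px2, py2, pz2 = four_vec(pt2, eta2, phi2, m2)
    s = (e1 + e2)**2 - (px1 + px2)**2 - (py1 + py2)**2 - (pz1 + pz2)**2
    return np.sqrt(np.maximum(s, 0.0))
```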
|
3,621
|
<ASSISTANT_TASK:>
Python Code:
import urllib.request
fuel_pin_url = 'https://tinyurl.com/y3ugwz6w' # 1.2 MB
teapot_url = 'https://tinyurl.com/y4mcmc3u' # 29 MB
def download(url):
"""Helper function for retrieving dagmc models"""
u = urllib.request.urlopen(url)
if u.status != 200:
raise RuntimeError("Failed to download file.")
# save file as dagmc.h5m
with open("dagmc.h5m", 'wb') as f:
f.write(u.read())
%matplotlib inline
from IPython.display import Image
import openmc
# materials
u235 = openmc.Material(name="fuel")
u235.add_nuclide('U235', 1.0, 'ao')
u235.set_density('g/cc', 11)
u235.id = 40
water = openmc.Material(name="water")
water.add_nuclide('H1', 2.0, 'ao')
water.add_nuclide('O16', 1.0, 'ao')
water.set_density('g/cc', 1.0)
water.add_s_alpha_beta('c_H_in_H2O')
water.id = 41
mats = openmc.Materials([u235, water])
mats.export_to_xml()
Image("./images/cylinder_mesh.png", width=350)
download(fuel_pin_url)
settings = openmc.Settings()
settings.dagmc = True
settings.batches = 10
settings.inactive = 2
settings.particles = 5000
settings.export_to_xml()
p = openmc.Plot()
p.width = (25.0, 25.0)
p.pixels = (400, 400)
p.color_by = 'material'
p.colors = {u235: 'yellow', water: 'blue'}
openmc.plot_inline(p)
settings.source = openmc.Source(space=openmc.stats.Box([-4., -4., -4.],
[ 4., 4., 4.]))
settings.export_to_xml()
tally = openmc.Tally()
tally.scores = ['total']
tally.filters = [openmc.CellFilter(1)]
tallies = openmc.Tallies([tally])
tallies.export_to_xml()
openmc.run()
download(teapot_url)
Image("./images/teapot.jpg", width=600)
iron = openmc.Material(name="iron")
iron.add_nuclide("Fe54", 0.0564555822608)
iron.add_nuclide("Fe56", 0.919015287728)
iron.add_nuclide("Fe57", 0.0216036861685)
iron.add_nuclide("Fe58", 0.00292544384231)
iron.set_density("g/cm3", 7.874)
mats = openmc.Materials([iron, water])
mats.export_to_xml()
p = openmc.Plot()
p.basis = 'xz'
p.origin = (0.0, 0.0, 0.0)
p.width = (30.0, 20.0)
p.pixels = (450, 300)
p.color_by = 'material'
p.colors = {iron: 'gray', water: 'blue'}
openmc.plot_inline(p)
p.width = (18.0, 6.0)
p.basis = 'xz'
p.origin = (10.0, 0.0, 5.0)
p.pixels = (600, 200)
p.color_by = 'material'
openmc.plot_inline(p)
settings = openmc.Settings()
settings.dagmc = True
settings.batches = 10
settings.particles = 5000
settings.run_mode = "fixed source"
src_locations = ((-4.0, 0.0, -2.0),
( 4.0, 0.0, -2.0),
( 4.0, 0.0, -6.0),
(-4.0, 0.0, -6.0),
(10.0, 0.0, -4.0),
(-8.0, 0.0, -4.0))
# we'll use the same energy for each source
src_e = openmc.stats.Discrete(x=[12.0,], p=[1.0,])
# create source for each location
sources = []
for loc in src_locations:
src_pnt = openmc.stats.Point(xyz=loc)
src = openmc.Source(space=src_pnt, energy=src_e)
sources.append(src)
src_str = 1.0 / len(sources)
for source in sources:
source.strength = src_str
settings.source = sources
settings.export_to_xml()
mesh = openmc.RegularMesh()
mesh.dimension = (120, 1, 40)
mesh.lower_left = (-20.0, 0.0, -10.0)
mesh.upper_right = (20.0, 1.0, 4.0)
mesh_filter = openmc.MeshFilter(mesh)
pot_filter = openmc.CellFilter([1])
pot_tally = openmc.Tally()
pot_tally.filters = [mesh_filter, pot_filter]
pot_tally.scores = ['flux']
water_filter = openmc.CellFilter([5])
water_tally = openmc.Tally()
water_tally.filters = [mesh_filter, water_filter]
water_tally.scores = ['flux']
tallies = openmc.Tallies([pot_tally, water_tally])
tallies.export_to_xml()
openmc.run()
sp = openmc.StatePoint("statepoint.10.h5")
water_tally = sp.get_tally(scores=['flux'], id=water_tally.id)
water_flux = water_tally.mean
water_flux.shape = (40, 120)
water_flux = water_flux[::-1, :]
pot_tally = sp.get_tally(scores=['flux'], id=pot_tally.id)
pot_flux = pot_tally.mean
pot_flux.shape = (40, 120)
pot_flux = pot_flux[::-1, :]
del sp
from matplotlib import pyplot as plt
fig = plt.figure(figsize=(18, 16))
sub_plot1 = plt.subplot(121, title="Kettle Flux")
sub_plot1.imshow(pot_flux)
sub_plot2 = plt.subplot(122, title="Water Flux")
sub_plot2.imshow(water_flux)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using CAD-Based Geometries
Step2: This notebook is intended to demonstrate how DagMC problems are run in OpenMC. For more information on how DagMC models are created, please refer to the DagMC User's Guide.
Step3: To start, we'll be using a simple U235 fuel pin surrounded by a water moderator, so let's create those materials.
Step4: Now let's get our DAGMC geometry. We'll be using prefabricated models in this notebook. For information on how to create your own DAGMC models, you can refer to the instructions here.
Step5: First we'll need to grab some pre-made DagMC models.
Step6: OpenMC expects that the model has the name "dagmc.h5m" so we'll name the file that and indicate to OpenMC that a DAGMC geometry is being used by setting the settings.dagmc attribute to True.
Step7: Unlike conventional geometries in OpenMC, we really have no way of knowing what our model looks like at this point. Thankfully DagMC geometries can be plotted just like any other OpenMC geometry to give us an idea of what we're now working with.
Step8: Now that we've had a chance to examine the model a bit, we can finish applying our settings and add a source.
Step9: Tallies work in the same way when using DAGMC geometries too. We'll add a tally on the fuel cell here.
Step10: Note
Step11: More Complicated Geometry
Step12: Our teapot is made out of iron, so we'll want to create that material and make sure it is in our materials.xml file.
Step13: To make sure we've updated the file correctly, let's make a plot of the teapot.
Step14: Here we start to see some of the advantages CAD geometries provide. This particular file was pulled from GrabCAD and pushed through the DAGMC workflow without modification (other than the addition of material assignments). It would take a considerable amount of time to create a model like this using CSG!
Step15: Now let's brew some tea! ... using a very hot neutron source. We'll use some well-placed point sources distributed throughout the model.
Step16: ...and set up a couple of mesh tallies. One for the kettle, and one for the water inside.
Step17: Note that the performance is significantly lower than our pincell model due to the increased complexity of the model, but it allows us to examine tally results like these
|
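As a supplementary sketch of the tally post-processing in this entry: OpenMC returns a mesh tally's `mean` as a flat column vector, which the notebook reshapes to `(40, 120)` and flips so the first image row is the top of the model. The same reshaping can be checked with synthetic data (the mesh dimensions are taken from the `RegularMesh` defined above; the values are illustrative):

```python
import numpy as np

# Stand-in for tally.mean: a flat column vector over a 120 x 1 x 40 mesh.
nx, nz = 120, 40
flat = np.arange(nx * nz, dtype=float).reshape(-1, 1)

flux = flat.copy()
flux.shape = (nz, nx)   # rows are z-slices, columns are x-bins
flux = flux[::-1, :]    # flip rows so imshow puts the model's top at the top
print(flux.shape)       # (40, 120)
```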
3,622
|
<ASSISTANT_TASK:>
Python Code:
# Author: Tommy Clausner <tommy.clausner@gmail.com>
#
# License: BSD (3-clause)
import os
import matplotlib.pyplot as plt
import nibabel as nib
import mne
from mne.datasets import sample
from mne.minimum_norm import apply_inverse, read_inverse_operator
from nilearn.plotting import plot_glass_brain
print(__doc__)
sample_dir_raw = sample.data_path()
sample_dir = os.path.join(sample_dir_raw, 'MEG', 'sample')
subjects_dir = os.path.join(sample_dir_raw, 'subjects')
fname_evoked = os.path.join(sample_dir, 'sample_audvis-ave.fif')
fname_inv = os.path.join(sample_dir, 'sample_audvis-meg-vol-7-meg-inv.fif')
fname_t1_fsaverage = os.path.join(subjects_dir, 'fsaverage', 'mri',
'brain.mgz')
evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
inverse_operator = read_inverse_operator(fname_inv)
# Apply inverse operator
stc = apply_inverse(evoked, inverse_operator, 1.0 / 3.0 ** 2, "dSPM")
# To save time
stc.crop(0.09, 0.09)
morph = mne.compute_source_morph(inverse_operator['src'],
subject_from='sample', subject_to='fsaverage',
subjects_dir=subjects_dir)
stc_fsaverage = morph.apply(stc)
# Create mri-resolution volume of results
img_fsaverage = morph.apply(stc, mri_resolution=2, output='nifti1')
# Load fsaverage anatomical image
t1_fsaverage = nib.load(fname_t1_fsaverage)
# Plot glass brain (change to plot_anat to display an overlaid anatomical T1)
display = plot_glass_brain(t1_fsaverage,
title='subject results to fsaverage',
draw_cross=False,
annotate=True)
# Add functional data as overlay
display.add_overlay(img_fsaverage, alpha=0.75)
stc_fsaverage = mne.compute_source_morph(inverse_operator['src'],
subject_from='sample',
subjects_dir=subjects_dir).apply(stc)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup paths
Step2: Compute example data. For reference see
Step3: Get a SourceMorph object for VolSourceEstimate
Step4: Apply morph to VolSourceEstimate
Step5: Convert morphed VolSourceEstimate into NIfTI
Step6: Plot results
Step7: Reading and writing SourceMorph from and to disk
|
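A small aside on the inverse solve in this entry: `apply_inverse` is called with a regularization parameter of `1.0 / 3.0 ** 2`, i.e. `lambda2 = 1 / SNR**2` with the SNR of 3 that MNE conventionally assumes for evoked (averaged) data. A minimal sketch of that relationship:

```python
# lambda2 as used by mne.minimum_norm.apply_inverse, derived from an assumed SNR.
def lambda2_from_snr(snr: float) -> float:
    return 1.0 / snr ** 2

print(lambda2_from_snr(3.0))  # 0.111..., matching the 1.0 / 3.0 ** 2 above
```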
3,623
|
<ASSISTANT_TASK:>
Python Code:
# built-in package
import os
import sys
import json
import time
import datetime as dt
# third-parth package
import dashboard as dash
import pandas as pd
import matplotlib as plt
import seaborn
import mpld3
# package configre
pd.options.display.max_columns = 100
pd.options.display.max_rows = 500
url = "https://github.com/litaotao/IPython-Dashboard/raw/v-0.1.2-visualiza-table/docs/people_number_by_province_lateset_10_years.csv"
data = pd.read_csv(url)
data.head(3)
data.plot(x="地区", y=["2014年", "2013年"], kind="bar", figsize=(12, 5))
mpld3.display()
dash.client.sender(data, key='chinese_population', force=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 2. Load data
Step3: 3. Traditionaly way of plotting [ I really like ipython indeed, but ... ]
Step4: 4. Not enough even arm matplotlib with seaborn, mpld3
|
3,624
|
<ASSISTANT_TASK:>
Python Code:
# Importing the libraries which we need now.
import pandas
from pandas.plotting import scatter_matrix
import matplotlib.pyplot as plt
%matplotlib inline
# Dataset from - https://archive.ics.uci.edu/ml/datasets/Nursery
df = pandas.read_table('nursery.txt', sep=',', header=None, names=['parents', 'has_nurs', 'form', 'children','housing',
'finance','social','health','class'])
# shape attribute gives the dimensions of a dataframe
print(df.shape)
# Print out the first 5 rows
df.head()
# The describe function prints the summary of the data
print(df.describe())
# The group by function summarizes a particular feature.
print(df.groupby('class').size())
print("\n")
print(df.groupby('parents').size())
print("\n")
print(df.groupby('has_nurs').size())
print("\n")
print(df.groupby('form').size())
print("\n")
print(df.groupby('children').size())
print("\n")
print(df.groupby('housing').size())
print("\n")
print(df.groupby('finance').size())
print("\n")
print(df.groupby('social').size())
print("\n")
print(df.groupby('health').size())
plt.rcParams['figure.figsize'] = (5,5)
df['parents'].value_counts().plot(kind='bar')
plt.title('Parents')
plt.show()
df['has_nurs'].value_counts().plot(kind='bar')
plt.title('has_nurs')
plt.show()
df['form'].value_counts().plot(kind='bar')
plt.title('Form')
plt.show()
df['children'].value_counts().plot(kind='bar')
plt.title('Children')
plt.show()
df['housing'].value_counts().plot(kind='bar')
plt.title('Housing')
plt.show()
df['finance'].value_counts().plot(kind='bar')
plt.title('Finance')
plt.show()
df['social'].value_counts().plot(kind='bar')
plt.title('Social')
plt.show()
df['health'].value_counts().plot(kind='bar')
plt.title('Health')
plt.show()
df['class'].value_counts().plot(kind='bar')
plt.title('Class')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Summarizing the Dataset
Step2: The above displayed result shows that the dataframe has 12960 rows and 9 colums. It means that we have 9 features and 12960 instances.
Step3: 3. Summary of our data
Step4: If we look at the above result closely, we can infer the following. Let us take the feature 'parents' as an example
Step5: In the above summary data, we can see that the class feature has 5 different observations. So, we should look at the data distribution of the expected output for every instance.
|
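As a supplementary note to this entry: all nine nursery features are categorical strings, so a natural next step before fitting most scikit-learn models is a numeric encoding. A minimal sketch with one-hot encoding via pandas (the category values below are real nursery categories, but the tiny frame itself is illustrative):

```python
import pandas as pd

toy = pd.DataFrame({'parents': ['usual', 'pretentious', 'usual'],
                    'finance': ['convenient', 'inconv', 'convenient']})
encoded = pd.get_dummies(toy)  # one indicator column per category value
print(sorted(encoded.columns))
```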
3,625
|
<ASSISTANT_TASK:>
Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
from collections import Counter
import numpy as np
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
positive_counts.most_common()
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
vocab = set(total_counts.keys())
vocab_size = len(vocab)
print(vocab_size)
list(vocab)
import numpy as np
layer_0 = np.zeros((1,vocab_size))
layer_0
from IPython.display import Image
Image(filename='sentiment_network.png')
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
word2index
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
def get_target_for_label(label):
if(label == 'POSITIVE'):
return 1
else:
return 0
labels[0]
get_target_for_label(labels[0])
labels[1]
get_target_for_label(labels[1])
import time
import sys
import numpy as np
# Let's tweak our network from before to model these phenomena
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
# set our random number generator
np.random.seed(1)
self.pre_process_data(reviews, labels)
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
self.review_vocab = list(review_vocab)
label_vocab = set()
for label in labels:
label_vocab.add(label)
self.label_vocab = list(label_vocab)
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.learning_rate = learning_rate
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
assert(len(training_reviews) == len(training_labels))
correct_so_far = 0
start = time.time()
for i in range(len(training_reviews)):
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# TODO: Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# TODO: Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
if(np.abs(layer_2_error) < 0.5):
correct_so_far += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
correct = 0
start = time.time()
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ "% #Correct:" + str(correct) + " #Tested:" + str(i+1) + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
if(layer_2[0] > 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
# evaluate our model before training (just to show how horrible it is)
mlp.test(reviews[-1000:],labels[-1000:])
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
# train the network
mlp.train(reviews[:-1000],labels[:-1000])
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
import time
import sys
import numpy as np
# Let's tweak our network from before to model these phenomena
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
# set our random number generator
np.random.seed(1)
self.pre_process_data(reviews, labels)
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
self.review_vocab = list(review_vocab)
label_vocab = set()
for label in labels:
label_vocab.add(label)
self.label_vocab = list(label_vocab)
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.learning_rate = learning_rate
self.layer_0 = np.zeros((1,input_nodes))
self.layer_1 = np.zeros((1,hidden_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews_raw, training_labels):
assert(len(training_reviews_raw) == len(training_labels))
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
correct_so_far = 0
start = time.time()
for i in range(len(training_reviews)):
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
#self.update_input_layer(review)
# Hidden layer
# old code
#layer_1 = self.layer_0.dot(self.weights_0_1)
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# TODO: Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# TODO: Update the weights
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
#self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
if(np.abs(layer_2_error) < 0.5):
correct_so_far += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
correct = 0
start = time.time()
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
reviews_per_second = i / float(time.time() - start)
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ "% #Correct:" + str(correct) + " #Tested:" + str(i+1) + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
        layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
if(layer_2[0] > 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
# evaluate the trained model on the held-out reviews
mlp.test(reviews[-1000:],labels[-1000:])
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lesson
Step2: Project 1
Step3: Transforming Text into Numbers
Step4: Project 2
Step5: Project 3
Step6: Understanding Neural Noise
Step7: Project 4
Step8: Analyzing Inefficiencies in our Network
|
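As a supplementary check on this entry's word-sentiment statistics: the positive/negative log-ratio computed in the lesson (positive count over negative count, then log-transformed so neutral words sit near zero) behaves as expected on toy counts:

```python
from collections import Counter
import numpy as np

pos = Counter({'great': 9, 'boring': 1})
neg = Counter({'great': 1, 'boring': 9})

ratios = {}
for word in pos.keys() | neg.keys():
    ratio = pos[word] / float(neg[word] + 1)
    # same transform as the lesson: log for ratio > 1, mirrored -log otherwise
    ratios[word] = np.log(ratio) if ratio > 1 else -np.log(1 / (ratio + 0.01))

print(ratios['great'] > 0, ratios['boring'] < 0)
```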
3,626
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import networkx as nx
# Client for the IETF datatracker and BigBang's entity-resolution helpers
from ietfdata.datatracker import DataTracker, DocumentTypeURI
import bigbang.process as process

window = {
'since' : "2018-01-01T00:00:00",
'until' : "2018-01-10T00:00:00"
}
dt = DataTracker()
def extract_data(doc):
data = {}
## TODO: Add document UID?
data['title'] = doc.title
data['time'] = doc.time
data['group-acronym'] = dt.group(doc.group).acronym
data['type'] = doc.type.uri
affiliations = [
doc_author.affiliation
for doc_author
in dt.document_authors(doc)
]
return [
{**data,
'affiliation' : affiliation}
for affiliation
in affiliations]
drafts = dt.documents(doctype = dt.document_type(
DocumentTypeURI("/api/v1/name/doctypename/draft")),
**window
)
data = [item for sublist in [extract_data(x) for x in drafts] for item in sublist]
draft_df = pd.DataFrame(data=data)
draft_df.head()
link_df = draft_df[['group-acronym', 'affiliation','time']]
link_df[:5]
all_affiliations = link_df.groupby('affiliation').size()
ents = process.resolve_entities(all_affiliations,
process.containment_distance,
threshold=.25)
replacements = {}
for r in [{name: ent for name in ents[ent]} for ent in ents]:
replacements.update(r)
link_df = link_df.replace(to_replace=replacements)
edges = [
(row[1]['group-acronym'], row[1]['affiliation'])
for row
in link_df[['group-acronym','affiliation']].iterrows()
]
G = nx.Graph()
G.add_nodes_from([x[0]
for x
in link_df[['group-acronym']].drop_duplicates().values],
category=0)
G.add_nodes_from(all_affiliations.index,
category=1)
G.add_edges_from(edges)
## Clean the graph
G.remove_node('none')
for c in list(nx.connected_components(G)):
if len(c) <= 1:
for n in c:
G.remove_node(n)
colors = ['r' if x[1]['category'] else 'g' for x in list(G.nodes(data=True))]
nx.draw(
G,
node_color = colors,
with_labels = True
)
nx.write_gexf(G,"group-affiliations.gexf")
import bigbang.datasets.organizations as organizations
cat = organizations.load_data()
replacements = {x[1]['name'] : x[1]['category']
for x
in cat.iterrows()
if not pd.isna(x[1]['category'])}
link_cat_df = link_df.replace(to_replace=replacements)
link_cat_df.groupby('affiliation').size().sort_values(ascending=False)[:40]
edges = [
(row[1]['group-acronym'], row[1]['affiliation'])
for row
in link_cat_df[['group-acronym','affiliation']].iterrows()
]
affils = link_cat_df.groupby('affiliation').size().index
G = nx.Graph()
G.add_nodes_from([x[0]
for x
in link_cat_df[['group-acronym']].drop_duplicates().values],
category=0)
G.add_nodes_from(affils,
category=1)
G.add_edges_from(edges)
## Clean the graph
G.remove_node('none')
for c in list(nx.connected_components(G)):
if len(c) <= 1:
for n in c:
G.remove_node(n)
[x[1] for x in list(G.nodes(data=True)) if 'category' not in x[1]]
colors = ['r' if x[1]['category'] else 'g' for x in list(G.nodes(data=True))]
nx.draw(
G,
node_color = colors,
with_labels = True
)
nx.write_gexf(G,"group-org-categories.gexf")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a table for the group and affiliation links in particular.
Step2: Entity resolution on the affiliations
Step3: Plot the network links between working groups and affiliations
Step4: Look at categories of affiliations
|
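As a supplementary sketch for the entity-resolution step in this entry: `process.resolve_entities(..., process.containment_distance, ...)` merges affiliation strings that are near-duplicates. A toy version of the underlying idea (one name, lowercased, containing the other — an assumption about the heuristic, not bigbang's exact algorithm) looks like:

```python
def same_entity(a: str, b: str) -> bool:
    # Hypothetical containment heuristic: "Cisco" and "Cisco Systems" merge.
    a, b = a.lower(), b.lower()
    return a in b or b in a

print(same_entity("Cisco", "Cisco Systems"), same_entity("Cisco", "Juniper"))
```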
3,627
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
from pandas import set_option
set_option("display.max_rows", 10)
pd.options.mode.chained_assignment = None
filename = 'training_data.csv'
training_data = pd.read_csv(filename)
training_data
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data['Well Name'].unique()
training_data.describe()
blind = training_data[training_data['Well Name'] == 'SHANKLE']
training_data = training_data[training_data['Well Name'] != 'SHANKLE']
# 1=sandstone 2=c_siltstone 3=f_siltstone
# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite
# 8=packstone 9=bafflestone
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00',
'#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies_colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] -1]
training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)
def make_facies_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im=ax[5].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[5])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
make_facies_log_plot(
training_data[training_data['Well Name'] == 'SHRIMPLIN'],
facies_colors)
#count the number of unique entries for each facies, sort them by
#facies number (instead of by number of entries)
facies_counts = training_data['Facies'].value_counts().sort_index()
#use facies labels to index each count
facies_counts.index = facies_labels
facies_counts.plot(kind='bar',color=facies_colors,
title='Distribution of Training Data by Facies')
facies_counts
#save plot display settings to change back to when done plotting with seaborn
inline_rc = dict(mpl.rcParams)
import seaborn as sns
sns.set()
sns.pairplot(training_data.drop(['Well Name','Facies','Formation','Depth','NM_M','RELPOS'],axis=1),
hue='FaciesLabels', palette=facies_color_map,
hue_order=list(reversed(facies_labels)))
#switch back to default matplotlib plot style
mpl.rcParams.update(inline_rc)
correct_facies_labels = training_data['Facies'].values
feature_vectors = training_data.drop(['Formation', 'Well Name', 'Depth','Facies','FaciesLabels'], axis=1)
feature_vectors.describe()
from sklearn import preprocessing
scaler = preprocessing.StandardScaler().fit(feature_vectors)
scaled_features = scaler.transform(feature_vectors)
feature_vectors
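The effect of standardization is easy to verify on synthetic numbers. The sketch below uses generic data (not the well logs) and applies the same column-wise transform that StandardScaler performs with default settings, then checks the resulting mean and variance:

```python
import numpy as np

def standardize(X):
    # Column-wise: subtract the mean, divide by the (population)
    # standard deviation -- what StandardScaler does by default.
    return (X - X.mean(axis=0)) / X.std(axis=0)

rng = np.random.RandomState(0)
X = rng.rand(100, 3) * 50 + 10          # arbitrary scale and offset
Xs = standardize(X)
print(np.allclose(Xs.mean(axis=0), 0))  # True: columns are centered
print(np.allclose(Xs.std(axis=0), 1))   # True: columns have unit variance
```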
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
scaled_features, correct_facies_labels, test_size=0.1, random_state=42)
from sklearn.ensemble import GradientBoostingClassifier
gbModel = GradientBoostingClassifier(loss='deviance', n_estimators=100, learning_rate=0.1, max_depth=3, random_state=None, max_leaf_nodes=None, verbose=1)
gbModel.fit(X_train,y_train)
predicted_labels = gbModel.predict(X_test)
from sklearn.metrics import confusion_matrix
from classification_utilities import display_cm, display_adj_cm
conf = confusion_matrix(y_test, predicted_labels)
display_cm(conf, facies_labels, hide_zeros=True)
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
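Before scoring the real test set, both metrics can be sanity-checked on a toy confusion matrix. The numbers below are made up for illustration, and class 1 is taken to be adjacent to classes 0 and 2:

```python
import numpy as np

# Toy 3-class confusion matrix; rows are true classes, columns predictions.
conf_toy = np.array([[8, 2, 0],
                     [1, 7, 2],
                     [0, 3, 7]])
adjacent_toy = [[1], [0, 2], [1]]

exact_acc = np.trace(conf_toy) / conf_toy.sum()
adj_hits = sum(conf_toy[i][j] for i in range(3) for j in adjacent_toy[i])
adj_acc = (np.trace(conf_toy) + adj_hits) / conf_toy.sum()

print(round(exact_acc, 3))  # 0.733 -- 22 of 30 exactly right
print(adj_acc)              # 1.0   -- every error lands in an adjacent class
```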
print('Facies classification accuracy = %f' % accuracy(conf))
print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(conf, adjacent_facies))
blind
y_blind = blind['Facies'].values
well_features = blind.drop(['Facies', 'Formation', 'Well Name', 'Depth'], axis=1)
X_blind = scaler.transform(well_features)
y_pred = gbModel.predict(X_blind)
blind['Prediction'] = y_pred
cv_conf = confusion_matrix(y_blind, y_pred)
print('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf))
print('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf, adjacent_facies))
display_cm(cv_conf, facies_labels,
display_metrics=True, hide_zeros=True)
display_adj_cm(cv_conf, facies_labels, adjacent_facies,
display_metrics=True, hide_zeros=True)
def compare_facies_plot(logs, compadre, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster1 = np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
cluster2 = np.repeat(np.expand_dims(logs[compadre].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=7, figsize=(9, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im1 = ax[5].imshow(cluster1, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
im2 = ax[6].imshow(cluster2, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[6])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im2, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-2):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[6].set_xlabel(compadre)
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
ax[6].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
compare_facies_plot(blind, 'Prediction', facies_colors)
well_data = pd.read_csv('validation_data_nofacies.csv')
well_data['Well Name'] = well_data['Well Name'].astype('category')
well_features = well_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)
X_unknown = scaler.transform(well_features)
#predict facies of unclassified data
y_unknown = gbModel.predict(X_unknown)
well_data['Facies'] = y_unknown
well_data
well_data['Well Name'].unique()
make_facies_log_plot(
well_data[well_data['Well Name'] == 'STUART'],
facies_colors=facies_colors)
make_facies_log_plot(
well_data[well_data['Well Name'] == 'CRAWFORD'],
facies_colors=facies_colors)
well_data.to_csv('well_data_with_facies.csv')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
Step2: This is a quick view of the statistical distribution of the input variables. Looking at the count values, there are 3232 feature vectors in the training set.
Step3: These are the names of the 10 training wells in the Council Grove reservoir. Data has been recruited into pseudo-well 'Recruit F9' to better represent facies 9, the Phylloid-algal bafflestone.
Step4: Let's take a look at the data from individual wells in a more familiar log plot form. We will create plots for the five well log variables, as well as a log for facies labels. The plots are based on those described in Alessandro Amato del Monte's excellent tutorial.
Step5: Placing the log plotting code in a function will make it easy to plot the logs from multiples wells, and can be reused later to view the results when we apply the facies classification model to other wells. The function was written to take a list of colors and facies labels as parameters.
Step6: In addition to individual wells, we can look at how the various facies are represented by the entire training set. Let's plot a histogram of the number of training examples for each facies class.
Step7: This shows the distribution of examples by facies for the examples in the training set. Dolomite (facies 7) has the fewest with 81 examples. Depending on the performance of the classifier we are going to train, we may consider getting more examples of these facies.
Step8: Conditioning the data set
Step9: Scikit includes a preprocessing module that can 'standardize' the data (giving each variable zero mean and unit variance, also called whitening). Many machine learning algorithms assume features will be standard normally distributed data (i.e. Gaussian with zero mean and unit variance).
Step10: Scikit also includes a handy function to randomly split the training data into training and test sets. The test set contains a small subset of feature vectors that are not used to train the network. Because we know the true facies labels for these examples, we can compare the results of the classifier to the actual facies and determine the accuracy of the model. Let's use 10% of the data for the test set.
Step11: Training the classifier using Gradient Boosting
Step12: Now we can train the classifier using the training set we created above.
Step13: Now that the model has been trained on our data, we can use it to predict the facies of the feature vectors in the test set.
Step14: We need some metrics to evaluate how good our classifier is doing. A confusion matrix is a table that can be used to describe the performance of a classification model. Scikit-learn allows us to easily create a confusion matrix by supplying the actual and predicted facies labels.
Step15: The rows of the confusion matrix correspond to the actual facies labels. The columns correspond to the labels assigned by the classifier. For example, consider the first row. For the feature vectors in the test set that actually have label SS, 18 were correctly identified as SS, 5 were classified as CSiS and 1 was classified as FSiS.
Step16: As noted above, the boundaries between the facies classes are not all sharp, and some of them blend into one another. The error within these 'adjacent facies' can also be calculated. We define an array to represent the facies adjacent to each other. For facies label i, adjacent_facies[i] is an array of the adjacent facies labels.
Step17: Applying the classification model to the blind data
Step18: The label vector is just the Facies column
Step19: We can form the feature matrix by dropping some of the columns and making a new dataframe
Step20: Now we can transform this with the scaler we made before
Step21: Now it's a simple matter of making a prediction and storing it back in the dataframe
Step22: Let's see how we did with the confusion matrix
Step23: The results are 0.59 accuracy on facies classification of blind data and 0.92 adjacent facies classification.
Step24: ...but does remarkably well on the adjacent facies predictions.
Step25: Applying the classification model to new data
Step26: The data needs to be scaled using the same constants we used for the training data.
Step27: Finally we predict facies labels for the unknown data, and store the results in a Facies column of the test_data dataframe.
Step28: We can use the well log plot to view the classification results along with the well logs.
Step29: Finally we can write out a csv file with the well data along with the facies classification results.
|
3,628
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
from ipywidgets import interact
import matplotlib.pyplot as plt, mpld3
import seaborn as sns
mpld3.enable_notebook()
mean, cov = [0, 1], [(1, .5), (.5, 1)]
data = np.random.multivariate_normal(mean, cov, 200)
df = pd.DataFrame(data, columns=["x", "y"])
scatter_sns = sns.jointplot(x="x", y="y", data=df)
fig = plt.gcf()
ax = plt.gca()
pts = ax.get_children()[3]
mpld3.plugins.connect(fig)
mpld3.display(fig)
t = np.arange(0.0, 1.0, 0.01)
y1 = np.sin(2*np.pi*t)
y2 = np.sin(2*2*np.pi*t)
data = pd.DataFrame({"t": t,
"y1": y1,
"y2": y2,
"size": np.random.randint(20,200, size=len(t))
})
scatter_1 = sns.lmplot("t", "y1",
scatter_kws={"s": data["size"]},
robust=False, # slow if true
data=data, size=8)
fig1 = plt.gcf()
ax1 = plt.gca()
pts1 = ax1.get_children()[3]
scatter_2 = sns.lmplot("t", "y2",
scatter_kws={"s": data["size"]},
robust=False, # slow if true
data=data, size=8)
fig2 = plt.gcf()
ax2 = plt.gca()
pts2 = ax2.get_children()[3]
#mpld3.plugins.connect(fig2)
#mpld3.display(fig2)
# Scatter points
fig, ax = plt.subplots()
np.random.seed(42)
x, y = np.random.normal(size=(2,200))
color, size = np.random.random((2,200))
ax.scatter(x, y, c=color, s = 500 * size, alpha=0.3)
ax.grid(color='lightgray', alpha=0.7)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: d3.js via mpld3
|
3,629
|
<ASSISTANT_TASK:>
Python Code:
#general imports
import matplotlib.pyplot as plt
import pygslib
from matplotlib.patches import Ellipse
import numpy as np
import pandas as pd
#make the plots inline
%matplotlib inline
#get the data in gslib format into a pandas Dataframe
mydata= pygslib.gslib.read_gslib_file('../data/cluster.dat')
#view data in a 2D projection
plt.scatter(mydata['Xlocation'],mydata['Ylocation'], c=mydata['Primary'])
plt.colorbar()
plt.grid(True)
plt.show()
print (pygslib.gslib.__dist_transf.backtr.__doc__)
transin,transout, error = pygslib.gslib.__dist_transf.ns_ttable(mydata['Primary'],mydata['Declustering Weight'])
print ('was there any error?: ', error!=0)
mydata['NS_Primary'] = pygslib.gslib.__dist_transf.nscore(mydata['Primary'],transin,transout,getrank=False)
mydata['NS_Primary'].hist(bins=30)
mydata['NS_Primary_BT'],error = pygslib.gslib.__dist_transf.backtr(mydata['NS_Primary'],
transin,transout,
ltail=1,utail=1,ltpar=0,utpar=60,
zmin=0,zmax=60,getrank=False)
print ('was there any error?: ', error!=0, error)
mydata[['Primary','NS_Primary_BT']].hist(bins=30)
mydata[['Primary','NS_Primary_BT', 'NS_Primary']].head()
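For intuition, here is a rank-based normal score transform written from scratch with only the standard library. It is a simplified, unweighted sketch of what the ns_ttable/nscore pair above computes (pygslib additionally handles declustering weights, tails and ties):

```python
from statistics import NormalDist

def nscore(values):
    # Map each value's plotting-position quantile (rank + 0.5) / n
    # through the inverse standard normal CDF.
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    scores = [0.0] * n
    for rank, i in enumerate(order):
        scores[i] = NormalDist().inv_cdf((rank + 0.5) / n)
    return scores

vals = [3.1, 0.2, 7.5, 1.1, 2.4]
ns = nscore(vals)
print([round(s, 3) for s in ns])  # [0.524, -1.282, 1.282, -0.524, 0.0]
```

The transform preserves ranks and is symmetric around the median; the back transformation is the same table read in reverse.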
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the data ready for work
Step2: The nscore transformation table function
Step3: Get the transformation table
Step4: Get the normal score transformation
Step5: Doing the back transformation
|
3,630
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
pass
HTML('../style/code_toggle.html')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import section specific modules
|
3,631
|
<ASSISTANT_TASK:>
Python Code:
# based on http://matplotlib.org/examples/api/barchart_demo.html
# Make some fake data
d = {'gender': np.hstack([np.ones(10), np.zeros(10)]), 'scores': np.hstack([np.random.rand(10), np.random.rand(10)+1])}
df = pd.DataFrame(d)
# Change this part and replace with the variables you want to plot and the grouping variable column name.
vals = ['scores'] # This is the column name of the variable to plot on Y axis
group = ['gender'] # This is the grouping variable for the X axis
# Get means for each group
means = df[vals+group].groupby(group).mean().squeeze()
# Get standard error of means for each group
sems = df[vals+group].groupby(group).sem().squeeze()
fig,ax = plt.subplots(figsize=(10,5)) # Change figure size in (width,height)
ind = np.arange(np.size(np.unique(df[group]),0)) # location of bars
width = .5 # Width of bars
# (bar x-location, bar heights, width=bar width, color=bar color, yerr=standard error,ecolor=errorbarcolor)
rects1 = ax.bar(ind - width/2,means,width=.5,color='lightsalmon',yerr=sems,ecolor='blue')
# Look up different colors here: http://stackoverflow.com/questions/22408237/named-colors-in-matplotlib
# configure axes properties to make pretty
ax.set_ylabel('scores')
ax.set_xlabel('gender')
ax.set_title('Scores by gender')
ax.set_xticks(ind)
ax.set_xticklabels(['Male','Female'])
ax.set_xlim([-.5,1.5])
ax.set_ylim([0,2])
# This part calls the function autolabel() defined above, and labels the bars with values
autolabel(rects1)
plt.show()
# Make some fake data
d = {'race': np.random.permutation(np.hstack([np.ones(10), np.zeros(10)])),
'gender': np.hstack([np.ones(10), np.zeros(10)]),
'scores': np.hstack([np.random.rand(10), np.random.rand(10)+1])}
df = pd.DataFrame(d)
# Change this part and replace with the variables you want to plot and the grouping variable column name.
val =['scores']
group1 = ['gender']
group2 = ['race']
# Get means and sems for Gender group
means1 = df[val+group1].groupby(group1).mean().squeeze()
sems1 = df[val+group1].groupby(group1).sem().squeeze()
# Get means and sems for Race group
means2 = df[val+group2].groupby(group2).mean().squeeze()
sems2 = df[val+group2].groupby(group2).sem().squeeze()
fig,ax = plt.subplots(figsize=(10,5)) # Change figure size in (width,height)
ind = np.array([0.,1.]) # location of bars
width = .4 # Width of bars
# plot score by gender
rects1 = ax.bar(ind - width,means1,width,color='lightcoral',yerr=sems1,ecolor='k') # (bar x-location, bar heights, width=bar width, color=bar color, yerr=standard error)
# plot score by race
rects2 = ax.bar(ind,means2,width,color='lightblue',yerr=sems2,ecolor='k')
# configure axes properties to make pretty
ax.set_ylabel('scores')
ax.set_xlabel('gender')
ax.set_title('Scores by gender and race')
ax.set_xticks(ind)
ax.set_xticklabels(['Male','Female'])
ax.set_xlim([ind[0]-width*1.25,ind[-1]+width*1.25])
ax.set_ylim([0,1.8])
ax.legend(['Race0','Race1'])
autolabel(rects1)
autolabel(rects2)
# Make some fake data
d = {'race': np.random.permutation(np.hstack([np.ones(20), np.zeros(20)])),
'gender': np.hstack([np.ones(20), np.zeros(20)]),
'scores': np.round(10*np.hstack([np.random.rand(20), np.random.rand(20)+1]))}
df = pd.DataFrame(d)
ax = df.plot(kind='scatter',x='gender',y='scores')
ax.set_title('Values are stacked')
plt.show()
# Set x,y values for each group
gender0 = 0 # value of first group
y0 = df[['scores']].loc[df['gender']==gender0].values.squeeze() # Grabs y values for Gender =0
y0 = y0+(np.random.rand(len(y0))-.5)*.1 #Change after + sign to control dispersion
x0 = np.ones(len(y0))*gender0 +(np.random.rand(len(y0))-.5)*.1 #Change after + sign to control dispersion
gender1 = 1 # value of second group
y1 = df[['scores']].loc[df['gender']==gender1].values.squeeze()
y1 = y1+(np.random.rand(len(y1))-.5)*.1
x1 = np.ones(len(y1))*gender1 + (np.random.rand(len(y1))-.5)*.1
fig,ax = plt.subplots(figsize=(5,5))
ax.scatter(x0,y0,color='lightcoral')
ax.scatter(x1,y1,color='lightcoral')
ax.set_ylabel('scores')
ax.set_xlabel('gender')
ax.set_title('Values are now dispersed')
ax.set_xticks([0,1])
ax.set_xticklabels(['Male','Female'])
ax.set_xlim([-.5,1.5])
ax.grid() # puts grid on
plt.show()
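The ad-hoc jitter above can be packaged into a small helper (a sketch; the 0.1 spread is the same arbitrary constant used in the plot, and the function name is made up here):

```python
import numpy as np

def jitter(values, spread=0.1, seed=None):
    # Add uniform noise in [-spread/2, spread/2) so that overlapping
    # points separate visually in a scatter plot.
    rng = np.random.RandomState(seed)
    values = np.asarray(values, dtype=float)
    return values + (rng.rand(len(values)) - 0.5) * spread

x = jitter(np.zeros(50), spread=0.1, seed=0)
print(x.min() >= -0.05, x.max() < 0.05)  # True True: noise stays inside the spread
```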
import statsmodels.formula.api as smf
import statsmodels.api as sm
d = {'race': np.random.permutation(np.hstack([np.ones(20), np.zeros(20)])),
'gender': np.hstack([np.ones(20), np.zeros(20)]),
'scores': np.round(10*np.hstack([np.random.rand(20), np.random.rand(20)+1]))}
df = pd.DataFrame(d)
lm = smf.ols(formula = "scores ~ gender",data=df).fit()
print(lm.summary())
# Save the slope for gender to b1 and intercept to b0
b1 = lm.params[1] # This is slope
b0 = lm.params[0] # This is intercept
# Set x,y values for each group
gender0 = 0 # value of first group
y0 = df[['scores']].loc[df['gender']==gender0].values.squeeze()
y0 = y0+(np.random.rand(len(y0))-.5)*.1 #Change after + sign to control dispersion
x0 = np.ones(len(y0))*gender0 + (np.random.rand(len(y0))-.5)*.1 #Change after + sign to control dispersion
gender1 = 1 # value of second group
y1 = df[['scores']].loc[df['gender']==gender1].values.squeeze()
y1 = y1+(np.random.rand(len(y1))-.5)*.1
x1 = np.ones(len(y1))*gender1 + (np.random.rand(len(y1))-.5)*.1
fig,ax = plt.subplots(figsize=(5,5))
ax.scatter(x0,y0,color='lightcoral')
ax.scatter(x1,y1,color='lightcoral')
# Part that adds the line
spacing = 10
minx = df[['gender']].min().squeeze()
maxx = df[['gender']].max().squeeze()
lx = np.linspace(minx,maxx,spacing) # make x coordinates
ly = b0+lx*b1 # Estimate the y values using betas
ax.plot(lx,ly,'-k')
ax.set_ylabel('scores')
ax.set_xlabel('gender')
ax.set_title('Values are now dispersed')
ax.set_xticks([0,1])
ax.set_xticklabels(['Male','Female'])
ax.set_xlim([-.5,1.5])
ax.grid()
plt.show()
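For a single predictor, the slope and intercept reported by statsmodels can be cross-checked without it: np.polyfit with deg=1 solves the same least-squares problem, and the closed-form simple-regression formulas agree. A sketch on deterministic toy data (y = 2x + 1 with one perturbed point):

```python
import numpy as np

x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0
y[3] += 0.5                        # break perfect collinearity slightly

b1, b0 = np.polyfit(x, y, deg=1)   # slope, intercept (highest power first)
print(round(b1, 3), round(b0, 3))  # 1.991 1.091

# Same closed-form estimate as OLS with one predictor:
b1_hat = np.cov(x, y, bias=True)[0, 1] / np.var(x)
b0_hat = y.mean() - b1_hat * x.mean()
print(np.allclose([b1, b0], [b1_hat, b0_hat]))  # True
```

The same unpacking pattern could be compared against lm.params above as a quick consistency check.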
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bar graphs with standard error bars for 2 groups
Step2: Scatterplots of 1 group with jittered location
Step3: Here is the fix.
Step4: Drawing a trend line on a scatterplot
|
3,632
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
data = pd.read_csv('data.csv')
data
data['T']
data[T]
T = 'T'
# alias vs. value
3 = 'T'
300
x = 1 # a place in memory is created here, the number 1 is stored in it, and the variable x points to that place
x = x + 1 # here the content of the variable x (the number 1) is taken and the addition is performed,
          # the result is stored into x
x # the result is, of course, 2
300 == 300
300 == 301
a = 300
b = 300
a is b # a points to a different place in memory than b
a == b
T = 'T'
len(T) # the len function returns the length (in this case the length of a string)
data[T] # T evaluates to the letter "T"
T1 = 'T'
T2 = "T"
T3 = T
T4 = '''T'''
T1 == T2 == T3 == T4
print('''
Hroch and Panda
are good friends
''')
'Hroch' != 'Zikán'
'hippo' == 'animal'
s = 'Panda'
print(s[0])
print(s[1:5])
'Hroch' + ' and ' + 'Panda'
s[0] = 'F'
l1 = ['a', 'a', 'n', 'd']
l1.append('a')
l1[0] = 'P'
l1
l2 = l1
l2.append('!')
l1
l2 # l2 changed as well
l1 is l2
l3 = l1.copy()
l3
l3.append('!')
l1
l3
l3 is l1
s = 'Panda'
s[1:]
'F' + s[1:]
a = 1
b = a
a = 2
print(a)
print(b)
f = open('data.csv')
f
data = f.read()
print(data)
f.close()
print(type(data))
data
data = data.split('\n')
data
data = [line.split(',') for line in data if line != '']
data
import numpy as np
arr = np.array(data[1:])
arr
arr[0]
arr[0][0]
import pandas as pd
data = pd.DataFrame(arr, columns=data[0])
data
data['C'] = data['A'] + data['B']
data
def load_data(file_name):
f = open(file_name)
data = f.read()
f.close()
return data
def make_dataframe(data):
data = data.split('\n')
data = [line.split(',') for line in data if line != '']
arr = np.array(data[1:])
return pd.DataFrame(arr, columns=data[0])
data = make_dataframe(load_data('data.csv'))
data
x = data['A'][0]
print(x)
print(type(x))
print(float(x))
print(type(float(x)))
strings = [['1', '2', '4'], ['5', '9', '9']]
strings
floats = []
for rec in strings:
floats.append([float(num) for num in rec])
floats
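Spelled out, the comprehension above is exactly two nested for loops:

```python
strings = [['1', '2', '4'], ['5', '9', '9']]

floats = []
for rec in strings:        # outer loop: one record at a time
    row = []
    for num in rec:        # inner loop: one field at a time
        row.append(float(num))
    floats.append(row)

print(floats)  # [[1.0, 2.0, 4.0], [5.0, 9.0, 9.0]]
```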
def make_dataframe(data):
data = data.split('\n')
data = [line.split(',') for line in data if line != '']
floats = []
for rec in data[1:]:
floats.append([float(num) for num in rec])
arr = np.array(floats)
return pd.DataFrame(arr, columns=data[0])
data = make_dataframe(load_data('data.csv'))
data
print(type(data['A'][0]))
data['C'] = data['A'] + data['B']
data
def make_dataframe(data):
data = data.split('\n')
data = [line.split(',') for line in data if line != '']
arr = np.array(data[1:])
return pd.DataFrame(arr, columns=data[0], dtype='f')
data = make_dataframe(load_data('data.csv'))
data
print(type(data['A'][0]))
with open('data.csv') as f:
data = []
for line in f:
row = line.strip().split(',')
data.append(row)
data = pd.DataFrame(np.array(data[1:]), columns=data[0], dtype='f')
data
total_sum = 0
with open('data.csv') as f:
    f.readline() # discard the header
for line in f:
total_sum += float(line.split(',')[0])
total_sum
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you recall, we accessed column T like this
Step2: Why couldn't we simply do the following?
Step3: Alias vs. value
Step4: An alias is (roughly speaking) a named place in the computer's memory. So at run time, T refers to a place in memory where we previously stored something (in this case the "letter" T). Not everything can be an alias (e.g. no variable name may start with a digit)
Step5: In contrast, the following statement
Step6: has just created a new "place" in the computer's memory, storing there a value (in binary form) representing the natural number 300. That we did not remember the place (its address) is our problem. There is now no way to find out where that place actually is, so we cannot use it later while the program runs. Jupyter even makes it clear to us that we did not store that place in a variable (see Out[6]
Step7: In other words
Step8: We ask about identity (i.e. whether a given alias points to the same place in memory) with the keyword is
Step9: Both places, however, hold the same value (the number 300), so
Step10: String
Step11: Now we can do what did not work for us in the Introduction
Step13: In Python there is no difference between a single quote ' and a double quote "; the following definitions are equivalent
Step14: Triple quotes (whether single or double) can span multiple lines.
Step15: Strings can again be compared; in this case we use the inequality operator
Step16: On the other hand, the following may be a bit surprising (Python naturally compares the individual characters; it does not understand semantics)
Step17: A good mental model of a string is a list of characters. Indeed, a string can be handled much like a list
Step18: Two or more strings can be joined with the + operator
Step19: The similarity to a list breaks down in one crucial respect, though
Step20: Mutants
Step21: As you already know, the elements of a list (unlike a string) can be changed
Step22: But can you explain the following?
Step23: How is it possible that the list l1 changed when we modified l2? Just to be sure
Step24: The answer is fairly simple - both l1 and l2 point to the same place in memory (they are the same "objects")
Step25: If we wanted to create a true copy, so that the changes would not show up in l1
Step26: Just to be sure, let us verify that these really are not the same objects
Step27: Non-mutants
Step28: The integer data type (whole number) behaves similarly. Compare the following with the earlier games with the lists l1 and l2
Step29: This is a very important behavior of Python. If you store a number in a variable, it can never change in any way other than you explicitly changing it yourself!
Step30: The variable f now represents a so-called file handle. It contains no data (those still sit on disk); it is merely a means through which we can communicate with the operating system. The actual data transfer can be started by calling the read function
Step31: Now we have the data in main memory, stored in the variable data. At this point it is advisable to tell the operating system that we will not work with the file any further and, as it is called, to close the file
Step32: Let us take a closer look at the variable data
Step33: We can see that this time we have something completely different from what pandas returned in the first lesson via the read_csv function. This time it is a string; the mysterious \n marks denote the newline character, Python understands them, and if we use the print function everything displays correctly (see above).
Step34: Next we split each line on the comma into individual values
Step35: Our goal is to create the good old DataFrame. After carefully reading the documentation we see that we first need to create a numpy vector (in this case a 2D vector - properly we should say a tensor or a matrix)
Step36: 2D numpy vectors are very similar to 1D vectors; the first row of this matrix can be accessed like this
Step37: And the first element of the first row like this
Step38: Now we finally have everything ready to create the DataFrame
Step39: We probably did something wrong. What actually happened?
Step40: The problem is that the data are strings, not floats. We have to convert them manually. The float function serves this purpose
Step41: If we had the following nested structure
Step42: and wanted to create a new identical structure, only converting all strings to floats, we could do it e.g. as follows
Step43: Let us just note that this is in fact two nested for loops. In the first we iterate over the elements of the list strings - so there will be two loops
Step44: Note
Step45: The elegance of Python
Step46: The approach shown offers two big advantages.
|
3,633
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-hh', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
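For blocks headed "PROPERTY VALUE(S)" such as the one above, more than one valid choice may apply; a sketch (again against a stub standing in for the environment-provided `DOC`) is to call `set_value` once per selected choice:

```python
# Stub mimicking the ES-DOC `DOC` interface (assumption: real DOC provides
# set_id/set_value; repeated set_value calls record multiple choices).
class _DocStub:
    def __init__(self):
        self.records = {}
        self._current_id = None
    def set_id(self, identifier):
        self._current_id = identifier
    def set_value(self, value):
        self.records.setdefault(self._current_id, []).append(value)

doc = _DocStub()
doc.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# Hypothetical selection of three of the valid choices listed above:
for choice in ("surface pressure", "wind components", "temperature"):
    doc.set_value(choice)
```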
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
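Note that the block above writes `DOC.set_value(value)` without quotes: the property is numeric, so the value is entered as an integer rather than a string. A sketch with a hypothetical interval count (the stub mimics the environment-provided `DOC`):

```python
# Stub mimicking the ES-DOC `DOC` interface (assumption: real DOC provides
# set_id/set_value as used in this notebook).
class _DocStub:
    def __init__(self):
        self.records = {}
        self._current_id = None
    def set_id(self, identifier):
        self._current_id = identifier
    def set_value(self, value):
        self.records.setdefault(self._current_id, []).append(value)

doc = _DocStub()
doc.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
doc.set_value(6)  # hypothetical number of spectral intervals; unquoted integer
```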
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
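Boolean properties like the one above list `True`/`False` as their valid choices, and the unquoted `DOC.set_value(value)` form again signals that a Python boolean, not a string, is expected. A sketch (stub in place of the environment-provided `DOC`):

```python
# Stub mimicking the ES-DOC `DOC` interface (assumption: real DOC provides
# set_id/set_value as used in this notebook).
class _DocStub:
    def __init__(self):
        self.records = {}
        self._current_id = None
    def set_id(self, identifier):
        self._current_id = identifier
    def set_value(self, value):
        self.records.setdefault(self._current_id, []).append(value)

doc = _DocStub()
doc.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
doc.set_value(True)  # hypothetical answer; a bare boolean, not the string "True"
```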
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Fluorinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Fluorinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Representation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
3,634
|
<ASSISTANT_TASK:>
Python Code:
data_in_shape = (3, 5, 2)
L = ZeroPadding2D(padding=(1, 1), data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(250)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding2D.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (3, 5, 2)
L = ZeroPadding2D(padding=(1, 1), data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(251)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding2D.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (2, 6, 4)
L = ZeroPadding2D(padding=(3, 2), data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(252)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding2D.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (2, 6, 4)
L = ZeroPadding2D(padding=(3, 2), data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(253)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding2D.3'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (2, 6, 4)
L = ZeroPadding2D(padding=((1,2),(3,4)), data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(254)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding2D.4'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
data_in_shape = (2, 6, 4)
L = ZeroPadding2D(padding=2, data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(255)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding2D.5'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
print(json.dumps(DATA))
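As a side note on the cases above: ZeroPadding2D's output shape is plain arithmetic, which this standalone NumPy sketch (illustrative only, not part of the generated test data) mirrors for the ((1,2),(3,4)) channels_last case.

```python
import numpy as np

# Padding ((top, bottom), (left, right)) on an H x W x C input gives
# an (H + top + bottom) x (W + left + right) x C output.
x = np.zeros((2, 6, 4))
padded = np.pad(x, ((1, 2), (3, 4), (0, 0)))
print(padded.shape)  # (5, 13, 4)
```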
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: [convolutional.ZeroPadding2D.1] padding (1,1) on 3x5x2 input, data_format='channels_first'
Step2: [convolutional.ZeroPadding2D.2] padding (3,2) on 2x6x4 input, data_format='channels_last'
Step3: [convolutional.ZeroPadding2D.3] padding (3,2) on 2x6x4 input, data_format='channels_first'
Step4: [convolutional.ZeroPadding2D.4] padding ((1,2),(3,4)) on 2x6x4 input, data_format='channels_last'
Step5: [convolutional.ZeroPadding2D.5] padding 2 on 2x6x4 input, data_format='channels_last'
Step6: export for Keras.js tests
|
3,635
|
<ASSISTANT_TASK:>
Python Code:
import dask.dataframe as ddf
columns = {
'sceneID': str,
'sensor': str,
'path': int,
'row': int,
'acquisitionDate': str,
'cloudCover': float,
'cloudCoverFull': float,
'sunElevation': float,
'sunAzimuth': float,
'DATA_TYPE_L1': str,
'GEOMETRIC_RMSE_MODEL': float,
'GEOMETRIC_RMSE_MODEL_X': float,
'GEOMETRIC_RMSE_MODEL_Y': float,
'satelliteNumber': float
}
df = ddf.read_csv('LANDSAT*.csv',
usecols=columns.keys(),
dtype=columns,
parse_dates=['acquisitionDate'],
blocksize=int(20e6))
df = df.assign(year=df.acquisitionDate.dt.year)
df.columns
df.groupby('sensor').sensor.count().compute()
result = df.groupby(['path', 'row', 'year']).cloudCoverFull.mean().compute()
result.loc[12, 31, :]
result = df.groupby(['DATA_TYPE_L1']).cloudCoverFull.mean().compute()
result
df = df.assign(DATA_TYPE_L1=df.DATA_TYPE_L1.apply(lambda x: x if x != 'L1Gt' else 'L1GT'))
result = df.groupby(['DATA_TYPE_L1']).cloudCoverFull.mean().compute()
result
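The relabeling trick above can be seen in isolation on a toy Series (illustrative data only; the label names mirror the real column values):

```python
import pandas as pd

# Mapping the stray 'L1Gt' spelling onto 'L1GT' merges the two labels
# before any groupby, so they aggregate as one category.
s = pd.Series(['L1T', 'L1GT', 'L1Gt'])
fixed = s.apply(lambda x: x if x != 'L1Gt' else 'L1GT')
print(fixed.value_counts().to_dict())  # {'L1GT': 2, 'L1T': 1}
```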
df.groupby(['DATA_TYPE_L1', 'sensor'])[['GEOMETRIC_RMSE_MODEL_X', 'GEOMETRIC_RMSE_MODEL_Y']].mean().compute()
from dask.dot import dot_graph
dot_graph(result.dask)
import dask
import pandas as pd
_df = pd.read_csv('LANDSAT_8.csv', parse_dates=['acquisitionDate'], nrows=100)
s = _df['DATA_TYPE_L1']
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Question
Step2: Question
Step3: Question
Step4: Looks like there is a labeling issue due to a capitalization difference in L1GT versus L1Gt. We can correct this as well
Step5: Question
Step6: Unfortunately it looks like the Landsat 8 observations do not record the estimated geometric accuracy unless a systematic terrain correction using Ground Control Points is successful.
Step7: Getting started with Dask
|
3,636
|
<ASSISTANT_TASK:>
Python Code:
import sys, platform, subprocess
ansibleVersion = subprocess.check_output(['ansible', '--version']).decode('utf-8').split()[1]
print( f" Python: {' '.join(sys.version.split()[0:4])}\n" # Not the version of Python used by Ansible.
f' macOS: {platform.mac_ver()[0]}\n' # Control machine operating system.
f"Ansible: {ansibleVersion}")
! nl -b a /etc/hosts | sed -n '74,77p;84,86p' | cut -c 1-100
%cd '/Users/chrsclrk/Google Drive/solutionArchitect/automation'
! echo "*** inventory.ini contents ***" ; nl -ba controlMachine/inventory.ini | sed -n '1,7p'
! echo "*** ansible.cfg contents ***" ; cat /Users/chrsclrk/.ansible.cfg
!ansible aur --inventory=controlMachine/inventory.ini --module-name=ping
!ansible aur --inventory=controlMachine/inventory.ini --module-name=command --args=uptime
oneSetup = !ansible aur[0] --inventory=controlMachine/inventory.ini --module-name=setup
print(f'{len(oneSetup):>34} Metric of setup results; length of Jupyter reference to saved setup results.\n'
f'{type(oneSetup)} Type of Jupyter reference to save results.')
oneSetup # View facts from the first target machine.
!ansible aur --inventory=controlMachine/inventory.ini --module-name=setup --args='filter=ansible_default_ipv4'
!ansible aur --inventory=controlMachine/inventory.ini --module-name=shell --args="date --rfc-3339=ns"
!ansible aur --inventory=controlMachine/inventory.ini --module-name=shell --args="/usr/sbin/ip -4 addr show eno16777728 | sed -n '2p' | cut -d ' ' -f 6"
! nl playbooks/ch04-98_facts_ip-mac.yaml
! ansible-playbook --verbose --inventory=controlMachine/inventory.ini --become playbooks/ch04-98_facts_ip-mac.yaml
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: View of the network, private, from /etc/hosts
Step2: Review of control machine's Ansible configuration
Step3: Note use of group "aur" to provide value for "become" password.
Step4: Is the control machine able to reach the remote machines?
Step5: Here Ansible runs a program on the remote machines.
Step6: Ansible's "setup" module; all it knows about a target machine
Step7: Retrieve a subset of the facts.
Step8: Report the target machines' date for rough idea of time synchrony.
Step9: Results from piping commands on the remote machines.
Step10: Ansible Playbooks
|
3,637
|
<ASSISTANT_TASK:>
Python Code:
query_url = 'https://data.sfgov.org/resource/wbb6-uh78.json?$order=close_dttm%20DESC&$offset={}&$limit={}'
offset = 0
limit = 1000000
df = pd.read_json(query_url.format(offset, limit))
cols_to_drop = ["automatic_extinguishing_sytem_failure_reason",
"automatic_extinguishing_sytem_type",
"battalion",
"box",
"call_number",
"detector_effectiveness",
"detector_failure_reason",
"ems_personnel",
"ems_units",
"exposure_number",
"first_unit_on_scene",
"ignition_factor_secondary",
"mutual_aid",
"no_flame_spead",
"other_personnel",
"other_units",
"station_area",
"supervisor_district"]
df = df.drop(cols_to_drop, axis=1)
for col in df.columns:
if 'dttm' in col:
df[col] = pd.to_datetime(df[col])
df.alarm_dttm.min()
df.estimated_property_loss.value_counts(dropna=False)
df.shape
# So we have 100,000 rows of data, going all the way back to February 10, 2013
# There are thoughts that there's a correlation between year and cost, especially in the Mission
df[df.estimated_property_loss.isnull()].__len__()
# of the 100,000 rows, 96,335 are null
96335 / float(df.shape[0])
# wow, so where are these companies getting their data about the costs associated with fires?
# it's not from the sfgov website. we'll need to table that and come back later.
df['year'] = df.alarm_dttm.apply(lambda x: x.year)
temp_df = df[df.estimated_property_loss.notnull()]
temp_df.shape
temp_df.groupby('year').sum()['estimated_property_loss']
mask = ((temp_df.zipcode.notnull()) & (temp_df.zipcode.isin([94103, 94110])))
temp_df[mask].groupby('year').sum()['estimated_property_loss']
# So based on the above data yes, the 2015 fires for those two zipcodes doubled,
# and we can look into why, but could it be a symptom of population growth?
# this article http://sf.curbed.com/2016/7/1/12073544/mission-fires-arson-campos
# said that there were 2,788 blazes... but that's wrong, it's 2,788 units impacted.
# One blaze could impact multiple units
#
# This infographic shows number of units impacted by fire by neighborhood,
# but isn't this seriously misleading? https://infogr.am/sf_fires_by_zip-3
#
# Ok, no seriously, I'm setting aside this mission research, because the upside for getting it right is low
# but the downside for getting it wrong is very impactful. Not the sort of press we want
# TODO: check this out and compare it to the data set
# https://celestelecomptedotcom.files.wordpress.com/2015/04/15-04-05_wfs-greater-alarms-01-01-01-04-05-15.pdf
mask = ((temp_df.zipcode.notnull()) &
(temp_df.zipcode.isin([94103, 94110])) &
(temp_df.year == 2014))
temp_df[mask].groupby('year').sum()['estimated_property_loss']
mask = ((df.estimated_property_loss.notnull()))
df[mask].groupby('year').sum()['estimated_property_loss']
# So based on the above data yes, the 2015 fires for those two zipcodes doubled,
# and we can look into why, but could it be a symptom of population growth?
# according to the document mentioned above and the report, it says that the population size shrunk. OK...
# but the data that is being looked at is a HUGE period. There was a census report in 2000, and then another one
# that's a large bucket of 2009-2013. The change reported was a 9% decrease, not exactly a huge boom.
# My next theory is that the reason that the cost increased is simply that they got better about capturing records
# for certain areas
# Let's try a little experiment
# let's look at which fire areas are better at keeping records, shall we?
df['loss_recorded'] = 0
mask = ((df.estimated_property_loss.notnull()))
df.loc[mask, 'loss_recorded'] = 1
mask = ((df.zipcode.notnull()))
zipgroup = df[mask].groupby(['zipcode'])
zipgroup.mean()['loss_recorded'].plot(kind='barh')
# the chart above shows the likelihood that the estimated_property_loss value
# was recorded, broken down by zipcode.
# Mission District is within 94103, 94110 zipcodes
#
zipgroup.mean()['loss_recorded'][94103]
zipgroup.mean()['loss_recorded'][94110]
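The indicator-mean trick used above generalizes: averaging a 0/1 flag within each group yields that group's rate. A toy, self-contained illustration (made-up numbers, not the SF data):

```python
import pandas as pd

# The mean of a 0/1 indicator within each group is the fraction of
# rows in that group where the flag is set -- here, a recording rate.
toy = pd.DataFrame({'zipcode': [94103, 94103, 94110, 94110],
                    'loss_recorded': [1, 0, 1, 1]})
rates = toy.groupby('zipcode')['loss_recorded'].mean()
print(rates[94103], rates[94110])  # 0.5 1.0
```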
mask = ((df.estimated_property_loss.notnull()) &
(df.zipcode == 94110))
sns.distplot(df[mask].estimated_property_loss)
mask = ((df.estimated_property_loss.notnull()) &
(df.zipcode == 94103))
sns.distplot(df[mask].estimated_property_loss)
df['estimated_property_loss'] = pd.to_numeric(df['estimated_property_loss'])
df['estimated_property_loss'] = df['estimated_property_loss'].fillna(0)
df.info()
mask = ((df.estimated_property_loss.notnull()) &
(df.zipcode == 94103))
df[mask].estimated_property_loss.value_counts(dropna=False, normalize=True, bins=50)
df['month'] = df.alarm_dttm.apply(lambda x: x.month)
mask = ((df.month == 6) & (df.year == 2016))
df[mask].describe()
df.describe()
df.alarm_dttm.min()
df.alarm_dttm.max()
# What is odd is that the fire civilian fatalities have a max value of 1, which raises the concern that the dataset
# is inaccurate and needs to be cleaned more carefully before we proceed.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: According to wikipeda, the mission district falls into two zipcodes, 94103, 94110
Step2: Initial Conclusions
Step3: Disclaimers from the Fire Marshal
|
3,638
|
<ASSISTANT_TASK:>
Python Code:
from functools import partial
def convert(s):
converters = (int, float)
for converter in converters:
try:
value = converter(s)
except ValueError:
pass
else:
return value
return s
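A quick sanity check of the fallback chain (convert is restated here, in equivalent form, so the snippet runs on its own):

```python
# int is tried first, then float; anything unparsable falls through
# unchanged -- the same cascade as the convert() defined above.
def convert(s):
    for converter in (int, float):
        try:
            return converter(s)
        except ValueError:
            pass
    return s

print(convert('3'), convert('3.5'), convert('abc'))  # 3 3.5 abc
```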
def process_input(s):
value = convert(s)
print('%r becomes %r' % (s, value))
def main():
prompt = 'gimme: '
while True:
s = input(prompt)
if s == 'quit':
break
process_input(s)
main()
def main():
prompt = 'gimme: '
for s in iter(partial(input, prompt), 'quit'):
process_input(s)
main()
prompt = 'gimme: '
get_values = (convert(s) for s in iter(partial(input, prompt), 'quit'))
for value in get_values:
print(value)
prompt = 'gimme: '
def get_input():
return input(prompt)
def main():
for s in iter(get_input, 'quit'):
process_input(s)
main()
def main():
prompt = 'gimme: '
for s in iter(lambda : input(prompt), 'quit'):
process_input(s)
main()
def my_partial(function, *args, **kwargs):
def helper():
return function(*args, **kwargs)
return helper
def main():
prompt = 'gimme: '
for s in iter(my_partial(input, prompt), 'quit'):
process_input(s)
main()
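One more minimal sketch of the argument-freezing idea behind my_partial (restated here so it stands alone; the lambda is purely illustrative):

```python
# A hand-rolled partial: args/kwargs are closed over and replayed on call.
def my_partial(function, *args, **kwargs):
    def helper():
        return function(*args, **kwargs)
    return helper

add_2_3 = my_partial(lambda a, b: a + b, 2, 3)
print(add_2_3())  # 5
```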
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Below is a typical loop for
Step2: It works as shown below.
Step3: Below is a different way of writing that loop.
Step4: It can be reduced to a generator expression.
Step5: 2017-10-06 More thoughts about partial(input, prompt) and alternatives to it.
|
3,639
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import sympy as sp
# comment out if you don't want plots rendered in notebook
%matplotlib inline
from quantecon import ivp
ivp.IVP?
def lotka_volterra_system(t, y, a, b, c, d):
Return the Lotka-Volterra system.
Parameters
----------
t : float
Time
y : ndarray (float, shape=(2,))
Endogenous variables of the Lotka-Volterra system. Ordering is
`y = [u, v]` where `u` is the number of prey and `v` is the number of
predators.
a : float
Natural growth rate of prey in the absence of predators.
b : float
Natural death rate of prey due to predation.
c : float
Natural death rate of predators, due to absence of prey.
d : float
Factor describing how many caught prey is necessary to create a new
predator.
Returns
-------
f : ndarray (float, shape=(2,))
Right-hand side of the Lotka-Volterra system of ODEs.
f = np.array([ a * y[0] - b * y[0] * y[1] ,
-c * y[1] + d * b * y[0] * y[1] ])
return f
def lotka_volterra_jacobian(t, y, a, b, c, d):
Return the Lotka-Volterra Jacobian matrix.
Parameters
----------
t : float
Time
y : ndarray (float, shape=(2,))
Endogenous variables of the Lotka-Volterra system. Ordering is
`y = [u, v]` where `u` is the number of prey and `v` is the number of
predators.
a : float
Natural growth rate of prey in the absence of predators.
b : float
Natural death rate of prey due to predation.
c : float
Natural death rate of predators, due to absence of prey.
d : float
Factor describing how many caught prey is necessary to create a new
predator.
Returns
-------
jac : ndarray (float, shape=(2,2))
Jacobian of the Lotka-Volterra system of ODEs.
jac = np.array([[a - b * y[1], -b * y[0]],
[b * d * y[1], -c + b * d * y[0]]])
return jac
lotka_volterra_ivp = ivp.IVP(f=lotka_volterra_system,
jac=lotka_volterra_jacobian)
# ordering is (a, b, c, d)
lotka_volterra_params = (1.0, 0.1, 1.5, 0.75)
lotka_volterra_ivp.set_f_params?
lotka_volterra_ivp.set_jac_params?
lotka_volterra_ivp.set_f_params(*lotka_volterra_params)
lotka_volterra_ivp.set_jac_params(*lotka_volterra_params)
lotka_volterra_ivp.f_params
lotka_volterra_ivp.jac_params
# I generally prefer to set attributes directly...
lotka_volterra_ivp.f_params = lotka_volterra_params
# ...result is the same
lotka_volterra_ivp.f_params
# remember...always read the docs!
ivp.IVP.solve?
# define the initial condition...
t0, y0 = 0, np.array([10, 5])
# ...and integrate!
solution = lotka_volterra_ivp.solve(t0, y0, h=1e-1, T=15, integrator='dopri5',
atol=1e-12, rtol=1e-9)
# extract the components of the solution trajectory
t = solution[:, 0]
rabbits = solution[:, 1]
foxes = solution[:, 2]
# create the plot
fig = plt.figure(figsize=(8, 6))
plt.plot(t, rabbits, 'r.', label='Rabbits')
plt.plot(t, foxes , 'b.', label='Foxes')
plt.grid()
plt.legend(loc=0, frameon=False, bbox_to_anchor=(1, 1))
plt.xlabel('Time', fontsize=15)
plt.ylabel('Population', fontsize=15)
plt.title('Evolution of fox and rabbit populations', fontsize=20)
plt.show()
# define the desired interpolation points...
ti = np.linspace(t0, solution[-1, 0], 1000)
# ...and interpolate!
interp_solution = lotka_volterra_ivp.interpolate(solution, ti, k=5, ext=2)
# extract the components of the solution
ti = interp_solution[:, 0]
rabbits = interp_solution[:, 1]
foxes = interp_solution[:, 2]
# make the plot
fig = plt.figure(figsize=(8, 6))
plt.plot(ti, rabbits, 'r-', label='Rabbits')
plt.plot(ti, foxes , 'b-', label='Foxes')
plt.xlabel('Time', fontsize=15)
plt.ylabel('Population', fontsize=15)
plt.title('Evolution of fox and rabbit populations', fontsize=20)
plt.legend(loc='best', frameon=False, bbox_to_anchor=(1,1))
plt.grid()
plt.show()
# life will be easier if you read the docs!
lotka_volterra_ivp.compute_residual?
# reset original parameters
lotka_volterra_ivp.f_params = lotka_volterra_params
lotka_volterra_ivp.jac_params = lotka_volterra_params
# compute the residual
residual = lotka_volterra_ivp.compute_residual(solution, ti, k=1)
# extract the raw residuals
rabbits_residual = residual[:, 1]
foxes_residual = residual[:, 2]
# typically, normalize residual by the level of the variable
norm_rabbits_residual = np.abs(rabbits_residual) / rabbits
norm_foxes_residual = np.abs(foxes_residual) / foxes
# create the plot
fig = plt.figure(figsize=(8, 6))
plt.plot(ti, norm_rabbits_residual, 'r-', label='Rabbits')
plt.plot(ti, norm_foxes_residual, 'b-', label='Foxes')
plt.axhline(np.finfo('float').eps, linestyle='dashed', color='k', label='Machine eps')
plt.xlabel('Time', fontsize=15)
plt.ylim(1e-16, 1)
plt.ylabel('Residuals (normalized)', fontsize=15)
plt.yscale('log')
plt.title('Lotka-Volterra residuals', fontsize=20)
plt.grid()
plt.legend(loc='best', frameon=False, bbox_to_anchor=(1,1))
plt.show()
from ipywidgets import interact
from ipywidgets.widgets import FloatText, FloatSlider, IntSlider, Text
# reset parameters
lotka_volterra_ivp.f_params = (1.0, 0.1, 2.0, 0.75)
lotka_volterra_ivp.jac_params = lotka_volterra_ivp.f_params
@interact(h=FloatText(value=1e0), atol=FloatText(value=1e-3),
rtol=FloatText(value=1e-3), k=IntSlider(min=1, value=3, max=5),
integrator=Text(value='lsoda'))
def plot_lotka_volterra_residuals(h, atol, rtol, k, integrator):
Plots residuals of the Lotka-Volterra system.
# re-compute the solution
tmp_solution = lotka_volterra_ivp.solve(t0, y0, h=h, T=15, integrator=integrator,
atol=atol, rtol=rtol)
# re-compute the interpolated solution and residual
tmp_ti = np.linspace(t0, tmp_solution[-1, 0], 1000)
tmp_interp_solution = lotka_volterra_ivp.interpolate(tmp_solution, tmp_ti, k=k)
tmp_residual = lotka_volterra_ivp.compute_residual(tmp_solution, tmp_ti, k=k)
# extract the components of the solution
tmp_rabbits = tmp_interp_solution[:, 1]
tmp_foxes = tmp_interp_solution[:, 2]
# extract the raw residuals
tmp_rabbits_residual = tmp_residual[:, 1]
tmp_foxes_residual = tmp_residual[:, 2]
# typically, normalize residual by the level of the variable
tmp_norm_rabbits_residual = np.abs(tmp_rabbits_residual) / tmp_rabbits
tmp_norm_foxes_residual = np.abs(tmp_foxes_residual) / tmp_foxes
# create the plot
fig = plt.figure(figsize=(8, 6))
plt.plot(tmp_ti, tmp_norm_rabbits_residual, 'r-', label='Rabbits')
plt.plot(tmp_ti, tmp_norm_foxes_residual, 'b-', label='Foxes')
plt.axhline(np.finfo('float').eps, linestyle='dashed', color='k', label='Machine eps')
plt.xlabel('Time', fontsize=15)
plt.ylim(1e-16, 1)
plt.ylabel('Residuals (normalized)', fontsize=15)
plt.yscale('log')
plt.title('Lotka-Volterra residuals', fontsize=20)
plt.grid()
plt.legend(loc='best', frameon=False, bbox_to_anchor=(1,1))
@interact(a=FloatSlider(min=0.0, max=5.0, step=0.5, value=1.5),
b=FloatSlider(min=0.0, max=1.0, step=0.01, value=0.5),
c=FloatSlider(min=0.0, max=5.0, step=0.5, value=3.5),
d=FloatSlider(min=0.0, max=1.0, step=0.01, value=0.5))
def plot_lotka_volterra(a, b, c, d):
Plots trajectories of the Lotka-Volterra system.
# update the parameters and re-compute the solution
lotka_volterra_ivp.f_params = (a, b, c, d)
lotka_volterra_ivp.jac_params = (a, b, c, d)
tmp_solution = lotka_volterra_ivp.solve(t0, y0, h=1e-1, T=15, integrator='dopri5',
atol=1e-12, rtol=1e-9)
# extract the components of the solution
tmp_t = tmp_solution[:, 0]
tmp_rabbits = tmp_solution[:, 1]
tmp_foxes = tmp_solution[:, 2]
# create the plot!
fig = plt.figure(figsize=(8, 6))
plt.plot(tmp_t, tmp_rabbits, 'r.', label='Rabbits')
plt.plot(tmp_t, tmp_foxes , 'b.', label='Foxes')
plt.xlabel('Time', fontsize=15)
plt.ylabel('Population', fontsize=15)
plt.title('Evolution of fox and rabbit populations', fontsize=20)
plt.legend(loc='best', frameon=False, bbox_to_anchor=(1,1))
plt.grid()
# enables sympy LaTex printing...
sp.init_printing()
# declare endogenous variables
t, x, y, z = sp.var('t, x, y, z')
# declare model parameters
beta, rho, sigma = sp.var('beta, rho, sigma')
# define symbolic model equations
_x_dot = sigma * (y - x)
_y_dot = x * (rho - z) - y
_z_dot = x * y - beta * z
# define symbolic system and compute the jacobian
_lorenz_system = sp.Matrix([[_x_dot], [_y_dot], [_z_dot]])
_lorenz_system
_lorenz_jacobian = _lorenz_system.jacobian([x, y, z])
_lorenz_jacobian
# in order to pass an array as an argument, we need to apply a change of variables
X = sp.DeferredVector('X')
change_of_vars = {'x': X[0], 'y': X[1], 'z': X[2]}
_transformed_lorenz_system = _lorenz_system.subs(change_of_vars)
_transformed_lorenz_jacobian = _transformed_lorenz_system.jacobian([X[0], X[1], X[2]])
# wrap the symbolic expressions as callable numpy funcs
_args = (t, X, beta, rho, sigma)
_f = sp.lambdify(_args, _transformed_lorenz_system,
modules=[{'ImmutableMatrix': np.array}, "numpy"])
_jac = sp.lambdify(_args, _transformed_lorenz_jacobian,
modules=[{'ImmutableMatrix': np.array}, "numpy"])
def lorenz_system(t, X, beta, rho, sigma):
Return the Lorenz system.
Parameters
----------
t : float
Time
X : ndarray (float, shape=(3,))
Endogenous variables of the Lorenz system.
beta : float
Model parameter. Should satisfy :math:`0 < \beta`.
rho : float
Model parameter. Should satisfy :math:`0 < \rho`.
sigma : float
Model parameter. Should satisfy :math:`0 < \sigma`.
Returns
-------
rhs_ode : ndarray (float, shape=(3,))
Right hand side of the Lorenz system of ODEs.
rhs_ode = _f(t, X, beta, rho, sigma).ravel()
return rhs_ode
def lorenz_jacobian(t, X, beta, rho, sigma):
Return the Jacobian of the Lorenz system.
Parameters
----------
t : float
Time
X : ndarray (float, shape=(3,))
Endogenous variables of the Lorenz system.
beta : float
Model parameter. Should satisfy :math:`0 < \beta`.
rho : float
Model parameter. Should satisfy :math:`0 < \rho`.
sigma : float
Model parameter. Should satisfy :math:`0 < \sigma`.
Returns
-------
jac : ndarray (float, shape=(3,3))
Jacobian of the Lorenz system of ODEs.
jac = _jac(t, X, beta, rho, sigma)
return jac
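The SymPy-to-NumPy pipeline above (symbolic matrix, `DeferredVector` change of variables, `lambdify`) is easy to get wrong, so a minimal self-contained check against a directly hand-coded Lorenz right-hand side is useful (the evaluation point and parameter values below are arbitrary):

```python
import numpy as np
import sympy as sp

t, beta, rho, sigma = sp.var('t, beta, rho, sigma')
X = sp.DeferredVector('X')

# Lorenz system written directly in the deferred-vector variables
system = sp.Matrix([[sigma * (X[1] - X[0])],
                    [X[0] * (rho - X[2]) - X[1]],
                    [X[0] * X[1] - beta * X[2]]])
f = sp.lambdify((t, X, beta, rho, sigma), system,
                modules=[{'ImmutableMatrix': np.array}, 'numpy'])

pt = np.array([1.0, 2.0, 3.0])
b_, r_, s_ = 2.66, 28.0, 10.0

lhs = f(0.0, pt, b_, r_, s_).ravel()
rhs = np.array([s_ * (pt[1] - pt[0]),
                pt[0] * (r_ - pt[2]) - pt[1],
                pt[0] * pt[1] - b_ * pt[2]])
assert np.allclose(lhs, rhs)
```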
# parameters with ordering (beta, rho, sigma)
lorenz_params = (2.66, 28.0, 10.0)
# create the instance
lorenz_ivp = ivp.IVP(f=lorenz_system,
jac=lorenz_jacobian)
# specify the params
lorenz_ivp.f_params = lorenz_params
lorenz_ivp.jac_params = lorenz_params
# declare an initial condition
t0, X0 = 0.0, np.array([1.0, 1.0, 1.0])
# solve!
solution = lorenz_ivp.solve(t0, X0, h=1e-2, T=100, integrator='dop853',
atol=1e-12, rtol=1e-9)
@interact(T=IntSlider(min=0, value=0, max=solution.shape[0], step=5))
def plot_lorenz(T):
Plots the first T points in the solution trajectory of the Lorenz equations.
# extract the components of the solution trajectory
t = solution[:T, 0]
x_vals = solution[:T, 1]
y_vals = solution[:T, 2]
z_vals = solution[:T, 3]
# create the plot
fig = plt.figure(figsize=(8, 6))
plt.plot(t, x_vals, 'r.', label='$x_t$', alpha=0.5)
plt.plot(t, y_vals , 'b.', label='$y_t$', alpha=0.5)
plt.plot(t, z_vals , 'g.', label='$z_t$', alpha=0.5)
plt.grid()
plt.xlabel('Time', fontsize=20)
plt.ylabel('$x_t, y_t, z_t$', fontsize=20, rotation='horizontal')
plt.title('Time paths of the $x,y,z$ coordinates', fontsize=25)
plt.legend(frameon=False, bbox_to_anchor=(1.15,1))
plt.show()
# define the desired interpolation points...
ti = np.linspace(t0, solution[-1, 0], int(1e4))
# ...and interpolate!
interp_solution = lorenz_ivp.interpolate(solution, ti, k=5, ext=2)
# extract the components of the solution trajectory
t = solution[:, 0]
x_vals = interp_solution[:, 1]
y_vals = interp_solution[:, 2]
z_vals = interp_solution[:, 3]
# xy phase space projection
fig, axes = plt.subplots(1, 3, figsize=(12, 6), sharex=True, sharey=True, squeeze=False)
axes[0,0].plot(x_vals, y_vals, 'r', alpha=0.5)
axes[0,0].set_xlabel('$x$', fontsize=20, rotation='horizontal')
axes[0,0].set_ylabel('$y$', fontsize=20, rotation='horizontal')
axes[0,0].set_title('$x,y$-plane', fontsize=20)
axes[0,0].grid()
# xz phase space projection
axes[0,1].plot(x_vals, z_vals , 'b', alpha=0.5)
axes[0,1].set_xlabel('$x$', fontsize=20, rotation='horizontal')
axes[0,1].set_ylabel('$z$', fontsize=20, rotation='horizontal')
axes[0,1].set_title('$x,z$-plane', fontsize=20)
axes[0,1].grid()
# yz phase space projection
axes[0,2].plot(y_vals, z_vals , 'g', alpha=0.5)
axes[0,2].set_xlabel('$y$', fontsize=20, rotation='horizontal')
axes[0,2].set_ylabel('$z$', fontsize=20, rotation='horizontal')
axes[0,2].set_title('$y,z$-plane', fontsize=20)
axes[0,2].grid()
plt.suptitle('Phase space projections', x=0.5, y=1.05, fontsize=25)
plt.tight_layout()
plt.show()
# compute the residual
ti = np.linspace(0, solution[-1,0], 10000)
residual = lorenz_ivp.compute_residual(solution, ti, k=5)
# extract the raw residuals
x_residuals = residual[:, 1]
y_residuals = residual[:, 2]
z_residuals = residual[:, 3]
# typically, normalize residual by the level of the variable
# note: x and y go negative along the Lorenz attractor, so normalize by the
# absolute level to keep the residuals positive for the log-scale plot
norm_x_residuals = np.abs(x_residuals) / np.abs(x_vals)
norm_y_residuals = np.abs(y_residuals) / np.abs(y_vals)
norm_z_residuals = np.abs(z_residuals) / np.abs(z_vals)
# create the plot
fig = plt.figure(figsize=(8, 6))
plt.plot(ti, norm_x_residuals, 'r-', label='$x(t)$', alpha=0.5)
plt.plot(ti, norm_y_residuals, 'b-', label='$y(t)$', alpha=0.5)
plt.plot(ti, norm_z_residuals, 'g-', label='$z(t)$', alpha=0.5)
plt.axhline(np.finfo('float').eps, linestyle='dashed', color='k', label='Machine eps')
plt.xlabel('Time', fontsize=15)
plt.ylim(1e-16, 1)
plt.ylabel('Residuals', fontsize=15)
plt.yscale('log')
plt.title('Lorenz equations residuals', fontsize=20)
plt.grid()
plt.legend(loc='best', frameon=False)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Introduction
Step2: 2.1 Lotka-Volterra "Predator-Prey" model
Step5: From the docstring we see that we are required to define a function describing the right-hand side of the system of differential equations that we wish to solve. While optional, it is always a good idea to also define a function describing the Jacobian matrix of partial derivatives.
Step6: We can go ahead and create our instance of the ivp.IVP class representing the Lotka-Volterra model using the above defined functions as follows...
Step7: 2.1.2 Defining model parameters
Step8: In order to add these parameter values to our model we need to pass them as arguments to the set_f_params and set_jac_params methods of the newly created instance of the ivp.IVP class. Check the doctrings of the methods for information on the appropriate syntax...
Step9: From the docstring we see that both the set_f_params and the set_jac_params methods take an arbitrary number of positional arguments.
Step10: ...and we can inspect that values of these attributes and see that the return results are the same.
Step11: Alternatively, we could just directly set the f_params and jac_params attributes without needing to explicitly call either the set_f_params and set_jac_params methods!
Step12: 2.1.3 Using ivp.IVP.solve to integrate the ODE
Step13: Example usage
Step14: Plotting the solution
Step15: Note that we have plotted the time paths of rabbit and fox populations as sequences of points rather than smooth curves. This is done to visually emphasize the fact that finite-difference methods used to approximate the solution return a discrete approximation to the true continuous solution.
Step16: Plotting the interpolated solution
Step17: Note that we have plotted the time paths of rabbit and fox populations as smooth curves. This is done to visually emphasize the fact that the B-spline interpolation methods used to approximate the solution return a continuous approximation to the true continuous solution.
Step18: Example usage
Step19: Plotting the residual
Step20: Understanding determinants of accuracy
Step22: Now we can make use of the @interact decorator and the various IPython widgets to create an interactive visualization of the residual plot for the Lotka-Volterra "Predator-Prey" model.
Step24: Sensitivity to parameters
Step25: 2.2 The Lorenz equations
Step26: Let's take a check out our newly defined _lorenz_system and make sure it looks as expected...
Step27: Step 2
Step28: Step 3
Step31: Step 4
Step32: ... next we define a tuple of model parameters...
Step33: ... finally, we are ready to create the instance of the ivp.IVP class representing the Lorenz equations.
Step34: 2.2.2 Solving the Lorenz equations
Step36: Plotting the solution time paths
Step37: Step 2
Step38: Plotting 2D projections of the solution in phase space
Step39: Step 3
Step40: Again, we want to confirm that the residuals are "small" everywhere. Patterns, if they exists, are not cause for concern.
|
3,640
|
<ASSISTANT_TASK:>
Python Code:
%run make_topo.py
%run make_data.py
!slosh
#!mpirun -n 4 slosh
%run make_plots.py
%pylab inline
import glob
from matplotlib import image
from clawpack.visclaw.JSAnimation import IPython_display
from matplotlib import animation
def init():
im.set_data(image.imread(filenames[0]))
return im,
def animate(i):
image_i=image.imread(filenames[i])
im.set_data(image_i)
return im,
figno = 0
fname = '_plots/*fig' + str(figno) + '.png'
filenames = sorted(glob.glob(fname))
fig = plt.figure()
im = plt.imshow(image.imread(filenames[0]))
animation.FuncAnimation(fig, animate, init_func=init,
frames=len(filenames), interval=500, blit=True)
figno = 1
fname = '_plots/*fig' + str(figno) + '.png'
filenames = sorted(glob.glob(fname))
fig = plt.figure()
im = plt.imshow(image.imread(filenames[0]))
animation.FuncAnimation(fig, animate, init_func=init,
frames=len(filenames), interval=500, blit=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run code in serial mode (will work, even if code is compiled with MPI)
Step2: Or, run code in parallel mode (command may need to be customized, depending on your MPI installation.)
Step3: Create PNG files for web-browser viewing, or animation.
Step4: View PNG files in browser, using URL above, or create an animation of all PNG files, using code below.
Step5: Plot figure 0, showing the entire solution. To see detailed plotting parameters, see file make_plots.py.
Step6: Then plot figures 10 and 11 to compare the solution in the two refined regions.
|
3,641
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import time
from tqdm import tqdm
import gc
print('loading prior')
priors = pd.read_csv('./data/order_products__prior.csv')
print('loading train')
train_all = pd.read_csv('./data/order_products__train.csv')
## Have split the trian data into two sets, train and eval
train = pd.read_csv('./data/train_new.csv')
train_eval = pd.read_csv('./data/train_eval.csv')
print('loading orders')
orders = pd.read_csv('./data/orders.csv')
print('priors {}: {}'.format(priors.shape, ', '.join(priors.columns)))
print('orders {}: {}'.format(orders.shape, ', '.join(orders.columns)))
print('train {}: {}'.format(train.shape, ', '.join(train.columns)))
###
# some memory measures for kaggle kernel
print('optimize memory')
orders.order_dow = orders.order_dow.astype(np.int8)
orders.order_hour_of_day = orders.order_hour_of_day.astype(np.int8)
orders.order_number = orders.order_number.astype(np.int16)
orders.order_id = orders.order_id.astype(np.int32)
orders.user_id = orders.user_id.astype(np.int32)
orders.days_since_prior_order = orders.days_since_prior_order.astype(np.float32)
train.reordered = train.reordered.astype(np.int8)
train.add_to_cart_order = train.add_to_cart_order.astype(np.int16)
train_eval.reordered = train_eval.reordered.astype(np.int8)
train_eval.add_to_cart_order = train_eval.add_to_cart_order.astype(np.int16)
train_all.reordered = train_all.reordered.astype(np.int8)
train_all.add_to_cart_order = train_all.add_to_cart_order.astype(np.int16)
priors.order_id = priors.order_id.astype(np.int32)
priors.add_to_cart_order = priors.add_to_cart_order.astype(np.int16)
priors.reordered = priors.reordered.astype(np.int8)
priors.product_id = priors.product_id.astype(np.int32)
print('loading products')
products = pd.read_csv('./data/products.csv')
products.drop(['product_name'], axis=1, inplace=True)
products.aisle_id = products.aisle_id.astype(np.int16)  # 134 aisles, too many for int8
products.department_id = products.department_id.astype(np.int8)
products.product_id = products.product_id.astype(np.int32)
prior_train = pd.concat([priors, train_all], axis = 0)
print prior_train.shape
if ( priors.shape[0] + train_all.shape[0] ) == prior_train.shape[0]:
print "concat successful"
## set the index, for the join
orders.set_index('order_id', inplace=True, drop=False)
## Join with prior_train
print "Joining orders with prior_train"
prior_train = prior_train.join(orders, on = 'order_id', how = 'left', lsuffix = '_')
prior_train.drop('order_id_', inplace = True, axis = 1)
## Repeat the same only for prior
print "Joining orders with priors"
priors = priors.join(orders, on = 'order_id', how = 'left', lsuffix = '_')
priors.drop('order_id_', inplace = True, axis = 1)
## Joining orders with train
print "Joining orders with train"
train = train.join(orders, on = 'order_id', how = 'left', lsuffix = '_')
train.drop('order_id_', inplace = True, axis = 1)
## Joining orders with train_eval
print "Joining orders with train_eval"
train_eval = train_eval.join(orders, on = 'order_id', how = 'left', lsuffix = '_')
train_eval.drop('order_id_', inplace = True, axis = 1)
## reset the order table index
orders.reset_index(inplace=True, drop=True)
orders.head()
## Using prior and train data
users_prior_all = pd.DataFrame()
users_prior_all['prod_list'] = prior_train.groupby('user_id')['product_id'].apply(set)
users_prior_all.reset_index(inplace = True, drop = False)
## Using only prior data
users_prior = pd.DataFrame()
users_prior['prod_list'] = priors.groupby('user_id')['product_id'].apply(set)
users_prior.reset_index(inplace = True, drop = False)
users_prior.head()
import random
train.reset_index(drop = True, inplace = True)
order_ids = list(train['order_id'].unique())
sample_size = int(0.25 * len(order_ids))
sample_orders = random.sample(order_ids, sample_size)
train_eval = train[train['order_id'].isin(sample_orders)]
train_new = train[~train['order_id'].isin(sample_orders)]
train_eval.to_csv('./data/train_eval.csv', index = False)
train_new.to_csv('./data/train_new.csv', index = False)
del train
train = train_new
del train_new
train.head()
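The train/eval split above works by sampling a subset of order ids and partitioning rows with `isin`; the key property is that the two parts are disjoint and together cover every row. A toy sketch of the same pattern (made-up ids, seeded for reproducibility):

```python
import pandas as pd
import random

df = pd.DataFrame({'order_id': [1, 1, 2, 3, 3, 3], 'x': range(6)})

ids = list(df['order_id'].unique())
random.seed(0)
held_out = random.sample(ids, 1)

eval_part = df[df['order_id'].isin(held_out)]
train_part = df[~df['order_id'].isin(held_out)]

# the two parts partition the original rows, with no order id in both
assert len(eval_part) + len(train_part) == len(df)
assert set(eval_part['order_id']).isdisjoint(set(train_part['order_id']))
```

Splitting at the order level (rather than sampling rows) keeps all products of a given order on the same side of the split.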
## using prior stats
user_order_prior = pd.DataFrame()
user_order_prior['avg_days_since_prior_order'] = priors.groupby('user_id')['days_since_prior_order'].agg('mean')
## using prior train stats
user_order_all = pd.DataFrame()
user_order_all['avg_days_since_prior_order'] = prior_train.groupby('user_id')['days_since_prior_order'].agg('mean')
user_order_prior.reset_index(drop = False, inplace = True)
user_order_all.reset_index(drop = False, inplace = True)
print user_order_prior.head()
print
print user_order_all.head()
def last_order_features(priors):
max_order = pd.DataFrame()
max_order['max_order'] = priors.groupby(['user_id'])['order_number'].agg('max')
max_order = max_order.rename(columns = {"max_order":"order_number"})
max_order.reset_index(inplace = True, drop = False)
max_order.set_index(['user_id','order_number'], drop = False, inplace = True)
priors.set_index(['user_id','order_number'], drop = False, inplace = True)
max_order = max_order.join(priors[['user_id', 'order_id', 'order_number','order_dow','order_hour_of_day','days_since_prior_order']], rsuffix="_")
max_order.reset_index(drop =True, inplace = True)
priors.reset_index(drop = True, inplace = True)
max_order.drop('user_id_', inplace = True, axis =1)
max_order.drop('order_number_', inplace = True, axis = 1)
max_order.drop('order_number', inplace = True, axis = 1)
max_order = max_order.rename(columns = {"order_id":"prev_order_id","order_dow":"prev_order_dow"
,"order_hour_of_day":"prev_order_hour_of_day"
,"days_since_prior_order":"prev_days_since_prior_order"})
max_order.drop_duplicates(inplace = True)
return max_order
## Stats from prior
max_order = last_order_features(priors)
print max_order.head()
print
## Stats from prior and train
max_order_all = last_order_features(prior_train)
print max_order_all.head()
def product_features(priors):
prods = pd.DataFrame()
prods['p_orders'] = priors.groupby(priors.product_id).size().astype(np.float32)
prods['p_reorders'] = priors['reordered'].groupby(priors.product_id).sum().astype(np.float32)
prods['p_reorder_rate'] = (prods.p_reorders / prods.p_orders).astype(np.float32)
## set the index for products
products.set_index('product_id', inplace = True, drop = False)
products_prior = products.join(prods, rsuffix="_")
## Reset the index
products_prior.reset_index(inplace = True, drop = True)
del prods
return products_prior
### Stats from prior and train
products_all = product_features(prior_train)
print products_all.head()
print
## Stats from prior
products_prior = product_features(priors)
print products_prior.head()
def user_features(priors):
prod_count_prior = pd.DataFrame()
prod_count_prior['basket_size'] = priors.groupby(['user_id','order_id'])['product_id'].size().astype(np.int32)
prod_count_prior['reorder_size'] = priors.groupby(['user_id','order_id'])['reordered'].agg('sum').astype(np.int32)
# reset / set index
prod_count_prior = prod_count_prior.reset_index()
prod_count_prior.set_index('user_id', inplace = True, drop =False)
prod_count_prior['tot_orders'] = prod_count_prior.groupby(['user_id']).size().astype(np.int32)
prod_count_prior['tot_prods'] = prod_count_prior.groupby(['user_id'])['basket_size'].agg(['sum'])
prod_count_prior['avg_basket'] = prod_count_prior.groupby(['user_id'])['basket_size'].agg(['mean'])
prod_count_prior['avg_reorder'] = prod_count_prior.groupby(['user_id'])['reorder_size'].agg(['mean'])
prod_count_prior['std_basket'] = prod_count_prior.groupby(['user_id'])['basket_size'].agg(['std'])
prod_count_prior.drop(['order_id','basket_size','reorder_size'], inplace=True, axis=1)
prod_count_prior.drop_duplicates(inplace = True)
prod_count_prior = prod_count_prior.reset_index(level = 'user_id', drop = True)
return prod_count_prior
## Stats from only prior
prod_count_prior = user_features(priors)
## Stats from all
prod_count_all = user_features(prior_train)
print prod_count_prior.head()
print
print prod_count_all.head()
def user_prod_features(priors):
user_prod_prior = pd.DataFrame()
## Number of user's order where product id is present
user_prod_prior['up_orders'] = priors.groupby(['user_id','product_id'])['order_id'].size()
user_prod_prior.reset_index(inplace = True, drop = False)
user_prod_prior.set_index(['user_id', 'product_id'], inplace = True, drop = False)
## Number of times the product was re-ordered by the user
user_prod_prior['up_reorder'] = priors.groupby(['user_id', 'product_id'])['reordered'].agg(['sum'])
user_prod_prior['up_reorder_rate'] = user_prod_prior.up_reorder / user_prod_prior.up_orders
user_prod_prior.reset_index(drop = True, inplace= True)
return user_prod_prior
## Stats from prior
user_prod_prior = user_prod_features(priors)
user_prod_all = user_prod_features(prior_train)
print user_prod_prior.head()
print
print user_prod_all.head()
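The user-product feature logic above (count of orders per pair, reorder count, and their ratio) can be illustrated on a tiny hand-made frame, which makes the expected values easy to verify by eye:

```python
import pandas as pd

toy = pd.DataFrame({'user_id':    [1, 1, 1, 2],
                    'product_id': [10, 10, 20, 10],
                    'order_id':   [100, 101, 100, 200],
                    'reordered':  [0, 1, 0, 0]})

up = pd.DataFrame()
up['up_orders'] = toy.groupby(['user_id', 'product_id'])['order_id'].size()
up['up_reorder'] = toy.groupby(['user_id', 'product_id'])['reordered'].sum()
up['up_reorder_rate'] = up.up_reorder / up.up_orders

# user 1 bought product 10 twice and reordered it once -> rate 0.5
print(up)
```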
feature_dict_prior = {}
feature_dict_prior[1] = {"name":"user_order_feature","obj":user_order_prior,"index":['user_id']}
feature_dict_prior[2] = {"name":"last_order_feature","obj":max_order,"index":['user_id']}
feature_dict_prior[3] = {"name":"product_feature","obj":products_prior,"index":['product_id']}
feature_dict_prior[4] = {"name":"user_feature","obj":prod_count_prior,"index":['user_id']}
feature_dict_prior[5] = {"name":"user_pro_feature","obj":user_prod_prior,"index":['user_id','product_id']}
feature_dict_all = {}
feature_dict_all[1] = {"name":"user_order_feature","obj":user_order_all,"index":['user_id']}
feature_dict_all[2] = {"name":"last_order_feature","obj":max_order_all,"index":['user_id']}
feature_dict_all[3] = {"name":"product_feature","obj":products_all,"index":['product_id']}
feature_dict_all[4] = {"name":"user_feature","obj":prod_count_all,"index":['user_id']}
feature_dict_all[5] = {"name":"user_prod_feature","obj":user_prod_all,"index":['user_id','product_id']}
def join_features(feature_dict, features):
for k,v in feature_dict.items():
print "Joining {} feature".format(v['name'])
obj = v['obj']
index = v['index']
features.set_index(index, drop = False, inplace = True)
obj.set_index(index, drop = False, inplace = True)
features = features.join(obj ,on =index, rsuffix='_')
index_ = [idx + '_' for idx in index]
features.drop(index_, inplace = True, axis = 1)
features.reset_index(drop = True, inplace = True)
obj.reset_index(drop = True, inplace = True)
features.drop( ['prev_order_id'], inplace = True, axis = 1 )
return features
## Join train
train.head()
## This block needs to be run only once
## Later the output of this block is stored in ./data/features.csv file
## The next block reads the file, it enough to run the next block subsequently
## We could have got this from order_id
## However since we have separted our train into two sets
## we need to interate the train_new aka train data we have created.
train.reset_index(inplace = True, drop = True)
train_list = pd.DataFrame()
train_list['ignore'] = train.groupby(['user_id','order_id'], group_keys = True).size()
train_list.reset_index(inplace = True, drop = False)
train.set_index(['order_id', 'product_id'], inplace = True, drop = False)
print "features"
count = 0
order_list = []
product_list = []
user_list = []
labels = []
for user_record in train_list.itertuples():
count+=1
if count%10000 == 0:
print "Finished {} users".format(count)
user_id = user_record.user_id
order_id = user_record.order_id
prev_products = list(users_prior[users_prior.user_id == user_id]['prod_list'].values.tolist()[0])
product_list+= prev_products
order_list+=[order_id] * len(prev_products)
user_list+=[user_id] * len(prev_products)
labels+=[(order_id, product) in train.index for product in prev_products]
feature_df = pd.DataFrame({'user_id':user_list,'product_id':product_list,'order_id':order_list,'in_next_order':labels}, dtype=np.int32)
print feature_df.head()
feature_df.to_csv('./features/features.csv', index = False)
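The label construction above relies on tuple membership tests against a pandas MultiIndex (`(order_id, product) in train.index`). A minimal illustration with made-up ids shows why this works:

```python
import pandas as pd

toy = pd.DataFrame({'order_id':   [1, 1, 2],
                    'product_id': [10, 20, 10]})
toy.set_index(['order_id', 'product_id'], inplace=True, drop=False)

# tuple membership against the MultiIndex
print((1, 10) in toy.index)   # this pair occurs in an order
print((2, 20) in toy.index)   # this pair does not

# the same list comprehension pattern used for the labels
labels = [(2, p) in toy.index for p in [10, 20]]
print(labels)
```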
features = pd.read_csv('./features/features.csv')
features.head()
print "Order features"
features.set_index('order_id',inplace = True, drop = False)
orders.set_index('order_id', inplace = True, drop = False)
features = pd.merge(features, orders, left_on = 'order_id', right_on = 'order_id')
#features.drop('order_id', inplace = True, axis =1)
features.drop('eval_set', inplace = True, axis =1)
features.drop('user_id_y', inplace = True, axis =1)
features.drop('order_number', inplace = True, axis =1)
features = features.rename(columns={"user_id_x":"user_id"})
features.reset_index(drop = True, inplace= True)
train.reset_index(drop = True, inplace = True)
features.head()
features = join_features(feature_dict_prior, features)
features.head()
features.to_csv('./features/features_train.csv', index = False)
def get_y(test_list, users_prior):
feature = []
count = 0
for user_record in (test_list.itertuples()):
count+=1
if count%10000 == 0:
print "Finished {} users".format(count)
user_id = user_record.user_id
order_id = user_record.order_id
prev_products = list(users_prior[users_prior.user_id == user_id]['prod_list'].values.tolist()[0])
for p_p in prev_products:
feature.append((order_id, user_id ,p_p))
test_df = pd.DataFrame(data = feature, columns =['order_id','user_id','product_id'])
return test_df
train_eval.head()
train_eval.reset_index(inplace = True, drop = True)
train_eval_list = pd.DataFrame()
train_eval_list['ignore'] = train_eval.groupby(['user_id','order_id'], group_keys = True).size()
train_eval_list.reset_index(inplace = True, drop = False)
test_df = get_y(train_eval_list, users_prior)
test_df.head()
test_df = join_features(feature_dict_prior, test_df)
test_df.head()
test_df.to_csv('./features/features_eval.csv',index = False)
test_list = orders[orders.eval_set == 'test']
feature = []
count = 0
for user_record in (test_list.itertuples()):
count+=1
if count%10000 == 0:
print "Finished {} users".format(count)
user_id = user_record.user_id
order_id = user_record.order_id
prev_products = list(users_prior[users_prior.user_id == user_id]['prod_list'].values.tolist()[0])
for p_p in prev_products:
feature.append((order_id, user_id ,p_p))
test_df = pd.DataFrame(data = feature, columns =['order_id','user_id','product_id'])
print test_df.head()
## Order features
print "Order features"
test_df.set_index('order_id',inplace = True, drop = False)
orders.set_index('order_id', inplace = True, drop = False)
test_df = pd.merge(test_df, orders, left_on = 'order_id', right_on = 'order_id')
test_df.drop('eval_set', inplace = True, axis =1)
test_df.drop('user_id_y', inplace = True, axis =1)
test_df.drop('order_number', inplace = True, axis =1)
test_df = test_df.rename(columns={"user_id_x":"user_id"})
test_df.reset_index(drop = True, inplace= True)
train.reset_index(drop = True, inplace = True)
test_df.head()
test_df = join_features(feature_dict_all, test_df)
test_df.head()
test_df.to_csv('./features/features_test.csv',index = False)
## Numpy savetxt is extremely slow
#VW_train = np.column_stack((y_train, X_train))
#print "Save"
#np.savetxt('./data/vw_train.csv', VW_train)
#print "done"
VW_train = pd.concat([Y, X],axis =1 )
print VW_train.shape
VW_train.head()
print "VW_train"
VW_train.to_csv('./data/vw_train.csv', index = False)
#python csv2vw.py ./data/vw_train.csv ./data/vw_train.txt 0 1
#python csv2vw.py ./data/vw_test.csv ./data/vw_test.txt 0 1
### Vowpal wabbit baseline model
#time vw ./data/vw_train.txt --predictions vwpred_train.out
vw_pred_train = pd.read_csv('vwpred_train.out', names=['y_p'])
vw_pred_train['y_pp']= vw_pred_train['y_p'].apply(lambda x: 1.0 if x > 0.35 else 0.0)
y_p3 = vw_pred_train['y_pp'].values
print "Vowpal wabbit accuracy {0:.2f}, precision {0:.2f}, recall {0:.2f}, f1-score {0:.2f}".format(
accuracy_score(y_train, y_p3),
precision_score(y_train, y_p3),
recall_score(y_train, y_p3),
f1_score(y_train, y_p3))
print confusion_matrix(y_train, y_p3)
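Note that each metric needs its own positional index in the format string (`{0}`, `{1}`, ...), otherwise every placeholder repeats the first value. A self-contained sketch with toy labels (the counts are small enough to check by hand: 2 TP, 1 FP, 1 FN, 2 TN):

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1]

# distinct positional indices so each metric lands in its own slot
print("accuracy {0:.2f}, precision {1:.2f}, recall {2:.2f}, f1 {3:.2f}".format(
    accuracy_score(y_true, y_pred),
    precision_score(y_true, y_pred),
    recall_score(y_true, y_pred),
    f1_score(y_true, y_pred)))
```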
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Products
Step2: 2. Prepare data
Step3: Join with orders table to get the user id
Step4: Make a data frame of user and previous product list
Step6: Create train eval
Step7: 3. Feature Generation
Step8: User id last order feature
Step9: Product Features
Step10: User features
Step11: User Product Features
Step12: Product days since last ordered
Step13: 4. Join the features
Step14: Prepare Y Variable from train
Step15: Order Features
Step16: Prepare y variable
Step17: Prepare Y Variable from eval
Step18: Prepare Y Variable from test
Step19: Order Feature
Step21: Vowpal Wabbit - Experimental
|
3,642
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
# If you have a GPU, execute the following lines to restrict TensorFlow to a single GPU:
gpus = tf.config.experimental.list_physical_devices('GPU')
if len(gpus) >= 1:
print("Using GPU {}".format(gpus[0]))
tf.config.experimental.set_visible_devices(gpus[0], 'GPU')
else:
print("Using CPU")
import os
import random
import itertools
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Input, Concatenate, Lambda, Dot
from tensorflow.keras.layers import Conv2D, MaxPool2D, GlobalAveragePooling2D, Flatten, Dropout
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
PATH = "lfw/lfw-deepfunneled/"
USE_SUBSET = True
dirs = sorted(os.listdir(PATH))
if USE_SUBSET:
dirs = dirs[:500]
name_to_classid = {d: i for i, d in enumerate(dirs)}
classid_to_name = {v: k for k, v in name_to_classid.items()}
num_classes = len(name_to_classid)
print("number of classes: ", num_classes)
# read all directories
img_paths = {c: [PATH + subfolder + "/" + img
for img in sorted(os.listdir(PATH + subfolder))]
for subfolder, c in name_to_classid.items()}
# retrieve all images
all_images_path = []
for img_list in img_paths.values():
all_images_path += img_list
# map to integers
path_to_id = {v: k for k, v in enumerate(all_images_path)}
id_to_path = {v: k for k, v in path_to_id.items()}
all_images_path[:10]
len(all_images_path)
# build mappings between images and class
classid_to_ids = {k: [path_to_id[path] for path in v] for k, v in img_paths.items()}
id_to_classid = {v: c for c, imgs in classid_to_ids.items() for v in imgs}
dict(list(id_to_classid.items())[0:13])
plt.hist([len(v) for k, v in classid_to_ids.items()], bins=range(1, 10))
plt.show()
np.median([len(ids) for ids in classid_to_ids.values()])
[(classid_to_name[x], len(classid_to_ids[x]))
for x in np.argsort([len(v) for k, v in classid_to_ids.items()])[::-1][:10]]
# build pairs of positive image ids for a given classid
def build_pos_pairs_for_id(classid, max_num=50):
imgs = classid_to_ids[classid]
if len(imgs) == 1:
return []
pos_pairs = list(itertools.combinations(imgs, 2))
random.shuffle(pos_pairs)
return pos_pairs[:max_num]
# build pairs of negative image ids for a given classid
def build_neg_pairs_for_id(classid, classes, max_num=20):
imgs = classid_to_ids[classid]
neg_classes_ids = random.sample(classes, max_num+1)
if classid in neg_classes_ids:
neg_classes_ids.remove(classid)
neg_pairs = []
for id2 in range(max_num):
img1 = imgs[random.randint(0, len(imgs) - 1)]
imgs2 = classid_to_ids[neg_classes_ids[id2]]
img2 = imgs2[random.randint(0, len(imgs2) - 1)]
neg_pairs += [(img1, img2)]
return neg_pairs
build_pos_pairs_for_id(5, max_num=10)
build_neg_pairs_for_id(5, list(range(num_classes)), max_num=6)
from skimage.io import imread
from skimage.transform import resize
def resize100(img):
return resize(
img, (100, 100), preserve_range=True, mode='reflect', anti_aliasing=True
)[20:80, 20:80, :]
def open_all_images(id_to_path):
all_imgs = []
for path in id_to_path.values():
all_imgs += [np.expand_dims(resize100(imread(path)), 0)]
return np.vstack(all_imgs)
all_imgs = open_all_images(id_to_path)
all_imgs.shape
print(f"{all_imgs.nbytes / 1e6} MB")
def build_train_test_data(split=0.8):
listX1 = []
listX2 = []
listY = []
split = int(num_classes * split)
# train
for class_id in range(split):
pos = build_pos_pairs_for_id(class_id)
neg = build_neg_pairs_for_id(class_id, list(range(split)))
for pair in pos:
listX1 += [pair[0]]
listX2 += [pair[1]]
listY += [1]
for pair in neg:
if sum(listY) > len(listY) / 2:
listX1 += [pair[0]]
listX2 += [pair[1]]
listY += [0]
perm = np.random.permutation(len(listX1))
X1_ids_train = np.array(listX1)[perm]
X2_ids_train = np.array(listX2)[perm]
Y_ids_train = np.array(listY)[perm]
listX1 = []
listX2 = []
listY = []
#test
for id in range(split, num_classes):
pos = build_pos_pairs_for_id(id)
neg = build_neg_pairs_for_id(id, list(range(split, num_classes)))
for pair in pos:
listX1 += [pair[0]]
listX2 += [pair[1]]
listY += [1]
for pair in neg:
if sum(listY) > len(listY) / 2:
listX1 += [pair[0]]
listX2 += [pair[1]]
listY += [0]
X1_ids_test = np.array(listX1)
X2_ids_test = np.array(listX2)
Y_ids_test = np.array(listY)
return (X1_ids_train, X2_ids_train, Y_ids_train,
X1_ids_test, X2_ids_test, Y_ids_test)
X1_ids_train, X2_ids_train, train_Y, X1_ids_test, X2_ids_test, test_Y = build_train_test_data()
X1_ids_train.shape, X2_ids_train.shape, train_Y.shape
np.mean(train_Y)
X1_ids_test.shape, X2_ids_test.shape, test_Y.shape
np.mean(test_Y)
from imgaug import augmenters as iaa
seq = iaa.Sequential([
iaa.Fliplr(0.5), # horizontally flip 50% of the images
# You can add more transformation like random rotations, random change of luminance, etc.
])
class Generator(tf.keras.utils.Sequence):
def __init__(self, X1, X2, Y, batch_size, all_imgs):
self.batch_size = batch_size
self.X1 = X1
self.X2 = X2
self.Y = Y
self.imgs = all_imgs
self.num_samples = Y.shape[0]
def __len__(self):
return self.num_samples // self.batch_size
def __getitem__(self, batch_index):
        """This method returns the `batch_index`-th batch of the dataset.
        Keras chooses by itself the order in which batches are created, and several may be created
        at the same time using multiprocessing. Therefore, avoid any side-effect in this method!
        """
low_index = batch_index * self.batch_size
high_index = (batch_index + 1) * self.batch_size
imgs1 = seq.augment_images(self.imgs[self.X1[low_index:high_index]])
imgs2 = seq.augment_images(self.imgs[self.X2[low_index:high_index]])
targets = self.Y[low_index:high_index]
return ([imgs1, imgs2], targets)
gen = Generator(X1_ids_train, X2_ids_train, train_Y, 32, all_imgs)
print("Number of batches: {}".format(len(gen)))
[x1, x2], y = gen[0]
x1.shape, x2.shape, y.shape
plt.figure(figsize=(16, 6))
for i in range(6):
plt.subplot(2, 6, i + 1)
plt.imshow(x1[i] / 255)
plt.axis('off')
for i in range(6):
plt.subplot(2, 6, i + 7)
plt.imshow(x2[i] / 255)
if y[i]==1.0:
plt.title("similar")
else:
plt.title("different")
plt.axis('off')
plt.show()
test_X1 = all_imgs[X1_ids_test]
test_X2 = all_imgs[X2_ids_test]
test_X1.shape, test_X2.shape, test_Y.shape
@tf.function
def contrastive_loss(y_true, y_pred, margin=0.25):
'''Contrastive loss from Hadsell-et-al.'06
http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf
'''
y_true = tf.cast(y_true, "float32")
return tf.reduce_mean( y_true * tf.square(1 - y_pred) +
(1 - y_true) * tf.square(tf.maximum(y_pred - margin, 0)))
@tf.function
def accuracy_sim(y_true, y_pred, threshold=0.5):
'''Compute classification accuracy with a fixed threshold on similarity.
'''
y_thresholded = tf.cast(y_pred > threshold, "float32")
return tf.reduce_mean(tf.cast(tf.equal(y_true, y_thresholded), "float32"))
class SharedConv(tf.keras.Model):
def __init__(self):
super().__init__(name="sharedconv")
# TODO
def call(self, inputs):
# TODO
shared_conv = SharedConv()
# %load solutions/shared_conv.py
all_imgs.shape
shared_conv.predict(all_imgs[:10]).shape
shared_conv.summary()
class Siamese(tf.keras.Model):
def __init__(self, shared_conv):
super().__init__(name="siamese")
# TODO
def call(self, inputs):
pass # TODO
model = Siamese(shared_conv)
model.compile(loss=contrastive_loss, optimizer='rmsprop', metrics=[accuracy_sim])
# %load solutions/siamese.py
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.models import load_model
best_model_fname = "siamese_checkpoint.h5"
best_model_cb = ModelCheckpoint(best_model_fname, monitor='val_accuracy_sim',
save_best_only=True, verbose=1)
model.fit_generator(generator=gen,
epochs=15,
validation_data=([test_X1, test_X2], test_Y),
callbacks=[best_model_cb], verbose=2)
model.load_weights("siamese_checkpoint.h5")
# You may load a pre-trained model if you have the exact solution architecture.
# This model is a start, but far from perfect !
# model.load_weights("siamese_pretrained.h5")
# TODO
emb = None
def most_sim(x, emb, topn=3):
return None
# %load solutions/most_similar.py
def display(img):
img = img.astype('uint8')
plt.imshow(img)
plt.axis('off')
plt.show()
interesting_classes = list(filter(lambda x: len(x[1]) > 4, classid_to_ids.items()))
class_id = random.choice(interesting_classes)[0]
query_id = random.choice(classid_to_ids[class_id])
print("query:", classid_to_name[class_id], query_id)
# display(all_imgs[query_id])
print("nearest matches")
for result_id, sim in most_sim(emb[query_id], emb):
class_name = classid_to_name.get(id_to_classid.get(result_id))
print(class_name, result_id, sim)
display(all_imgs[result_id])
import cv2
def camera_grab(camera_id=0, fallback_filename=None):
camera = cv2.VideoCapture(camera_id)
try:
# take 10 consecutive snapshots to let the camera automatically tune
# itself and hope that the contrast and lightning of the last snapshot
# is good enough.
for i in range(10):
snapshot_ok, image = camera.read()
if snapshot_ok:
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
else:
print("WARNING: could not access camera")
if fallback_filename:
image = imread(fallback_filename)
finally:
camera.release()
return image
image = camera_grab(camera_id=0,
fallback_filename='test_images/olivier/img_olivier_0.jpeg')
x = resize100(image)
out = shared_conv(np.reshape(x, (1, 60, 60, 3)))
print("query image:")
display(x)
for id, sim in most_sim(out[0], emb, topn=10):
class_name = classid_to_name.get(id_to_classid.get(id))
if class_name is None:
print(id)
print(class_name, id, sim)
display(all_imgs[id])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Processing the dataset
Step2: In each directory, there are one or more images corresponding to the identity. We map each image path to an integer id, then build a few dictionaries
Step3: The following histogram shows the number of images per class
Step4: Siamese nets
Step5: Let's build positive and a negative pairs for class 5
Step6: Now that we have a way to compute the pairs, let's load all the possible JPEG-compressed image files into a single numpy array in RAM. There are more than 1000 images, so 100MB of RAM will be used, which will not cause any issue.
Step7: The following function builds a large number of positive/negative pairs (train and test)
Step9: Data augmentation and generator
Step10: Exercise
Step11: Simple convolutional model
Step12: Exercise
Step13: Exercise
Step14: We can now fit the model and checkpoint it to keep the best version. We can expect to get a model with around 0.75 as "accuracy_sim" on the validation set
Step15: Exercise
Step16: Most similar faces
Step17: Note that this model is still underfitting, even when running queries against the training set. Even if the results are not correct, the mistakes often seem to "make sense" though.
|
3,643
|
<ASSISTANT_TASK:>
Python Code:
import pandas
import igraph
edge_list = pandas.read_csv('hsmetnet.txt', sep='\t', header=None).values.tolist()
graph = igraph.Graph.TupleList(edge_list, directed=True)
igraph.summary(graph)
flatten = lambda l: sum(l, [])
vertex_set = set(flatten(edge_list))
isreaction = lambda vertex: vertex[0:8]=='REACTION'
metabolite_list = [vertex for vertex in vertex_set if not isreaction(vertex)]
len(metabolite_list)
reaction_list = [vertex for vertex in vertex_set if isreaction(vertex)]
len(reaction_list)
len(edge_list)
nameof = lambda vertex: vertex.attributes()['name']
metabolite_degree = {}
for vertex in graph.vs:
if not isreaction(nameof(vertex)):
metabolite_degree[nameof(vertex)] = vertex.degree()
metabolite_degree_sorted = sorted(metabolite_degree.items(), key=lambda item: item[1], reverse=True)
metabolite_degree_sorted[0:6]
import matplotlib.pyplot as plt
import numpy as np
metabolites = [item[0] for item in metabolite_degree.items()]
degrees = [item[1] for item in metabolite_degree.items()]
plt.hist(degrees, bins=np.logspace(np.log10(1),np.log10(1202),50), log=True)
plt.title('Degree Distribution')
plt.xlabel('k')
plt.ylabel('N(k)')
plt.gca().set_xscale('log')
plt.show()
igraph.statistics.power_law_fit(graph.degree())
vertices = []
for vertex in graph.vs:
if not isreaction(nameof(vertex)):
vertices += [vertex]
shortest_paths = [[len(x)-1 for x in graph.get_shortest_paths(v=vertex,to=vertices)] for vertex in vertices]
giant = graph.components(mode=igraph.WEAK).giant()
vertices = []
for vertex in giant.vs:
if not isreaction(nameof(vertex)):
vertices += [vertex]
giant_shortest_paths = [[len(x)-1 for x in graph.get_shortest_paths(v=vertex,to=vertices)] for vertex in vertices]
n = 0
m = 0
for vertex_shortest_paths in giant_shortest_paths:
for path_length in vertex_shortest_paths:
if path_length > 0:
n += path_length
m += 1
n/m
diameter = 0
for vertex_shortest_paths in giant_shortest_paths:
for path_length in vertex_shortest_paths:
if path_length > 0 and path_length > diameter:
diameter = path_length
diameter
vertices = []
for vertex in graph.vs:
if not isreaction(nameof(vertex)):
vertices += [vertex]
betweennesses = graph.betweenness(vertices=vertices)
degrees = [vertex.degree() for vertex in vertices]
plt.clf()
plt.scatter(degrees, betweennesses)
plt.title('Metabolite Betweenness vs. Degree')
plt.xlabel('Degree')
plt.ylabel('Betweenness')
plt.gca().set_xscale('log')
#plt.gca().set_yscale('log')
plt.show()
highest_betweenness = 0
highest_vertex = None
for degree, betweenness, vertex in zip(degrees, betweennesses, vertices):
if degree == 2 and betweenness > highest_betweenness:
highest_betweenness = betweenness
highest_vertex = vertex
highest_vertex
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: How many distinct metabolites are there in the graph?
Step2: How many reactions?
Step3: How many edges are there?
Step4: Calculate the degree k=kin+kout of each of the metabolite vertices (you don’t need to calculate the degree for the vertices that correspond to reactions).
Step5: What are the top six metabolites in terms of vertex degree in the graph?
Step6: Plot the distribution of the degrees of these vertices, on log-log scale.
Step7: Is the degree distribution well-described by a power law?
Step8: How does the \alphaα that you get compare to the estimate of the power-law exponent reported by Jeong et al. in their 2000 article in Nature, “The large-scale organization of metabolic networks” (vol. 407, pp. 651–654)?
Step9: Why are some of these path lengths infinite?
Step10: Calculate the maximum of the shortest-path-length between all pairs of metabolites (throwing away infinite values, as before) in the giant (weakly connected) component of the network (i.e., you are calculating the diameter of the giant component, according to Newman's definition of diameter).
Step11: Why are the average geodesic distances that we get roughly twice those reported in Fig. 3b of Jeong et al., 2000?
Step12: Plot the scatter plot of betweenness centrality vs. vertex degree for all metabolites, on log-log scale.
Step13: Among metabolites with degree k=2, what metabolite has highest betweenness centrality in the network?
|
3,644
|
<ASSISTANT_TASK:>
Python Code:
km.random_init(data2, 3)
init_centroids = km.random_init(data2, 3)
init_centroids
x = np.array([1, 1])
fig, ax = plt.subplots(figsize=(6,4))
ax.scatter(x=init_centroids[:, 0], y=init_centroids[:, 1])
for i, node in enumerate(init_centroids):
ax.annotate('{}: ({},{})'.format(i, node[0], node[1]), node)
ax.scatter(x[0], x[1], marker='x', s=200)
km._find_your_cluster(x, init_centroids)
C = km.assign_cluster(data2, init_centroids)
data_with_c = km.combine_data_C(data2, C)
data_with_c.head()
sns.lmplot('X1', 'X2', hue='C', data=data_with_c, fit_reg=False)
km.new_centroids(data2, C)
final_C, final_centroid, _= km._k_means_iter(data2, 3)
data_with_c = km.combine_data_C(data2, final_C)
sns.lmplot('X1', 'X2', hue='C', data=data_with_c, fit_reg=False)
km.cost(data2, final_centroid, final_C)
best_C, best_centroids, least_cost = km.k_means(data2, 3)
least_cost
data_with_c = km.combine_data_C(data2, best_C)
sns.lmplot('X1', 'X2', hue='C', data=data_with_c, fit_reg=False)
from sklearn.cluster import KMeans
sk_kmeans = KMeans(n_clusters=3)
sk_kmeans.fit(data2)
sk_C = sk_kmeans.predict(data2)
data_with_c = km.combine_data_C(data2, sk_C)
sns.lmplot('X1', 'X2', hue='C', data=data_with_c, fit_reg=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. cluster assignment
Step2: 1 epoch cluster assigning
Step3: See the first round clustering result
Step4: 2. calculate new centroid
Step5: Putting it all together, take 1
Step6: calculate the cost
Step7: k-means with multiple tries of random init, picking the best one with the least cost
Step8: try sklearn kmeans
|
3,645
|
<ASSISTANT_TASK:>
Python Code:
# imports
from desispec.qa import qa_exposure as qa_exp
from desispec.io import qa as desio_qa
reload(qa_exp)
qaframe = qa_exp.QA_Frame(flavor='arc')
print(qaframe)
reload(qa_exp)
qaframe = qa_exp.QA_Frame(flavor='science')
qaframe.init_skysub()
print(qaframe.data)
from desispec.io import read_qa_frame
reload(desio_qa)
desio_qa.write_qa_frame('tmp.yaml', qaframe)
qaframe = desio_qa.read_qa_frame('tmp.yaml')
qaframe
# Read and print
qaframe = desio_qa.read_qa_frame('/Users/xavier/DESI/TST/dogwood/exposures/20150211/00000002/qa-b0-00000002.yaml')
qaframe.data
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Instantiate
Step2: Init SkySub
Step3: I/O
Step4: Test FiberFlat QA
|
3,646
|
<ASSISTANT_TASK:>
Python Code:
# As usual, a bit of setup
from __future__ import absolute_import, division, print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, \
eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
# set default size of plots
plt.rcParams['figure.figsize'] = (10.0, 8.0)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
    """returns relative error"""
return np.max(np.abs(x - y) \
/ (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print('{}:'.format(k), v.shape)
x = np.random.randn(500, 500) + 10
for p in [0.3, 0.6, 0.75]:
out, _ = dropout_forward(x, {'mode': 'train', 'p': p})
out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})
print('Running tests with p =', p)
print('Mean of input:', x.mean())
print('Mean of train-time output:', out.mean())
print('Mean of test-time output:', out_test.mean())
print('Fraction of train-time output set to zero:',
(out == 0).mean())
print('Fraction of test-time output set to zero:',
(out_test == 0).mean())
print()
x = np.random.randn(10, 10) + 10
dout = np.random.randn(*x.shape)
dropout_param = {'mode': 'train', 'p': 0.8, 'seed': 123}
out, cache = dropout_forward(x, dropout_param)
dx = dropout_backward(dout, cache)
dx_num = eval_numerical_gradient_array(
lambda xx: dropout_forward(xx, dropout_param)[0], x, dout)
print('dx relative error:', rel_error(dx, dx_num))
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for dropout in [0, 0.25, 0.5]:
print('Running check with dropout =', dropout)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
weight_scale=5e-2, dtype=np.float64,
dropout=dropout, seed=123)
loss, grads = model.loss(X, y)
print('Initial loss:', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name],
verbose=False, h=1e-5)
print('{} relative error: {:.2e}'.format(
name, rel_error(grad_num, grads[name])))
print()
# Train two identical nets, one with dropout and one without
num_train = 500
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
dropout_choices = [0, 0.75]
for dropout in dropout_choices:
model = FullyConnectedNet([500], dropout=dropout)
print('Using dropout =', dropout)
solver = Solver(model, small_data,
num_epochs=25, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 5e-4,
},
verbose=True, print_every=100)
solver.train()
solvers[dropout] = solver
print()
# Plot train and validation accuracies of the two models
train_accs = []
val_accs = []
for dropout in dropout_choices:
solver = solvers[dropout]
train_accs.append(solver.train_acc_history[-1])
val_accs.append(solver.val_acc_history[-1])
plt.subplot(3, 1, 1)
for dropout in dropout_choices:
plt.plot(solvers[dropout].train_acc_history, 'o',
label='{:.2f} dropout'.format(dropout))
plt.title('Train accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
for dropout in dropout_choices:
plt.plot(solvers[dropout].val_acc_history, 'o',
label='{:.2f} dropout'.format(dropout))
plt.title('Val accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.gcf().set_size_inches(15, 15)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dropout
Step2: Dropout forward pass
Step3: Dropout backward pass
Step4: Fully-connected nets with Dropout
Step5: Regularization experiment
|
3,647
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import pandas as pd
import sys
sys.path.insert(0, '../')
from paleopy import proxy
from paleopy import analogs
from paleopy import ensemble
djsons = '../jsons/'
pjsons = '../jsons/proxies'
p = proxy(sitename='Rarotonga', \
lon = -159.82, \
lat = -21.23, \
djsons = djsons, \
pjsons = pjsons, \
pfname = 'Rarotonga.json', \
dataset = 'ersst', \
variable ='sst', \
measurement ='delta O18', \
dating_convention = 'absolute', \
calendar = 'gregorian',\
chronology = 'historic', \
season = 'DJF', \
value = 0.6, \
calc_anoms = True, \
detrend = True)
p.find_analogs()
p.proxy_repr(pprint=True)
from paleopy import WR
w = WR(p, classification='New Zealand')
f = w.plot_bar(sig=1)
f.savefig('/Users/nicolasf/Desktop/proxy.png')
w = WR(p, classification='SW Pacific')
f = w.plot_bar(sig=1)
w.df_probs
ens = ensemble(djsons=djsons, pjsons=pjsons, season='DJF')
classification = 'SW Pacific'
w = WR(ens, classification=classification)
w.parent.description
w.climatology
w.probs_anomalies(kind='many')
w.df_anoms
f = w.plot_heatmap()
f = w.plot_bar()
w.df_anoms.to_csv('/Users/nicolasf/Desktop/table.csv')
w.df_probs_MC
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: import the development version of paleopy
Step2: instantiates a proxy with the required parameters
Step3: find the analogs
Step4: print the updated proxy features
Step5: Now instantiates a WR class, passing the proxy object
Step6: WR frequency changes associated with the analog years for Kidson types
Step7: plots the bar plot, significance level = 99%
Step8: WR frequency changes associated with the analog years for the SW Pacific regimes = SW Pacific
Step9: plots the bar plot, significance level = 99%
Step10: this is consistent with the known relationships between the SW Pacific regimes and the large-scale SST anomalies
Step11: now passing an ensemble object
|
3,648
|
<ASSISTANT_TASK:>
Python Code:
#Begin spark session
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
#Create pysplice context. Allows you to create a Spark dataframe using our Native Spark DataSource
from splicemachine.spark import PySpliceContext
splice = PySpliceContext(spark)
#Initialize our Feature Store API
from splicemachine.features import FeatureStore
from splicemachine.features.constants import FeatureType
fs = FeatureStore(splice)
#Initialize MLFlow
from splicemachine.mlflow_support import *
mlflow.register_feature_store(fs)
mlflow.register_splice_context(splice)
from splicemachine.notebook import get_mlflow_ui
get_mlflow_ui()
# Load in most relevant features generated in the previous notebook
%store -r features_list
%store -r features_str
%%sql
-- Create schema and drop table, if necessary
CREATE SCHEMA IF NOT EXISTS deployed_models;
DROP TABLE IF EXISTS deployed_models.twimlcon_regression;
#Define the training data frame. Necessary so the model table knows what columns to make
training_df = fs.get_training_set_from_view('twimlcon_customer_lifetime_value').dropna()
#create the table itself
jobid = mlflow.deploy_db( db_schema_name='deployed_models',db_table_name='twimlcon_regression', run_id= '<replace with your run id>',
primary_key={'CUSTOMERID':'INTEGER','EVAL_TIME':'TIMESTAMP'},
df=training_df.select(features_list)
)
#watch the table creation logs
mlflow.watch_job(jobid)
feature_vector = fs.get_feature_vector(features=features_list, join_key_values={'customerid':'14235'})
feature_vector
feature_vector_sql = fs.get_feature_vector(features=features_list, return_sql=True, join_key_values={'customerid':'14235'})
print(feature_vector_sql)
%%time
%%sql
{Insert SQL from previous cell here}
%%sql
truncate table deployed_models.twimlcon_regression;
%%time
splice.execute(f"""
INSERT INTO deployed_models.twimlcon_regression ( CUSTOMERID, {features_str} )
SELECT fset2.CUSTOMERID, {features_str}
FROM twimlcon_fs.customer_lifetime fset2,
     twimlcon_fs.customer_rfm_by_category fset1
WHERE fset2.CUSTOMERID = 15838 AND fset1.CUSTOMERID = 15838
union all
SELECT fset2.CUSTOMERID, {features_str}
FROM twimlcon_fs.customer_lifetime fset2,
     twimlcon_fs.customer_rfm_by_category fset1
WHERE fset2.CUSTOMERID = 15839 AND fset1.CUSTOMERID = 15839
""")
%%sql
SELECT * FROM deployed_models.twimlcon_regression;
%%sql
truncate table deployed_models.twimlcon_regression;
%%time
splice.execute(f"""
INSERT INTO deployed_models.twimlcon_regression ( EVAL_TIME, CUSTOMERID, {features_str} ) --splice-properties useSpark=False
SELECT fset2.ASOF_TS, fset2.CUSTOMERID, {features_str}
FROM twimlcon_fs.customer_lifetime_history fset2,
     twimlcon_fs.customer_rfm_by_category_history fset1
WHERE fset2.CUSTOMERID = fset1.CUSTOMERID
  AND fset2.ASOF_TS >= fset1.ASOF_TS AND fset2.ASOF_TS < fset1.UNTIL_TS
  AND fset2.ASOF_TS BETWEEN '2020-10-01' and '2020-12-31'
""")
%%sql
SELECT * FROM deployed_models.twimlcon_regression ORDER BY EVAL_TIME {limit 10};
spark.stop()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Deploy Machine Learning model as a table in the database
Step2: Create the deployment table
Step3: Insert data into this empty table using the Feature Store
Step4: Return features using SQL
Step6: Generate and retrieve predictions using an INSERT/SELECT sequence on a single row
Step8: Generate and retrieve predictions using an INSERT/SELECT sequence on multiple rows
|
3,649
|
<ASSISTANT_TASK:>
Python Code:
import DSGRN
br=DSGRN.Network("br.txt")
print(br)
import graphviz
print(br.graphviz())
graph=graphviz.Source(br.graphviz())
graph
br_pg=DSGRN.ParameterGraph(br)
print(br_pg.size())
br_64 = br_pg.parameter(64)
print(br_64.inequalities())
br_dg_64=DSGRN.DomainGraph(br_64)
graphviz.Source(br_dg_64.graphviz())
br_morsedecomposition=DSGRN.MorseDecomposition(br_dg_64.digraph())
graphviz.Source(br_morsedecomposition.graphviz())
br_morsegraph=DSGRN.MorseGraph()
br_morsegraph.assign(br_dg_64,br_morsedecomposition)
print(br_morsegraph)
graphviz.Source(br_morsegraph.graphviz())
br2=DSGRN.Network("br2.txt")
print(br2)
graph = graphviz.Source(br2.graphviz())
graph
parametergraph = DSGRN.ParameterGraph(br2)
br2_141=parametergraph.parameter(141)
domaingraph_br2_141 = DSGRN.DomainGraph(br2_141)
graphviz.Source(domaingraph_br2_141.graphviz())
morsedecomposition = DSGRN.MorseDecomposition(domaingraph_br2_141.digraph())
graphviz.Source(morsedecomposition.graphviz())
morsegraph = DSGRN.MorseGraph()
morsegraph.assign(domaingraph_br2_141, morsedecomposition)
graphviz.Source(morsegraph.graphviz())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Network
Step2: We would like to see the network in the way that it is specified in the file br.txt
Step3: Graphviz
Step4: DSGRN created an object br with a method "graphviz" that produces a file in graphviz format. Here it is.
Step5: To actually see the picture we export and then look at it
Step6: ParameterGraph
Step7: The size of the parameter graph is an important number that grows quickly with the size of the network. This is the number of distinct domains in the parameter space such that the Morse graph is constant in each domain.
Step8: Computing Database of all Morse graphs for entire parameter graph
Step9: This is a precise description of the domain in the parameter space represented by parameter node 64. The inequalities between the parameters are separated by && signs. The first inequality says that the lower value describing the effect of X1 on X0 (notation L[X1,X0]) multiplied by the lower value describing the effect of X2 on X0 (notation L[X2,X0]) must be less than the threshold between X0 and X1 (notation T[X0,X1]). Note that this represents a relationship between the inputs of node X0 and the output threshold of node X1. All inequalities have this form.
Step10: Here is the picture of the domain graph. These become also unwieldy very quickly.
Step11: MorseDecomposition
Step12: As you can see, this pulls out the sequence of domains that are recurrent.
Step13: The annotation FC suggests that this Morse set contains solutions where each variable oscillates.
Step14: What are the differences between this and the previous network?
Step15: Now we look at the domain graph
Step16: Bistability is between domains 5 and 9, but in bigger graphs this would be impossible to see. We also want to recover the annotations. So we go to Morse decomposition.
Step17: We just recovered the strongly connected components
|
3,650
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import math
import openmc
fuel = openmc.Material(name="uo2")
fuel.add_element("U", 1, percent_type="ao", enrichment=4.25)
fuel.add_element("O", 2)
fuel.set_density("g/cc", 10.4)
clad = openmc.Material(name="clad")
clad.add_element("Zr", 1)
clad.set_density("g/cc", 6)
water = openmc.Material(name="water")
water.add_element("O", 1)
water.add_element("H", 2)
water.set_density("g/cc", 1.0)
water.add_s_alpha_beta("c_H_in_H2O")
materials = openmc.Materials([fuel, clad, water])
radii = [0.42, 0.45]
pin_surfaces = [openmc.ZCylinder(r=r) for r in radii]
pin_univ = openmc.model.pin(pin_surfaces, materials)
pin_univ.plot()
bound_box = openmc.rectangular_prism(0.62, 0.62, boundary_type="reflective")
root_cell = openmc.Cell(fill=pin_univ, region=bound_box)
root_univ = openmc.Universe(cells=[root_cell])
geometry = openmc.Geometry(root_univ)
settings = openmc.Settings()
settings.particles = 100
settings.inactive = 10
settings.batches = 50
geometry.export_to_xml()
settings.export_to_xml()
fuel.volume = math.pi * radii[0] ** 2
materials.export_to_xml()
import openmc.deplete
chain = openmc.deplete.Chain.from_xml("./chain_simple.xml")
chain.nuclide_dict
operator = openmc.deplete.Operator(geometry, settings, "./chain_simple.xml")
power = 174
time_steps = [30 * 24 * 60 * 60] * 6
integrator = openmc.deplete.PredictorIntegrator(operator, time_steps, power)
integrator.integrate()
!ls *.h5
results = openmc.deplete.ResultsList.from_hdf5("./depletion_results.h5")
time, k = results.get_eigenvalue()
time /= (24 * 60 * 60) # convert back to days from seconds
k
from matplotlib import pyplot
pyplot.errorbar(time, k[:, 0], yerr=k[:, 1])
pyplot.xlabel("Time [d]")
pyplot.ylabel(r"$k_{eff} \pm \sigma$");
_time, u5 = results.get_atoms("1", "U235")
_time, xe135 = results.get_atoms("1", "Xe135")
pyplot.plot(time, u5, label="U235")
pyplot.xlabel("Time [d]")
pyplot.ylabel("Number of atoms - U235");
pyplot.plot(time, xe135, label="Xe135")
pyplot.xlabel("Time [d]")
pyplot.ylabel("Number of atoms - Xe135");
_time, u5_fission = results.get_reaction_rate("1", "U235", "fission")
pyplot.plot(time, u5_fission)
pyplot.xlabel("Time [d]")
pyplot.ylabel("Fission reactions / s");
div_surfs_1 = [openmc.ZCylinder(r=1)]
div_1 = openmc.model.pin(div_surfs_1, [fuel, water], subdivisions={0: 10})
div_1.plot(width=(2.0, 2.0))
data_lib = openmc.data.DataLibrary()
data_lib.register_file("./chain_simple.xml")
data_lib.export_to_xml()
!cat cross_sections.xml
new_op = openmc.deplete.Operator(geometry, settings)
len(new_op.chain.nuclide_dict)
[nuc.name for nuc in new_op.chain.nuclides[:10]]
[nuc.name for nuc in new_op.chain.nuclides[-10:]]
operator.heavy_metal
max_step = 2 * operator.heavy_metal / power * 1E3
print("\"Maximum\" depletion step: {:5.3} [d]".format(max_step))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Build the Geometry
Step2: Here, we are going to use the openmc.model.pin function to build our pin cell. The pin function anticipates concentric cylinders and materials to fill the inner regions. One more material is needed than the number of cylinders, in order to cover the domain outside the final ring.
Step3: Using these radii, we define concentric ZCylinder objects. So long as the cylinders are concentric and increasing in radius, any orientation can be used. We also take advantage of the fact that the openmc.Materials object is a subclass of the list object to assign materials to the regions defined by the surfaces.
Step4: The first material, in our case fuel, is placed inside the first cylinder in the inner-most region. The second material, clad, fills the space between our cylinders, while water is placed outside the last ring. The pin function returns an openmc.Universe object, and has some additional features we will mention later.
Step5: Lastly we construct our settings. For the sake of time, a relatively low number of particles will be used.
Step6: The depletion interface relies on OpenMC to perform the transport simulation and obtain reaction rates and other important information. We then have to create the xml input files that openmc expects, specifically geometry.xml, settings.xml, and materials.xml.
Step7: Before we write the material file, we must add one bit of information
Step8: Setting up for depletion
Step9: In order to run the depletion calculation we need the following information
Step10: The primary entry point for depletion is the openmc.deplete.Operator. It relies on the openmc.deplete.Chain and helper classes to run openmc, retrieve and normalize reaction rates, and perform other tasks. For a thorough description, please see the full API documentation.
Step11: We will then simulate our fuel pin operating at linear power of 174 W/cm, or 174 W given a unit height for our problem.
Step12: For this problem, we will take depletion step sizes of 30 days, and instruct OpenMC to re-run a transport simulation every 30 days until we have modeled the problem over a six month cycle. The depletion interface expects the time to be given in seconds, so we will have to convert. Note that these values are not cumulative.
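The unit conversion described in this step is plain arithmetic and can be sketched on its own (the values match the notebook: six 30-day steps, each given as a step *length*, not a cumulative time):

```python
# OpenMC's depletion interface expects step lengths in seconds.
seconds_per_day = 24 * 60 * 60
step_days = 30
time_steps = [step_days * seconds_per_day] * 6  # six 30-day steps

print(time_steps[0])                       # 2592000 seconds per step
print(sum(time_steps) / seconds_per_day)   # 180.0 days simulated in total
```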
Step13: And lastly, we will use the basic predictor, or forward Euler, time integration scheme. Other, more advanced methods are provided to the user through openmc.deplete
Step14: To perform the simulation, we use the integrate method, and let openmc take care of the rest.
Step15: Processing the outputs
Step16: The depletion_results.h5 file contains information that is aggregated over all time steps through depletion. This includes the multiplication factor, as well as concentrations. We can process this file using the openmc.deplete.ResultsList object
Step17: The first column of k is the value of k-combined at each point in our simulation, while the second column contains the associated uncertainty. We can plot this using matplotlib
Step18: Due to the low number of particles selected, we have not only a very high uncertainty, but likely a horrendously poor fission source. This pin cell should have $k>1$, but we can still see the decline over time due to fuel consumption.
Step19: We can also examine reaction rates over time using the ResultsList
Step20: Helpful tips
Step21: The innermost region has been divided into 10 equal volume regions. We can pass additional arguments to divide multiple regions, except for the region outside the last cylinder.
Step22: This allows us to make an Operator simply with the geometry and settings arguments, provided we exported our library to OPENMC_CROSS_SECTIONS. For a problem where we built and registered a Chain using all the available nuclear data, we might see something like the following.
Step23: Choice of depletion step size
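The rule of thumb evaluated at the end of the notebook (`max_step = 2 * operator.heavy_metal / power * 1E3`) can be unpacked as a burnup-increment calculation. This is only a sketch: the 2 MWd/kgHM increment mirrors the factor of 2 in the notebook's formula and is a heuristic, not a hard limit, and the 5 g heavy-metal mass is an assumed illustrative value for a unit-height pin, not the number the operator would report.

```python
# Assumed pin parameters for illustration: heavy-metal mass in grams
# (operator.heavy_metal reports grams) and linear power in watts.
heavy_metal_g = 5.0   # hypothetical value, roughly a unit-height UO2 pin
power_w = 174.0

# Specific-power burnup rate in MWd/kgHM per day of operation.
burnup_rate = (power_w * 1e-6) / (heavy_metal_g * 1e-3)

# Step length that deposits ~2 MWd/kgHM -- algebraically identical to
# the notebook's 2 * heavy_metal / power * 1e3.
max_step_days = 2.0 / burnup_rate
print(f"{max_step_days:.1f} d")
```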
|
3,651
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
!head -n 30 open_exoplanet_catalogue.txt
data = np.genfromtxt('open_exoplanet_catalogue.txt' , delimiter = ",")
assert data.shape==(1993,24)
mass = data[:,2]  # column 2 holds the planetary mass; data[:2] would take the first two rows
assert True # leave for grading
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave for grading
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exoplanet properties
Step2: Use np.genfromtxt with a delimiter of ',' to read the data into a NumPy array called data
Step3: Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper.
Step4: Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis.
|
3,652
|
<ASSISTANT_TASK:>
Python Code:
import sys
try:
import docplex.mp
except:
raise Exception('Please install docplex. See https://pypi.org/project/docplex/')
products = [("kluski", 100, 0.6, 0.8),
("capellini", 200, 0.8, 0.9),
("fettucine", 300, 0.3, 0.4)]
# resources are a list of simple tuples (name, capacity)
resources = [("flour", 20),
("eggs", 40)]
consumptions = {("kluski", "flour"): 0.5,
("kluski", "eggs"): 0.2,
("capellini", "flour"): 0.4,
("capellini", "eggs"): 0.4,
("fettucine", "flour"): 0.3,
("fettucine", "eggs"): 0.6}
from docplex.mp.model import Model
mdl = Model(name="pasta")
inside_vars = mdl.continuous_var_dict(products, name='inside')
outside_vars = mdl.continuous_var_dict(products, name='outside')
# --- constraints ---
# demand satisfaction
mdl.add_constraints((inside_vars[prod] + outside_vars[prod] >= prod[1], 'ct_demand_%s' % prod[0]) for prod in products)
# --- resource capacity ---
mdl.add_constraints((mdl.sum(inside_vars[p] * consumptions[p[0], res[0]] for p in products) <= res[1], 'ct_res_%s' % res[0]) for res in resources)
mdl.print_information()
total_inside_cost = mdl.sum(inside_vars[p] * p[2] for p in products)
total_outside_cost = mdl.sum(outside_vars[p] * p[3] for p in products)
mdl.minimize(total_inside_cost + total_outside_cost)
mdl.solve()
obj = mdl.objective_value
print("* Production model solved with objective: {:g}".format(obj))
print("* Total inside cost=%g" % total_inside_cost.solution_value)
for p in products:
print("Inside production of {product}: {ins_var}".format(product=p[0], ins_var=inside_vars[p].solution_value))
print("* Total outside cost=%g" % total_outside_cost.solution_value)
for p in products:
print("Outside production of {product}: {out_var}".format(product=p[0], out_var=outside_vars[p].solution_value))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 2
Step2: Step 3
Step3: Define the decision variables
Step4: Express the business constraints
Step5: Express the objective
Step6: Solve with Decision Optimization
Step7: Step 5
|
3,653
|
<ASSISTANT_TASK:>
Python Code:
!type Examples\c-grammar.g
!cat Examples/arith.g
!cat Pure.g4
!cat -n Grammar.g4
!antlr4 -Dlanguage=Python3 Grammar.g4
from GrammarLexer import GrammarLexer
from GrammarParser import GrammarParser
import antlr4
class GrammarRule:
def __init__(self, variable, body):
self.mVariable = variable
self.mBody = body
def __eq__(self, other):
return isinstance(other, GrammarRule) and \
self.mVariable == other.mVariable and \
self.mBody == other.mBody
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash(self.__repr__())
def __repr__(self):
return f'{self.mVariable} → {" ".join(self.mBody)}'
def parse_grammar(filename):
input_stream = antlr4.FileStream(filename, encoding="utf-8")
lexer = GrammarLexer(input_stream)
token_stream = antlr4.CommonTokenStream(lexer)
parser = GrammarParser(token_stream)
grammar = parser.start()
return [GrammarRule(head, tuple(body)) for head, *body in grammar.g]
grammar = parse_grammar('Examples/c-grammar.g')
grammar
def is_var(name):
return name[0] != "'" and name.islower()
def collect_variables(Rules):
Variables = set()
for rule in Rules:
Variables.add(rule.mVariable)
for item in rule.mBody:
if is_var(item):
Variables.add(item)
return Variables
def collect_tokens(Rules):
Tokens = set()
for rule in Rules:
for item in rule.mBody:
if not is_var(item):
Tokens.add(item)
return Tokens
class ExtendedMarkedRule():
def __init__(self, variable, alpha, beta, follow):
self.mVariable = variable
self.mAlpha = alpha
self.mBeta = beta
self.mFollow = follow
def __eq__(self, other):
return isinstance(other, ExtendedMarkedRule) and \
self.mVariable == other.mVariable and \
self.mAlpha == other.mAlpha and \
self.mBeta == other.mBeta and \
self.mFollow == other.mFollow
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash(self.mVariable) + \
hash(self.mAlpha) + \
hash(self.mBeta) + \
hash(self.mFollow)
def __repr__(self):
alphaStr = ' '.join(self.mAlpha)
betaStr = ' '.join(self.mBeta)
if len(self.mFollow) > 1:
followStr = '{' + ','.join(self.mFollow) + '}'
else:
followStr = ','.join(self.mFollow)
return f'{self.mVariable} → {alphaStr} • {betaStr}: {followStr}'
def is_complete(self):
return len(self.mBeta) == 0
ExtendedMarkedRule.is_complete = is_complete
del is_complete
def symbol_after_dot(self):
if len(self.mBeta) > 0:
return self.mBeta[0]
return None
ExtendedMarkedRule.symbol_after_dot = symbol_after_dot
del symbol_after_dot
def is_var(name):
return name[0] != "'" and name.islower()
def next_var(self):
if len(self.mBeta) > 0:
var = self.mBeta[0]
if is_var(var):
return var
return None
ExtendedMarkedRule.next_var = next_var
del next_var
def move_dot(self):
return ExtendedMarkedRule(self.mVariable,
self.mAlpha + (self.mBeta[0],),
self.mBeta[1:],
self.mFollow)
ExtendedMarkedRule.move_dot = move_dot
del move_dot
def to_rule(self):
return GrammarRule(self.mVariable, self.mAlpha + self.mBeta)
ExtendedMarkedRule.to_rule = to_rule
del to_rule
def to_marked_rule(self):
return MarkedRule(self.mVariable, self.mAlpha, self.mBeta)
ExtendedMarkedRule.to_marked_rule = to_marked_rule
del to_marked_rule
class MarkedRule():
def __init__(self, variable, alpha, beta):
self.mVariable = variable
self.mAlpha = alpha
self.mBeta = beta
def __eq__(self, other):
return isinstance(other, MarkedRule) and \
self.mVariable == other.mVariable and \
self.mAlpha == other.mAlpha and \
self.mBeta == other.mBeta
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash(self.mVariable) + \
hash(self.mAlpha) + \
hash(self.mBeta)
def __repr__(self):
alphaStr = ' '.join(self.mAlpha)
betaStr = ' '.join(self.mBeta)
return f'{self.mVariable} → {alphaStr} • {betaStr}'
def combine_rules(M):
Result = set()
Core = set()
for emr1 in M:
Follow = set()
core1 = emr1.to_marked_rule()
if core1 in Core:
continue
Core.add(core1)
for emr2 in M:
core2 = emr2.to_marked_rule()
if core1 == core2:
Follow |= emr2.mFollow
new_emr = ExtendedMarkedRule(core1.mVariable, core1.mAlpha, core1.mBeta, frozenset(Follow))
Result.add(new_emr)
return frozenset(Result)
class Grammar():
def __init__(self, Rules):
self.mRules = Rules
self.mStart = Rules[0].mVariable
self.mVariables = collect_variables(Rules)
self.mTokens = collect_tokens(Rules)
self.mStates = set()
self.mStateNames = {}
self.mConflicts = False
self.mVariables.add('ŝ')
self.mTokens.add('$')
self.mRules.append(GrammarRule('ŝ', (self.mStart, ))) # augmenting
self.compute_tables()
def initialize_dictionary(Variables):
return { a: set() for a in Variables }
def compute_tables(self):
self.mFirst = initialize_dictionary(self.mVariables)
self.mFollow = initialize_dictionary(self.mVariables)
self.compute_first()
self.compute_follow()
self.compute_rule_names()
self.all_states()
self.compute_action_table()
self.compute_goto_table()
Grammar.compute_tables = compute_tables
del compute_tables
def compute_rule_names(self):
self.mRuleNames = {}
counter = 0
for rule in self.mRules:
self.mRuleNames[rule] = 'r' + str(counter)
counter += 1
Grammar.compute_rule_names = compute_rule_names
del compute_rule_names
def compute_first(self):
change = True
while change:
change = False
for rule in self.mRules:
a, body = rule.mVariable, rule.mBody
first_body = self.first_list(body)
if not (first_body <= self.mFirst[a]):
change = True
self.mFirst[a] |= first_body
print('First sets:')
for v in self.mVariables:
print(f'First({v}) = {self.mFirst[v]}')
Grammar.compute_first = compute_first
del compute_first
def first_list(self, alpha):
if len(alpha) == 0:
return { '' }
elif is_var(alpha[0]):
v, *r = alpha
return eps_union(self.mFirst[v], self.first_list(r))
else:
t = alpha[0]
return { t }
Grammar.first_list = first_list
del first_list
def eps_union(S, T):
if '' in S:
if '' in T:
return S | T
return (S - { '' }) | T
return S
def compute_follow(self):
self.mFollow[self.mStart] = { '$' }
change = True
while change:
change = False
for rule in self.mRules:
a, body = rule.mVariable, rule.mBody
for i in range(len(body)):
if is_var(body[i]):
yi = body[i]
Tail = self.first_list(body[i+1:])
firstTail = eps_union(Tail, self.mFollow[a])
if not (firstTail <= self.mFollow[yi]):
change = True
self.mFollow[yi] |= firstTail
print('Follow sets (note that "$" denotes the end of file):');
for v in self.mVariables:
print(f'Follow({v}) = {self.mFollow[v]}')
Grammar.compute_follow = compute_follow
del compute_follow
def cmp_closure(self, Marked_Rules):
All_Rules = Marked_Rules
New_Rules = Marked_Rules
while True:
More_Rules = set()
for rule in New_Rules:
c = rule.next_var()
if c == None:
continue
delta = rule.mBeta[1:]
L = rule.mFollow
for rule in self.mRules:
head, alpha = rule.mVariable, rule.mBody
if c == head:
newL = frozenset({ x for t in L for x in self.first_list(delta + (t,)) })
More_Rules |= { ExtendedMarkedRule(head, (), alpha, newL) }
if More_Rules <= All_Rules:
return frozenset(All_Rules)
New_Rules = More_Rules - All_Rules
All_Rules |= New_Rules
Grammar.cmp_closure = cmp_closure
del cmp_closure
def goto(self, Marked_Rules, x):
Result = set()
for mr in Marked_Rules:
if mr.symbol_after_dot() == x:
Result.add(mr.move_dot())
return combine_rules(self.cmp_closure(Result))
Grammar.goto = goto
del goto
def all_states(self):
start_state = self.cmp_closure({ ExtendedMarkedRule('ŝ', (), (self.mStart,), frozenset({'$'})) })
start_state = combine_rules(start_state)
self.mStates = { start_state }
New_States = self.mStates
while True:
More_States = set()
for Rule_Set in New_States:
for mr in Rule_Set:
if not mr.is_complete():
x = mr.symbol_after_dot()
next_state = self.goto(Rule_Set, x)
if next_state not in self.mStates and next_state not in More_States:
More_States.add(next_state)
print('.', end='')
if len(More_States) == 0:
break
New_States = More_States;
self.mStates |= New_States
print('\n', len(self.mStates), sep='')
print("All LR-states:")
counter = 1
self.mStateNames[start_state] = 's0'
print(f's0 = {set(start_state)}')
for state in self.mStates - { start_state }:
self.mStateNames[state] = f's{counter}'
print(f's{counter} = {set(state)}')
counter += 1
Grammar.all_states = all_states
del all_states
def compute_action_table(self):
self.mActionTable = {}
print('\nAction Table:')
for state in self.mStates:
stateName = self.mStateNames[state]
actionTable = {}
# compute shift actions
for token in self.mTokens:
newState = self.goto(state, token)
if newState != set():
newName = self.mStateNames[newState]
actionTable[token] = ('shift', newName)
self.mActionTable[stateName, token] = ('shift', newName)
print(f'action("{stateName}", {token}) = ("shift", {newName})')
# compute reduce actions
for mr in state:
if mr.is_complete():
for token in mr.mFollow:
action1 = actionTable.get(token)
action2 = ('reduce', mr.to_rule())
if action1 == None:
actionTable[token] = action2
r = self.mRuleNames[mr.to_rule()]
self.mActionTable[stateName, token] = ('reduce', r)
print(f'action("{stateName}", {token}) = {action2}')
elif action1 != action2:
self.mConflicts = True
print('')
print(f'conflict in state {stateName}:')
print(f'{stateName} = {state}')
print(f'action("{stateName}", {token}) = {action1}')
print(f'action("{stateName}", {token}) = {action2}')
print('')
for mr in state:
if mr == ExtendedMarkedRule('ŝ', (self.mStart,), (), frozenset({'$'})):
actionTable['$'] = 'accept'
self.mActionTable[stateName, '$'] = 'accept'
print(f'action("{stateName}", $) = accept')
Grammar.compute_action_table = compute_action_table
del compute_action_table
def compute_goto_table(self):
self.mGotoTable = {}
print('\nGoto Table:')
for state in self.mStates:
for var in self.mVariables:
newState = self.goto(state, var)
if newState != set():
stateName = self.mStateNames[state]
newName = self.mStateNames[newState]
self.mGotoTable[stateName, var] = newName
print(f'goto({stateName}, {var}) = {newName}')
Grammar.compute_goto_table = compute_goto_table
del compute_goto_table
%%time
g = Grammar(grammar)
def strip_quotes(t):
if t[0] == "'" and t[-1] == "'":
return t[1:-1]
return t
def dump_parse_table(self, file):
with open(file, 'w', encoding="utf-8") as handle:
handle.write('# Grammar rules:\n')
for rule in self.mRules:
rule_name = self.mRuleNames[rule]
handle.write(f'{rule_name} =("{rule.mVariable}", {rule.mBody})\n')
handle.write('\n# Action table:\n')
handle.write('actionTable = {}\n')
for s, t in self.mActionTable:
action = self.mActionTable[s, t]
t = strip_quotes(t)
if action[0] == 'reduce':
rule_name = action[1]
handle.write(f"actionTable['{s}', '{t}'] = ('reduce', {rule_name})\n")
elif action == 'accept':
handle.write(f"actionTable['{s}', '{t}'] = 'accept'\n")
else:
handle.write(f"actionTable['{s}', '{t}'] = {action}\n")
handle.write('\n# Goto table:\n')
handle.write('gotoTable = {}\n')
for s, v in self.mGotoTable:
state = self.mGotoTable[s, v]
handle.write(f"gotoTable['{s}', '{v}'] = '{state}'\n")
Grammar.dump_parse_table = dump_parse_table
del dump_parse_table
g.dump_parse_table('parse-table.py')
!type parse-table.py
!cat parse-table.py
!del GrammarLexer.* GrammarParser.* Grammar.tokens GrammarListener.py Grammar.interp
!rmdir /S /Q __pycache__
!dir /B
!rm GrammarLexer.* GrammarParser.* Grammar.tokens GrammarListener.py Grammar.interp
!rm -r __pycache__
!ls
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We use <span style="font-variant
Step2: The annotated grammar is stored in the file Grammar.g4.
Step3: We start by generating both scanner and parser.
Step4: The Class GrammarRule
Step5: The function parse_grammar takes a string filename as its argument and returns the grammar that is stored in the specified file. The grammar is represented as list of rules. Each rule is represented as a tuple. The example below will clarify this structure.
Step6: Given a string name, which is either a variable, a token, or a literal, the function is_var checks whether name is a variable. The function can distinguish variable names from tokens and literals because variable names consist only of lower case letters, while tokens are all uppercase and literals start with the character "'".
Step7: Given a list Rules of GrammarRules, the function collect_variables(Rules) returns the set of all variables occuring in Rules.
Step8: Given a set Rules of GrammarRules, the function collect_tokens(Rules) returns the set of all tokens and literals occuring in Rules.
Step9: Extended Marked Rules
Step10: Given an extended marked rule self, the function is_complete checks whether the extended marked rule self has the form
Step11: Given an extended marked rule self of the form
Step12: Given a grammar symbol name, which is either a variable, a token, or a literal, the function is_var checks whether name is a variable. The function can distinguish variable names from tokens and literals because variable names consist only of lower case letters, while tokens are all uppercase and literals start with the character "'".
Step13: Given an extended marked rule, this function returns the variable following the dot. If there is no variable following the dot, the function returns None.
Step14: The function move_dot(self) transforms an extended marked rule of the form
Step15: The function to_rule(self) turns the extended marked rule self into a GrammarRule, i.e. the extended marked rule
Step16: The function to_marked_rule(self) turns the extended marked rule self into a MarkedRule, i.e. the extended marked rule
Step17: The class MarkedRule is similar to the class ExtendedMarkedRule but does not have the follow set.
Step18: Given a set of extended marked rules M, the function combine_rule combines those extended marked ruless that have the same core
Step19: LR-Table-Generation
Step20: Given a set of Variables, the function initialize_dictionary returns a dictionary that assigns the empty set to all variables.
Step21: Given a Grammar, the function compute_tables computes
Step22: The function compute_rule_names assigns a unique name to each rule of the grammar. These names are used later
Step23: The function compute_first(self) computes the sets $\texttt{First}(c)$ for all variables $c$ and stores them in the dictionary mFirst. Abstractly, given a variable $c$ the function $\texttt{First}(c)$ is the set of all tokens that can start a string that is derived from $c$
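The fixpoint iteration described here can be shown on a toy grammar. This is a self-contained sketch, not the notebook's implementation: the two-rule grammar (`s -> 'a' s | b`, `b -> 'c'`) is made up for illustration, and nullable symbols are ignored for brevity.

```python
# Toy First-set fixpoint: repeatedly propagate the first symbol of each
# rule body into First(head) until nothing changes.
rules = [('s', ("'a'", 's')), ('s', ('b',)), ('b', ("'c'",))]
first = {'s': set(), 'b': set()}
changed = True
while changed:
    changed = False
    for head, body in rules:
        sym = body[0]
        # a variable contributes its First set, a literal contributes itself
        new = first[sym] if sym in first else {sym}
        if not new <= first[head]:
            first[head] |= new
            changed = True

# s can begin with 'a' or 'c'; b only with 'c'
print({k: sorted(v) for k, v in first.items()})
```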
Step24: Given a tuple of variables and tokens alpha, the function first_list(alpha) computes the function $\texttt{FirstList}(\alpha)$ that has been defined above. If alpha is nullable, then the result will contain the empty string $\varepsilon = \texttt{''}$.
Step25: The arguments S and T of eps_union are sets that contain tokens and, additionally, they might contain the empty string.
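A tiny standalone illustration of this behaviour (the function body is copied from the notebook so the example is self-contained):

```python
def eps_union(S, T):
    # If S is nullable (contains ''), the tokens of T can also appear;
    # the empty string survives only if it is in both sets.
    if '' in S:
        if '' in T:
            return S | T
        return (S - {''}) | T
    return S

print(eps_union({'a', ''}, {'b'}))      # tokens merged, epsilon dropped
print(eps_union({'a', ''}, {'b', ''}))  # tokens merged, epsilon kept
print(eps_union({'a'}, {'b'}))          # S not nullable, T ignored
```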
Step26: Given an augmented grammar $G = \langle V,T,R\cup{\widehat{s} \rightarrow s\,\$}, \widehat{s}\rangle$
Step27: If $\mathcal{M}$ is a set of extended marked rules, then the closure of $\mathcal{M}$ is the smallest set $\mathcal{K}$ such that
Step28: Given a set of extended marked rules $\mathcal{M}$ and a grammar symbol $X$, the function $\texttt{goto}(\mathcal{M}, X)$
Step29: The function all_states computes the set of all states of an LR-parser. The function starts with the state
Step30: The following function computes the action table and is defined as follows
Step31: The function compute_goto_table computes the goto table.
Step32: The command below cleans the directory. If you are running windows, you have to replace rm with del.
|
3,654
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
N = 50
sig_x = 0.5
sig_y = 0.5
a_true = 5.0
b_true = 2.0
x_true = np.random.uniform(0,10,size=N)
y_true = a_true + x_true*b_true
x_obs = x_true + np.random.normal(0, sig_x, size=N)
y_obs = y_true + np.random.normal(0, sig_y, size=N)
fig,ax = plt.subplots(1)
ax.errorbar(x_obs, y_obs, xerr=sig_x, yerr=sig_y, fmt='o')
xx = np.array([0,10])
yy = a_true + xx*b_true
ax.plot(xx, yy, '-')
ax.set_xlabel('X')
ax.set_ylabel('Y')
from scipy.optimize import curve_fit
def mod(x, a, b):
return a + x*b
sig_y = np.ones(N)*sig_y # curve_fit needs arrays of errors
sig_x = np.ones(N)*sig_x
sigma_total = np.sqrt(sig_y**2 + b_true**2*sig_x**2)
pars,cov = curve_fit(mod, x_obs, y_obs, [0, 1], sigma=sigma_total)
print(pars)
print(cov)
ax.plot(xx, xx*pars[1]+pars[0])
fig
slopes = []
intercepts = []
for i in range(10000):
x_obs = x_true + np.random.normal(0, sig_x, size=N)
y_obs = y_true + np.random.normal(0, sig_y, size=N)
sig_total = y_obs*0 + np.sqrt(sig_x**2*b_true**2 + sig_y**2)
pars,cov = curve_fit(mod, x_obs, y_obs, [0,1], sigma=sig_total)
slopes.append(pars[1])
intercepts.append(pars[0])
fig,axes = plt.subplots(1,2, figsize=(10,5))
axes[0].hist(slopes, bins=20)
axes[0].axvline(b_true, color='red', zorder=10)
axes[0].set_xlabel('slope')
axes[1].hist(intercepts, bins=20)
axes[1].axvline(a_true, color='red', zorder=10)
axes[1].set_xlabel('intercept')
from scipy.stats import norm
def lnprior(p):
a,b = p
return 0
def lnlike(p, x, y, dx, dy):
a,b = p
model = mod(x, a, b)
sig_tot = np.sqrt(dx**2*b**2 + dy**2)
return sum(norm.logpdf(y, loc=model, scale=sig_tot))
def lnprob(p, x, y, dx, dy):
a,b = p
lp = lnprior(p)
if not np.isfinite(lp):
return -np.inf
return lp + lnlike(p, x, y, dx, dy)
Nwalker,Ndim = 10,2
ptrue = np.array([a_true,b_true])
# add a random vector 0.1 times the true vector to the true vector
p0 = [ptrue + 0.1*np.random.randn(Ndim)*ptrue for i in range(Nwalker)]
import emcee
sampler = emcee.EnsembleSampler(Nwalker, Ndim, lnprob, args=(x_obs, y_obs, sig_x, sig_y))
pos,prob,state = sampler.run_mcmc(p0, 500)
fig,ax = plt.subplots(2,1)
res = [ax[i].plot(sampler.chain[:,:,i].T, '-', color='k', alpha=0.3) for i in range(2)]
res = [ax[i].axhline(ptrue[i]) for i in range(2)]
sampler.reset()
pos,prob,state = sampler.run_mcmc(pos, 1000)
print(np.mean(sampler.flatchain, axis=0))
print(np.std(sampler.flatchain, axis=0))
slopes = []
intercepts = []
for i in range(100):
x_obs = x_true + np.random.normal(0, sig_x, size=N)
y_obs = y_true + np.random.normal(0, sig_y, size=N)
sampler = emcee.EnsembleSampler(Nwalker, Ndim, lnprob, args=(x_obs, y_obs, sig_x, sig_y))
pos,prob,state = sampler.run_mcmc(pos, 100)
sampler.reset()
pos,prob,state = sampler.run_mcmc(pos, 500)
mns = np.mean(sampler.flatchain, axis=0)
slopes.append(mns[1])
intercepts.append(mns[0])
print(i, end=' ')
fig,axes = plt.subplots(1,2, figsize=(10,5))
axes[0].hist(slopes, bins=10)
axes[0].axvline(b_true, color='red', zorder=10)
axes[0].set_xlabel('slope')
axes[1].hist(intercepts, bins=10)
axes[1].axvline(a_true, color='red', zorder=10)
axes[1].set_xlabel('intercept')
import pystan
model_str = '''
data {
int <lower=1> N; // number of points
vector[N] x; // observed x
vector[N] y; // observed y
vector[N] dx; // error in x
vector[N] dy; // error in y
}
parameters {
real <lower=-10, upper=10> a; // intercept
real <lower=-10, upper=10> b; // slope
vector<lower=0, upper=10>[N] xtrue; // true values of x
}
model {
x ~ normal(xtrue, dx);
y ~ normal(a + b*xtrue, dy);
}'''
sampler2 = pystan.StanModel(model_code=model_str, verbose=False)
idata = dict(N=len(x_obs), x=x_obs, y=y_obs, dx=sig_x, dy=sig_y)
output = sampler2.sampling(data=idata, chains=4, iter=1000, warmup=500)
print(output)
slopes = []
intercepts = []
for i in range(500):
x_obs = x_true + np.random.normal(0, sig_x, size=N)
y_obs = y_true + np.random.normal(0, sig_y, size=N)
idata = dict(N=len(x_obs), x=x_obs, y=y_obs, dx=sig_x, dy=sig_y)
output = sampler2.sampling(data=idata, chains=4, iter=1000, warmup=500)
samples = output.extract(permuted=True)
slopes.append(np.mean(samples['b']))
intercepts.append(np.mean(samples['a']))
print(i, end=' ')
fig,axes = plt.subplots(1,2, figsize=(10,5))
axes[0].hist(slopes, bins=20)
axes[0].axvline(b_true, color='red', zorder=10)
axes[1].hist(intercepts, bins=20)
axes[1].axvline(a_true, color='red', zorder=10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Looks reasonable, I hope. We want to find the best-fit line to the data. It should be close to the orange line (the truth), but not equal, since we introduced noise. First, we could fit this with scipy.optimize.curve_fit as in a previous notebook. It needs an error (sigma), but that's usually the error in $y$. How do we incorporate the error in $x$?. Well, one way to think about it is we are minimizing the residuals (numerator of $\chi^2$), call them $\Delta_i$
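The effective-variance combination this step leads up to (used in the code as `sigma_total = np.sqrt(sig_y**2 + b_true**2*sig_x**2)`) can be illustrated with a tiny standalone helper: for a straight line, an error dx in x maps to an error |slope|*dx in y, and the two independent errors add in quadrature.

```python
import math

def total_sigma(sig_x, sig_y, slope):
    # Propagate the x-error through the line and combine in quadrature.
    return math.sqrt(sig_y**2 + slope**2 * sig_x**2)

# With sig_x = sig_y = 0.5 and the true slope of 2 used in this notebook:
print(total_sigma(0.5, 0.5, 2.0))  # sqrt(0.25 + 4*0.25) = sqrt(1.25) ~ 1.118
```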
Step2: The green line (our fit to the data) should be pretty close to the orange line. You could re-run the notebook a few times to see the result change again and again (since the data would be random each time). This is a method called Monte Carlo (MC) and you can use it to make sure your method is working. Let's do that. We'll re-create the data many times (you can choose how long you want to wait). Each time, we compute the slope and intercept, then save these values in a couple of lists. Finally, make histograms of the values we got and compare to the truth.
Step3: You should see a couple of histograms and red lines that represent the true values. Depending on what you chose for your uncertainties, the red lines may not line up with the peaks of the distributions. In fact, the larger you make the $x$ errors, the more they won't agree. Now, this isn't just due to a random draw and noisy data. This is happening on average over a large number of trials. The conclusion
Step4: Note that the priors are completely uninformative (uniform over all values). The likelihood is computed in the usual way with the normal distribution and we compute the total error based on the current slope, which is updated at each iteration, so the errors will change as the walkers make their way through parameter space. You should think about what this might mean as the slope becomes small or large. Let's run a bunch of walkers and see what we get.
Step5: Now plot out the samples to see how long it takes to get a convergence.
Step6: Looks like 100 iterations should do it. Let's reset and run for longer and print out the results.
Step7: Okay, slope and intercept are "close". But do we have the same problem we did before? If we run this a bunch of times, will the result still be biased? Let's check. I'm only going to run this for 100 iterations, just to get an idea. MCMC samplers take much longer to run than simple least-squares. We'll print out the index as we go so you can judge if it will take too long on your computer and if you might want to reduce the number of loops.
Step8: Hmmmm. Looks like the problem is still there. So what's the reason for this? Well, it has to do with the way that we incorporated the $\sigma_x$, obviously. This was done by what's called marginalizing. Let's look a bit more closely at what it means to have observed our data. We know (since we made the fake data ourselves) that both $x$ and $y$ had noise added to them that was Gaussian (normally distributed). So, using $N$ to represent normal distributions, the likelihoods for our observed $x$ and $y$ values are
Step9: Note that by putting limits (0,10) on xtrue, I've specified a uniform prior, exactly as the true values were drawn. The other parameters also have uniform priors associated with them. Now we prepare the data and run a test sample, and check to see if we have convergence (rhat close to 1.0).
Step10: Note that we now have N+2 parameters being reported. I kept the number of iterations lower than I would normally and the values of Rhat and standard errors in the parameters look reasonable. If not, increase iter. But even though STAN lets you have large numbers of parameters, the more you have, the slower the sampler will be. We'll do the same thing we did above and re-fit random data sets many times. This will take a while... grab a coffee or tea. Catch up on facebook. But for this final example, we want enough samples to be confident we've "fixed" the problem.
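Step 10 alludes to a total error that depends on the current slope. A minimal sketch of that marginalized likelihood for a straight line with Gaussian errors on both axes — marginalizing over the true x values folds the x uncertainty into an effective variance. Function names and the synthetic data here are made up for illustration, not taken from the notebook:

```python
import numpy as np

def line_loglike(params, x, y, sigma_x, sigma_y):
    """Gaussian log-likelihood for y = m*x + b with errors on both axes.

    Marginalizing over the (Gaussian) true x values gives an effective
    variance sigma_y^2 + m^2 * sigma_x^2, which depends on the slope --
    this is why the errors change as the walkers move through parameter space.
    """
    m, b = params
    var_eff = sigma_y**2 + (m * sigma_x) ** 2
    resid = y - (m * x + b)
    return -0.5 * np.sum(resid**2 / var_eff + np.log(2 * np.pi * var_eff))

# Noiseless data on the line y = 2x + 1: the true parameters should beat
# a deliberately wrong slope
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0
ll_true = line_loglike((2.0, 1.0), x, y, sigma_x=0.5, sigma_y=0.5)
ll_off = line_loglike((1.5, 1.0), x, y, sigma_x=0.5, sigma_y=0.5)
print(ll_true > ll_off)  # True
```

Note the log(var_eff) term: without it, the sampler could drive the slope to inflate the effective variance for free.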
|
3,655
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from statsmodels.compat import lzip
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.formula.api import ols
plt.rc("figure", figsize=(16,8))
plt.rc("font", size=14)
prestige = sm.datasets.get_rdataset("Duncan", "carData", cache=True).data
prestige.head()
prestige_model = ols("prestige ~ income + education", data=prestige).fit()
print(prestige_model.summary())
fig = sm.graphics.influence_plot(prestige_model, criterion="cooks")
fig.tight_layout(pad=1.0)
fig = sm.graphics.plot_partregress("prestige", "income", ["income", "education"], data=prestige)
fig.tight_layout(pad=1.0)
fig = sm.graphics.plot_partregress("prestige", "income", ["education"], data=prestige)
fig.tight_layout(pad=1.0)
subset = ~prestige.index.isin(["conductor", "RR.engineer", "minister"])
prestige_model2 = ols("prestige ~ income + education", data=prestige, subset=subset).fit()
print(prestige_model2.summary())
fig = sm.graphics.plot_partregress_grid(prestige_model)
fig.tight_layout(pad=1.0)
fig = sm.graphics.plot_ccpr(prestige_model, "education")
fig.tight_layout(pad=1.0)
fig = sm.graphics.plot_ccpr_grid(prestige_model)
fig.tight_layout(pad=1.0)
fig = sm.graphics.plot_regress_exog(prestige_model, "education")
fig.tight_layout(pad=1.0)
fig = sm.graphics.plot_fit(prestige_model, "education")
fig.tight_layout(pad=1.0)
#dta = pd.read_csv("http://www.stat.ufl.edu/~aa/social/csv_files/statewide-crime-2.csv")
#dta = dta.set_index("State", inplace=True).dropna()
#dta.rename(columns={"VR" : "crime",
# "MR" : "murder",
# "M" : "pctmetro",
# "W" : "pctwhite",
# "H" : "pcths",
# "P" : "poverty",
# "S" : "single"
# }, inplace=True)
#
#crime_model = ols("murder ~ pctmetro + poverty + pcths + single", data=dta).fit()
dta = sm.datasets.statecrime.load_pandas().data
crime_model = ols("murder ~ urban + poverty + hs_grad + single", data=dta).fit()
print(crime_model.summary())
fig = sm.graphics.plot_partregress_grid(crime_model)
fig.tight_layout(pad=1.0)
fig = sm.graphics.plot_partregress("murder", "hs_grad", ["urban", "poverty", "single"], data=dta)
fig.tight_layout(pad=1.0)
fig = sm.graphics.plot_leverage_resid2(crime_model)
fig.tight_layout(pad=1.0)
fig = sm.graphics.influence_plot(crime_model)
fig.tight_layout(pad=1.0)
from statsmodels.formula.api import rlm
rob_crime_model = rlm("murder ~ urban + poverty + hs_grad + single", data=dta,
M=sm.robust.norms.TukeyBiweight(3)).fit(conv="weights")
print(rob_crime_model.summary())
#rob_crime_model = rlm("murder ~ pctmetro + poverty + pcths + single", data=dta, M=sm.robust.norms.TukeyBiweight()).fit(conv="weights")
#print(rob_crime_model.summary())
weights = rob_crime_model.weights
idx = weights > 0
X = rob_crime_model.model.exog[idx.values]
ww = weights[idx] / weights[idx].mean()
hat_matrix_diag = ww*(X*np.linalg.pinv(X).T).sum(1)
resid = rob_crime_model.resid
resid2 = resid**2
resid2 /= resid2.sum()
nobs = int(idx.sum())
hm = hat_matrix_diag.mean()
rm = resid2.mean()
from statsmodels.graphics import utils
fig, ax = plt.subplots(figsize=(16,8))
ax.plot(resid2[idx], hat_matrix_diag, 'o')
ax = utils.annotate_axes(range(nobs), labels=rob_crime_model.model.data.row_labels[idx],
points=lzip(resid2[idx], hat_matrix_diag), offset_points=[(-5,5)]*nobs,
size="large", ax=ax)
ax.set_xlabel("resid2")
ax.set_ylabel("leverage")
ylim = ax.get_ylim()
ax.vlines(rm, *ylim)
xlim = ax.get_xlim()
ax.hlines(hm, *xlim)
ax.margins(0,0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Duncan's Prestige Dataset
Step2: Influence plots
Step3: As you can see there are a few worrisome observations. Both contractor and reporter have low leverage but a large residual. <br />
Step4: As you can see the partial regression plot confirms the influence of conductor, minister, and RR.engineer on the partial relationship between income and prestige. The cases greatly decrease the effect of income on prestige. Dropping these cases confirms this.
Step5: For a quick check of all the regressors, you can use plot_partregress_grid. These plots will not label the <br />
Step6: Component-Component plus Residual (CCPR) Plots
Step7: As you can see the relationship between the variation in prestige explained by education conditional on income seems to be linear, though you can see there are some observations that are exerting considerable influence on the relationship. We can quickly look at more than one variable by using plot_ccpr_grid.
Step8: Single Variable Regression Diagnostics
Step9: Fit Plot
Step10: Statewide Crime 2009 Dataset
Step11: Partial Regression Plots (Crime Data)
Step12: Leverage-Resid<sup>2</sup> Plot
Step13: Influence Plot
Step14: Using robust regression to correct for outliers.
Step15: There is not yet an influence diagnostics method as part of RLM, but we can recreate them. (This depends on the status of issue #888)
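The leverage computation in the robust-regression cell uses a compact identity: the diagonal of the hat matrix H = X(X'X)⁻¹X' equals `(X * pinv(X).T).sum(1)`, so the full n×n matrix is never formed. A standalone sanity check on synthetic data (the data here is made up; only the identity is the point):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
X = np.column_stack([np.ones(n), rng.normal(size=n)])

# leverage h_ii = diag(H) with H = X (X'X)^{-1} X', computed without
# materializing H (same trick as in the RLM influence cell)
hat_diag = (X * np.linalg.pinv(X).T).sum(1)

# sanity checks: 0 <= h_ii <= 1 and trace(H) equals the column rank of X
print(bool(np.all((hat_diag >= 0) & (hat_diag <= 1))))  # True
print(round(float(hat_diag.sum()), 6))                  # 2.0
```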
|
3,656
|
<ASSISTANT_TASK:>
Python Code:
text = "yeah, but no, but yeah, but no, but yeah"
# Exact match
text == "yeah"
# Match at start or end
text.startswith("yeah")
text.endswith("yeah")
text.endswith("no")
text.find("no")
text1 = "11/27/2012"
text2 = "Nov 27, 2012"
import re
# Simple matching: \d+ means match one or more digits
if re.match(r"\d+/\d+/\d+", text1):
print("yes")
else:
print("no")
if re.match(r"\d+/\d+/\d+", text2):
print("yes")
else:
print("no")
datepat = re.compile(r"\d+/\d+/\d+")
if datepat.match(text1):
print("yes")
else:
print("no")
if datepat.match(text2):
print("yes")
else:
print("no")
text = "Today is 11/27/2012, PyCon starts 3/13/2013."
datepat.findall(text)
datepat = re.compile(r"(\d+)/(\d+)/(\d+)")
m = datepat.match("11/27/2012")
m
# Extract the contents of each group
m.group(0)
m.group(1)
m.group(2)
m.group(3)
m.groups()
month, day, year = m.groups()
print(month, day, year)
# Find all matches (notice splitting into tuples)
text
datepat.findall(text)
for month, day, year in datepat.findall(text):
print("{}-{}-{}".format(year, month, day))
for m in datepat.finditer(text):
print(m.groups())
m = datepat.match("11/27/2012abcdef")
m
m.group()
datepat = re.compile(r"(\d+)/(\d+)/(\d+)$")
datepat.match("11/27/2012abcdef")
datepat.match("11/27/2012")
re.findall(r"(\d+)/(\d+)/(\d+)", text)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For complex matching you need regular expressions and the re module. To illustrate the basics, suppose you want to match date strings in numeric format such as 11/27/2012; you can do it like this:
Step2: If you want to use the same pattern for many matches, precompile the pattern string into a pattern object first. For example:
Step3: match() always matches from the start of the string; to find the pattern anywhere in the string, use the findall() method instead. For example:
Step4: When defining a regular expression, parentheses are commonly used to capture groups. For example:
Step5: Capture groups make later processing simpler, because the contents of each group can be extracted separately. For example:
Step6: The findall() method searches the text and returns all matches as a list. If you want the matches returned iteratively, use the finditer() method instead. For example:
Step7: Discussion
Step8: For an exact match, make sure the regular expression ends with $, like this:
Step9: Finally, if you are only doing a single simple match/search, you can skip the compilation step and use the re module-level functions directly. For example:
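A related option not used in the cells above: `Pattern.fullmatch` (Python 3.4+) requires the whole string to match, which makes the exact-match intent explicit without appending `$` to the pattern.

```python
import re

datepat = re.compile(r"(\d+)/(\d+)/(\d+)")

# fullmatch anchors at both ends, so trailing junk is rejected
print(datepat.fullmatch("11/27/2012") is not None)    # True
print(datepat.fullmatch("11/27/2012abcdef") is None)  # True
```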
|
3,657
|
<ASSISTANT_TASK:>
Python Code:
import os
resFile = os.path.join(
os.environ["SERPENT_TOOLS_DATA"],
"InnerAssembly_res.m")
import numpy as np
import serpentTools
from serpentTools.settings import rc
rc['serpentVersion'] = '2.1.30'
res = serpentTools.read(resFile)
print(res.metadata['version']) # Serpent version used for the execution
print(res.metadata['decayDataFilePath']) # Directory path for data libraries
print(res.metadata['inputFileName']) # Directory path for data libraries
res.metadata.keys()
res.metadata['startDate']
res.metadata['pop'], res.metadata['skip'] , res.metadata['cycles']
sorted(res.resdata.keys())[0:5]
res.resdata['absKeff']
res.resdata['absKeff'][:,0]
res.resdata['burnup']
res.resdata['burnDays']
res.resdata['totCpuTime']
res["burnup"]
res.get("absKeff")
res.plot('absKeff')
res.plot('burnup', ['absKeff', 'colKeff'])
# plot multiple values with better labels and formatting
res.plot(
'burnup', {'absKeff': '$k_{eff}^{abs}$', 'colKeff': '$k_{eff}^{col}$'},
ylabel=r'Criticality $\pm 3\sigma$',
legend='above', ncol=2)
res.plot(
'burnStep', {'actinideIngTox': 'Actinide Ingestion Tox'},
right={'totCpuTime': "CPU Time [right]"}, sigma=0, rightlabel='CPU Time',
logy=[False, True])
for key in sorted(res.universes):
break
key
key[0]
assert key.burnup == key[1]
sorted(res.universes)[:10]
print(res.universes['0', 0, 0, 0])
univ0 = res.getUniv('0', index=0)
print(univ0)
univ3101 = res.getUniv('3101', index=3)
print(univ3101)
univ3102 = res.getUniv('3102', burnup=0.1)
print(univ3102)
print(res.getUniv('0', timeDays=24.0096))
sorted(univ0.infExp)[:10]
univ0.infExp['infAbs']
univ0["infAbs"]
univ0.infExp['infFlx']
univ0.infUnc['infFlx']
sorted(univ0.b1Exp)[:10]
univ0.b1Exp['b1Flx']
univ0["b1Flx"]
univ0.b1Exp['b1Abs']
sorted(univ0.gc)[:5]
univ3101.gc['betaEff']
univ3101["betaEff"]
univ3101.groups
univ3101.microGroups[:5:]
univ0.plot(['infAbs', 'b1Abs']);
univ0.plot(['infTot', 'infFlx', 'infMicroFlx'], legend='right');
fmt = r"Universe {u} - $\Sigma_{abs}^\infty$"
ax = univ3101.plot('infFiss', labelFmt=fmt)
univ3102.plot('infFiss', ax=ax, labelFmt=fmt, legend='above', ncol=2);
rc.keys()
rc['xs.variableGroups'] = ['versions', 'xs', 'eig', 'burnup-coeff']
rc['xs.getB1XS'] = False
resFilt = serpentTools.read(resFile)
resFilt.metadata.keys()
resFilt.resdata.keys()
univ0Filt = resFilt.getUniv('0', index=1)
univ0Filt.infExp.keys()
univ0Filt.b1Exp
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Results Reader
Step2: Metadata
Step3: Results Data
Step4: Values are presented in similar fashion as if they were read in to Matlab, with one exception. Serpent currently appends a new row for each burnup step, but also for each homogenized universe. This results in repetition of many quantities as Serpent loops over group constant data. The ResultsReader understands Serpent outputs and knows when to append "new" result data to avoid repetition.
Step5: Data in the resdata dictionary can be obtained by indexing directly into the reader with
Step6: Plotting Results Data
Step7: Pass a dictionary of {variable
Step8: Using the right argument, quantities can be plotted on the left and right y-axis. Similar formatting options are available
Step9: Homogenized Universes
Step10: Results, such as infinite cross-sections, b1-leakage corrected cross-sections, kinetic parameters, are included in .universes. All the results include values and uncertainties.
Step11: One can directly index into the universes dictionary to obtain data for a specific universe.
Step12: However this requires knowledge of all four parameters which may be difficult. The getUniv method retrieves a specific universe that matches universe id and time of interest.
Step13: Working with homogenized universe data
Step14: Uncertainties can be obtained by using the infUnc, b1Unc, and gcUnc dictionaries. Uncertainties are relative, as they appear in the output files.
Step15: Serpent also outputs the B1 cross-sections. However, the user must enable the B1 option by setting the fum card. If this card is not enabled by the user, the B1_ variables will all be zeros.
Step16: Data that does not contain the prefix INF_ or B1_ is stored under the gc and gcUnc fields. Criticality, kinetic, and other variables are stored under this field.
Step17: Macro- and micro-group structures are stored directly in the universe in MeV as they appear in the Serpent output files. This means that the macro-group structure is in order of descending energy, while micro-group are in order of increasing energy.
Step18: Plotting universes
Step19: Macroscopic and microscopic quantities, such as micro-group flux, can be plotted on the same figure.
Step20: For plotting data from multiple universes, pass the returned matplotlib.axes.Axes object, on which the plot was drawn, into the plot method for the next universe. The labelFmt argument can be used to differentiate between plotted data. The following strings are replaced when creating the labels
Step21: User Defined Settings
Step22: The rc object and various xs.* settings can be used to control the ResultsReader. Specifically, these settings can be used to store only specific bits of information. Here, we will store the version of Serpent, various cross sections, eigenvalues, and burnup data.
Step23: Further, instruct the reader to only read infinite medium cross sections, not critical spectrum B1 cross sections.
|
3,658
|
<ASSISTANT_TASK:>
Python Code:
from imitation.algorithms import preference_comparisons
from imitation.rewards.reward_nets import BasicRewardNet
from imitation.util.networks import RunningNorm
from imitation.policies.base import FeedForward32Policy, NormalizeFeaturesExtractor
import seals
import gym
from stable_baselines3.common.vec_env import DummyVecEnv
from stable_baselines3 import PPO
venv = DummyVecEnv([lambda: gym.make("seals/CartPole-v0")] * 8)
reward_net = BasicRewardNet(
venv.observation_space, venv.action_space, normalize_input_layer=RunningNorm
)
fragmenter = preference_comparisons.RandomFragmenter(warning_threshold=0, seed=0)
gatherer = preference_comparisons.SyntheticGatherer(seed=0)
reward_trainer = preference_comparisons.CrossEntropyRewardTrainer(
model=reward_net,
epochs=3,
)
agent = PPO(
policy=FeedForward32Policy,
policy_kwargs=dict(
features_extractor_class=NormalizeFeaturesExtractor,
features_extractor_kwargs=dict(normalize_class=RunningNorm),
),
env=venv,
seed=0,
n_steps=2048 // venv.num_envs,
batch_size=64,
ent_coef=0.0,
learning_rate=0.0003,
n_epochs=10,
)
trajectory_generator = preference_comparisons.AgentTrainer(
algorithm=agent,
reward_fn=reward_net,
exploration_frac=0.0,
seed=0,
)
pref_comparisons = preference_comparisons.PreferenceComparisons(
trajectory_generator,
reward_net,
fragmenter=fragmenter,
preference_gatherer=gatherer,
reward_trainer=reward_trainer,
comparisons_per_iteration=100,
fragment_length=100,
transition_oversampling=1,
initial_comparison_frac=0.1,
allow_variable_horizon=False,
seed=0,
initial_epoch_multiplier=2, # Note: set to 200 to achieve sensible results
)
pref_comparisons.train(
total_timesteps=1000, # Note: set to 40000 to achieve sensible results
total_comparisons=120, # Note: set to 4000 to achieve sensible results
)
from imitation.rewards.reward_wrapper import RewardVecEnvWrapper
learned_reward_venv = RewardVecEnvWrapper(venv, reward_net.predict)
from stable_baselines3 import PPO
from stable_baselines3.ppo import MlpPolicy
learner = PPO(
policy=MlpPolicy,
env=learned_reward_venv,
seed=0,
batch_size=64,
ent_coef=0.0,
learning_rate=0.0003,
n_epochs=10,
n_steps=64,
)
learner.learn(1000) # Note: set to 100000 to train a proficient expert
from stable_baselines3.common.evaluation import evaluate_policy
reward, _ = evaluate_policy(agent.policy, venv, 10)
print(reward)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then we can start training the reward model. Note that we need to specify the total timesteps that the agent should be trained and how many fragment comparisons should be made.
Step2: After we trained the reward network using the preference comparisons algorithm, we can wrap our environment with that learned reward.
Step3: Now we can train an agent, that only sees those learned reward.
Step4: Then we can evaluate it using the original reward.
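The wrapping step above swaps the environment's native reward for the learned one while leaving observations and dynamics untouched. A heavily simplified sketch of that idea with a toy environment — this is the concept behind `RewardVecEnvWrapper`, not its implementation:

```python
class CountEnv:
    """Toy stand-in for an environment: state counts up, native reward is 0."""
    def __init__(self):
        self.state = 0

    def step(self, action):
        self.state += action
        return self.state, 0.0, False, {}

class RewardWrapper:
    """Substitute a learned reward function for the env's own reward."""
    def __init__(self, env, reward_fn):
        self.env = env
        self.reward_fn = reward_fn

    def step(self, action):
        obs, _, done, info = self.env.step(action)  # native reward discarded
        return obs, self.reward_fn(obs, action), done, info

env = RewardWrapper(CountEnv(), lambda obs, act: float(obs * act))
obs, reward, done, info = env.step(2)
print(obs, reward)  # 2 4.0
```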
|
3,659
|
<ASSISTANT_TASK:>
Python Code:
%%bash
curl -v -s --head https://demo.loris.ca/main.php 2>&1 |grep '[<>]'
%%bash
curl -k -i -s \
-H 'Content-Type: application/x-www-form-urlencoded' \
-d 'username=demo&password=demo&login=Click+to+enter' \
https://demo.loris.ca/main.php
%%bash
curl -s https://demo.loris.ca/api/v0.0.3/login -d '{"username":"", "password": ""}'
%%bash
# Query the /candidates endpoint
# https://github.com/aces/Loris/blob/minor/docs/API/LorisRESTAPI.md#30-candidate-api
token=''
curl -k -s \
-H "Authorization: Bearer $token" \
https://demo.loris.ca/api/v0.0.3/candidates/
%%bash
# Query the /candidate_list module
token=''
curl -k -s \
-H "Authorization: Bearer $token" \
'https://demo.loris.ca/candidate_list/?format=json'
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: With an HTML parser user agent (browser)
Step2: LORIS API resources (endpoints)
Step3: Great, we have a token... now what?
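Once you have the token, every authenticated call just adds a `Authorization: Bearer <token>` header, exactly as the curl examples do. A stdlib sketch of the same `/candidates` request — the request is built but not sent here, and the token value is a placeholder:

```python
import urllib.request

token = "abc123"  # placeholder for a token returned by the /login endpoint

# Mirror the curl call: GET /api/v0.0.3/candidates/ with a Bearer token
req = urllib.request.Request(
    "https://demo.loris.ca/api/v0.0.3/candidates/",
    headers={"Authorization": "Bearer " + token},
)
print(req.get_header("Authorization"))  # Bearer abc123
# urllib.request.urlopen(req) would actually perform the call
```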
|
3,660
|
<ASSISTANT_TASK:>
Python Code:
### Your code implementation goes here.
### Feel free to use as many code cells as needed.
### Your code implementation goes here.
### Feel free to use as many code cells as needed.
### Your code implementation goes here.
### Feel free to use as many code cells as needed.
### Your code implementation goes here.
### Feel free to use as many code cells as needed.
### Your optional code implementation goes here.
### Feel free to use as many code cells as needed.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Question 1
Step2: Question 4
Step3: Question 7
Step4: Question 10
|
3,661
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image('diagrams/skip-gram.png')
from keras.preprocessing.sequence import skipgrams
from keras.preprocessing.text import Tokenizer, text_to_word_sequence
text1 = "I love deep learning."
text2 = "Read Douglas Adams as much as possible."
tokenizer = Tokenizer()
tokenizer.fit_on_texts([text1, text2])
word2id = tokenizer.word_index
word2id.items()
id2word = { wordid: word for word, wordid in word2id.items()}
id2word
encoded_text = [word2id[word] for word in text_to_word_sequence(text1)]
encoded_text
[word2id[word] for word in text_to_word_sequence(text2)]
sg = skipgrams(encoded_text, vocabulary_size=len(word2id.keys()), window_size=1)
sg
for i in range(len(sg[0])):
    print("({0},{1})={2}".format(id2word[sg[0][i][0]], id2word[sg[0][i][1]], sg[1][i]))
VOCAB_SIZE = len(word2id.keys())
VOCAB_SIZE
DENSEVEC_DIM = 50
import keras
from keras.layers.embeddings import Embedding
from keras.constraints import unit_norm
from keras.layers.merge import Dot
from keras.layers.core import Activation
from keras.layers.core import Flatten
from keras.layers import Input, Dense
from keras.models import Model
word1 = Input(shape=(1,), dtype='int64', name='word1')
word2 = Input(shape=(1,), dtype='int64', name='word2')
shared_embedding = Embedding(
input_dim=VOCAB_SIZE+1,
output_dim=DENSEVEC_DIM,
input_length=1,
embeddings_constraint = unit_norm(),
name='shared_embedding')
embedded_w1 = shared_embedding(word1)
embedded_w2 = shared_embedding(word2)
w1 = Flatten()(embedded_w1)
w2 = Flatten()(embedded_w2)
dotted = Dot(axes=1, name='dot_product')([w1, w2])
prediction = Dense(1, activation='sigmoid', name='output_layer')(dotted)
sg_model = Model(inputs=[word1, word2], outputs=prediction)
sg_model.compile(optimizer='adam', loss='mean_squared_error')
sg_model.layers
def print_layer(model, num):
    print(model.layers[num])
    print(model.layers[num].input_shape)
    print(model.layers[num].output_shape)
print_layer(sg_model,3)
import numpy as np
pairs = np.array(sg[0])
targets = np.array(sg[1])
targets
pairs
w1_list = np.reshape(pairs[:, 0], (len(pairs), 1))
w1_list
w2_list = np.reshape(pairs[:, 1], (len(pairs), 1))
w2_list
w2_list.shape
w2_list.dtype
sg_model.fit(x=[w1_list, w2_list], y=targets, epochs=10)
sg_model.layers[2].weights
MAX_FEATURES = 20000 # number of unique words in the dataset
MAXLEN = 400 # max word (feature) length of a review
EMBEDDING_DIMS = 50
NGRAM_RANGE = 2
def create_ngram_set(input_list, ngram_value=2):
    """Extract a set of n-grams from a list of integers."""
    return set(zip(*[input_list[i:] for i in range(ngram_value)]))
create_ngram_set([1, 2, 3, 4, 5], ngram_value=2)
create_ngram_set([1, 2, 3, 4, 5], ngram_value=3)
def add_ngram(sequences, token_indice, ngram_range=2):
    """Augment the input list of lists (sequences) by appending n-gram values."""
new_sequences = []
for input_list in sequences:
new_list = input_list[:]
for i in range(len(new_list) - ngram_range + 1):
for ngram_value in range(2, ngram_range + 1):
ngram = tuple(new_list[i:i + ngram_value])
if ngram in token_indice:
new_list.append(token_indice[ngram])
new_sequences.append(new_list)
return new_sequences
sequences = [[1,2,3,4,5, 6], [6,7,8]]
token_indice = {(1,2): 20000, (4,5): 20001, (6,7,8): 20002}
add_ngram(sequences, token_indice, ngram_range=2)
add_ngram(sequences, token_indice, ngram_range=3)
from keras.datasets import imdb
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=MAX_FEATURES)
x_train[0:2]
y_train[0:2]
ngram_set = set()
for input_list in x_train:
for i in range(2, NGRAM_RANGE + 1):
set_of_ngram = create_ngram_set(input_list, ngram_value=i)
ngram_set.update(set_of_ngram)
len(ngram_set)
ngram_set.pop()
start_index = MAX_FEATURES + 1
token_indice = {v: k + start_index for k, v in enumerate(ngram_set)}
indice_token = {token_indice[k]: k for k in token_indice}
import numpy as np
MAX_FEATURES = np.max(list(indice_token.keys())) + 1
MAX_FEATURES
x_train = add_ngram(x_train, token_indice, NGRAM_RANGE)
x_test = add_ngram(x_test, token_indice, NGRAM_RANGE)
from keras.preprocessing import sequence
sequence.pad_sequences([[1,2,3,4,5], [6,7,8]], maxlen=10)
x_train = sequence.pad_sequences(x_train, maxlen=MAXLEN)
x_test = sequence.pad_sequences(x_test, maxlen=MAXLEN)
x_train.shape
x_test.shape
Image('diagrams/fasttext.png')
from keras.models import Sequential
from keras.layers.embeddings import Embedding
from keras.layers.pooling import GlobalAveragePooling1D
from keras.layers import Dense
ft_model = Sequential()
ft_model.add(Embedding(
input_dim = MAX_FEATURES,
output_dim = EMBEDDING_DIMS,
input_length= MAXLEN))
ft_model.add(GlobalAveragePooling1D())
ft_model.add(Dense(1, activation='sigmoid'))
ft_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
ft_model.layers
print_layer(ft_model, 0)
print_layer(ft_model, 1)
print_layer(ft_model, 2)
ft_model.fit(x_train, y_train, batch_size=100, epochs=3, validation_data=(x_test, y_test))
Image('diagrams/text-cnn-classifier.png')
embedding_dim = 50 # we'll get a vector representation of words as a by-product
filter_sizes = (2, 3, 4) # we'll make one convolutional layer for each filter we specify here
num_filters = 10 # each layer will contain this many filters
dropout_prob = (0.2, 0.2)
hidden_dims = 50
# Prepossessing parameters
sequence_length = 400
max_words = 5000
from keras.datasets import imdb
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_words) # limits vocab to num_words
?imdb.load_data
from keras.preprocessing import sequence
x_train = sequence.pad_sequences(x_train, maxlen=sequence_length, padding="post", truncating="post")
x_test = sequence.pad_sequences(x_test, maxlen=sequence_length, padding="post", truncating="post")
x_train[0]
vocabulary = imdb.get_word_index() # word to integer map
vocabulary['good']
len(vocabulary)
from keras.models import Model
from keras.layers import Input
from keras.layers import Embedding
from keras.layers import Dropout
from keras.layers import Conv1D
from keras.layers import MaxPooling1D
from keras.layers import Flatten
from keras.layers import Dense
from keras.layers.merge import Concatenate
# Input, embedding, and dropout layers
input_shape = (sequence_length,)
model_input = Input(shape=input_shape)
z = Embedding(
input_dim=len(vocabulary) + 1,
output_dim=embedding_dim,
input_length=sequence_length,
name="embedding")(model_input)
z = Dropout(dropout_prob[0])(z)
# Convolutional block
# parallel set of n convolutions; output of all n are
# concatenated into one vector
conv_blocks = []
for sz in filter_sizes:
conv = Conv1D(filters=num_filters, kernel_size=sz, activation="relu" )(z)
conv = MaxPooling1D(pool_size=2)(conv)
conv = Flatten()(conv)
conv_blocks.append(conv)
z = Concatenate()(conv_blocks) if len(conv_blocks) > 1 else conv_blocks[0]
z = Dropout(dropout_prob[1])(z)
# Hidden dense layer and output layer
z = Dense(hidden_dims, activation="relu")(z)
model_output = Dense(1, activation="sigmoid")(z)
cnn_model = Model(model_input, model_output)
cnn_model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
cnn_model.layers
print_layer(cnn_model, 12)
cnn_model.fit(x_train, y_train, batch_size=64, epochs=3, validation_data=(x_test, y_test))
cnn_model.layers[1].weights
cnn_model.layers[1].get_weights()
cnn_model.layers[3].weights
Image('diagrams/LSTM.png')
from keras.models import Sequential
from keras.layers import Embedding
from keras.layers.core import SpatialDropout1D
from keras.layers.core import Dropout
from keras.layers.recurrent import LSTM
from keras.layers.core import Dense
hidden_dims = 50
embedding_dim = 50
lstm_model = Sequential()
lstm_model.add(Embedding(len(vocabulary) + 1, embedding_dim, input_length=sequence_length, name="embedding"))
lstm_model.add(SpatialDropout1D(Dropout(0.2)))
lstm_model.add(LSTM(hidden_dims, dropout=0.2, recurrent_dropout=0.2)) # first arg, like Dense, is dim of output
lstm_model.add(Dense(1, activation='sigmoid'))
lstm_model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
lstm_model.fit(x_train, y_train, batch_size=64, epochs=3, validation_data=(x_test, y_test))
lstm_model.layers
lstm_model.layers[2].input_shape
lstm_model.layers[2].output_shape
%matplotlib inline
import pandas as pd
import glob
datapath = "/Users/pfigliozzi/aclImdb/train/unsup"
files = glob.glob(datapath+"/*.txt")[:1000] #first 1000 (there are 50k)
df = pd.concat([pd.read_table(filename, header=None, names=['raw']) for filename in files], ignore_index=True)
df.raw.map(lambda x: len(x)).plot.hist()
50000. * 2000. / 10**6
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: skip-gram
Step2:
Step3: Note word id's are numbered from 1, not zero
Step4: Model parameters
Step5: Model build
Step6: Create a dense vector for each word in the pair. The output of Embedding has shape (batch_size, sequence_length, output_dim) which in our case is (batch_size, 1, DENSEVEC_DIM). We'll use Flatten to get rid of that pesky middle dimension (1), so going into the dot product we'll have shape (batch_size, DENSEVEC_DIM).
Step7: At this point you can check out how the data flows through your compiled model.
Step8: Let's try training it with our toy data set!
Step9: Continuous Bag of Words (CBOW) model
Step12: Some data prep functions lifted from the example
Step13: load canned training data
Step14: Add n-gram features
Step15: Assign id's to the new features
Step16: Update MAX_FEATURES
Step17: Add n-grams to the input data
Step18: Make all input sequences the same length by padding with zeros
Step19: FastText Model
Step20: fastText classifier vs. convolutional neural network (CNN) vs. long short-term memory (LSTM) classifier
Step21: Diagram from Convolutional Neural Networks for Sentence Classification, Kim Yoon (2014)
Step22: Canned input data
Step23: Model build
Step24: An LSTM sentence classifier
Step25: Appendix
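The positive-pair side of the skip-gram sampling from Step 1 can be sketched in plain Python — this mirrors what `keras.preprocessing.sequence.skipgrams` generates for the true pairs, with negative sampling and shuffling omitted:

```python
def skipgram_pairs(token_ids, window=1):
    """(target, context) pairs within a symmetric window around each token."""
    pairs = []
    for i, target in enumerate(token_ids):
        lo, hi = max(0, i - window), min(len(token_ids), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((target, token_ids[j]))
    return pairs

print(skipgram_pairs([1, 2, 3], window=1))
# [(1, 2), (2, 1), (2, 3), (3, 2)]
```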
|
3,662
|
<ASSISTANT_TASK:>
Python Code:
from astropy.io import ascii
data = ascii.read('./XTE_J1550_564_30191011500A_2_13kev_001s_0_2505s.txt')
time = data['col1']
rate = data['col2']
dt = time[1] - time[0]
from hhtpywrapper.eemd import EEMD
eemd_post_processing = EEMD(rate, 6.0, 100, num_imf=10, seed_no=4, post_processing=True)
eemd_post_processing.get_oi()
%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
tstart = int(np.fix(40 / dt))
tend = int(np.fix(50 / dt))
hi_noise = np.sum(eemd_post_processing.imfs[:,:2], axis=1)
c3 = eemd_post_processing.imfs[:,2]
c4 = eemd_post_processing.imfs[:,3]
c5 = eemd_post_processing.imfs[:,4]
low_noise = np.sum(eemd_post_processing.imfs[:,5:], axis=1)
plt.figure()
plt.subplot(611)
plt.plot(time[tstart:tend], rate[tstart:tend]/1000, 'k')
plt.xticks([])
plt.yticks([0, 20, 40])
plt.xlim([40, 50])
plt.ylabel('Data')
plt.subplot(612)
plt.plot(time[tstart:tend], hi_noise[tstart:tend]/1000, 'k')
plt.xticks([])
plt.yticks([-10, 0, 10])
plt.xlim([40, 50])
plt.ylabel(r'$c_{1} : c_{2}$')
plt.subplot(613)
plt.plot(time[tstart:tend], c3[tstart:tend]/1000, 'k')
plt.xticks([])
plt.yticks([-5, 0, 5])
plt.ylabel(r'$c_{3}$')
plt.xlim([40, 50])
plt.subplot(614)
plt.plot(time[tstart:tend], c4[tstart:tend]/1000, 'r')
plt.xticks([])
plt.yticks([-10, 0, 10])
plt.xlim([40, 50])
plt.ylabel(r'$c_{4}$')
plt.subplot(615)
plt.plot(time[tstart:tend], c5[tstart:tend]/1000, 'k')
plt.xticks([])
plt.yticks([-5, 0, 5])
plt.xlim([40, 50])
plt.ylabel(r'$c_{5}$')
plt.subplot(616)
plt.plot(time[tstart:tend], low_noise[tstart:tend]/1000, 'k')
plt.yticks([10, 15, 20, 25])
plt.xticks(np.arange(40,51))
plt.xlim([40, 50])
plt.xlabel('Time (s)')
plt.ylabel(r'$c_{6}$ : residual')
plt.show()
from hhtpywrapper.hsa import HSA
# Obtaining the instantaneous frequency and amplitude of the IMF c4 by Hilbert transform
ifa = HSA(c4, dt)
iamp = ifa.iamp
ifreq = ifa.ifreq
# Plot the IMF C4 and its instantaneous frequency and amplitude
plt.figure()
plt.subplot(411)
plt.plot(time[tstart:tend], rate[tstart:tend], 'k')
plt.xticks([])
plt.xlim([40, 50])
plt.ylabel('Data')
plt.subplot(412)
plt.plot(time[tstart:tend], c4[tstart:tend], 'k')
plt.xticks([])
plt.xlim([40, 50])
plt.ylabel(r'$c_{4}$')
plt.subplot(413)
plt.plot(time[tstart:tend], iamp[tstart:tend], 'b')
plt.xticks([])
plt.xlim([40, 50])
plt.ylabel('Amplitude')
plt.subplot(414)
plt.plot(time[tstart:tend], ifreq[tstart:tend], 'r')
plt.ylabel('Frequency (Hz)')
plt.xticks(np.arange(40,51))
plt.xlim([40, 50])
plt.xlabel('Time (s)')
plt.show()
# Plot the Hilbert spectrum
ifa.plot_hs(time, trange=[40, 50], frange=[1.0, 10], tres=1000, fres=1000, hsize=10, sigma=2, colorbar='amplitude')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Running EEMD of the QPO signal and checking the orthogonality of the IMF components
Step2: Reproducing Figure 2 in Su et al. 2015
Step3: Hilbert spectral analysis
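The instantaneous amplitude and frequency that HSA extracts come from the analytic signal: amplitude is its modulus, frequency the derivative of its unwrapped phase. A self-contained check on a pure 5 Hz tone, building the analytic signal directly rather than via the Hilbert transform of a real IMF:

```python
import numpy as np

fs = 1000.0                          # sampling rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)
f0 = 5.0

# Analytic signal of a 5 Hz tone; HSA derives the same object from a
# real-valued IMF via the Hilbert transform
analytic = np.exp(2j * np.pi * f0 * t)
iamp = np.abs(analytic)                        # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
ifreq = np.diff(phase) / (2.0 * np.pi) * fs    # instantaneous frequency (Hz)

print(round(float(iamp.mean()), 6), round(float(ifreq.mean()), 6))  # 1.0 5.0
```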
|
3,663
|
<ASSISTANT_TASK:>
Python Code:
!polyglot --help
!polyglot --lang en tokenize --input testdata/cricket.txt | head -n 3
!polyglot tokenize --input testdata/cricket.txt | head -n 3
!polyglot count --help
!polyglot count --input testdata/cricket.txt --min-count 2
!polyglot --log debug --workers 5 count --input testdata/cricket.txt testdata/cricket.txt --min-count 3
!polyglot count --input testdata/cricket.txt | tail -n 10
!polyglot --lang en tokenize --input testdata/cricket.txt | polyglot count --min-count 2
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Notice that most of the operations are language specific.
Step2: In case the user did not supply the language code, polyglot will peek ahead and read the first 1KB of data to detect the language used in the input.
Step3: Input formats
Step4: To avoid long output, we will restrict the count to the words that appeared at least twice
Step5: Let us consider the scenario where we have hundreds of files that contains words we want to count.
Step6: Building Pipelines
Step7: Observe that words like "2007." could have been considered two words "2007" and "." and the same for "Africa's".
|
3,664
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
from sympy import init_printing
from sympy import S
from sympy import sin, cos, tanh, exp, pi, sqrt
from boutdata.mms import x, y, z, t
from boutdata.mms import Delp2, DDX, DDY, DDZ
import os, sys
# If we add to sys.path, then it must be an absolute path
common_dir = os.path.abspath('./../../../../common')
# Sys path is a list of system paths
sys.path.append(common_dir)
from CELMAPy.MES import get_metric, make_plot, BOUT_print
init_printing()
folder = '../twoGaussians/'
metric = get_metric()
# Initialization
the_vars = {}
# We need Lx
from boututils.options import BOUTOptions
myOpts = BOUTOptions(folder)
Lx = eval(myOpts.geom['Lx'])
# Two normal gaussians
# The gaussian
# In cartesian coordinates we would like
# f = exp(-(1/(2*w^2))*((x-x0)^2 + (y-y0)^2))
# In cylindrical coordinates, this translates to
# f = exp(-(1/(2*w^2))*(x^2 + y^2 + x0^2 + y0^2 - 2*(x*x0+y*y0) ))
# = exp(-(1/(2*w^2))*(rho^2 + rho0^2 - 2*rho*rho0*(cos(theta)*cos(theta0)+sin(theta)*sin(theta0)) ))
# = exp(-(1/(2*w^2))*(rho^2 + rho0^2 - 2*rho*rho0*(cos(theta - theta0)) ))
w = 0.8*Lx
rho0 = 0.3*Lx
theta0 = 5*pi/4
the_vars['phi'] = exp(-(1/(2*w**2))*(x**2 + rho0**2 - 2*x*rho0*(cos(z - theta0)) ))
w = 0.5*Lx
rho0 = 0.2*Lx
theta0 = 0
the_vars['S_n'] = exp(-(1/(2*w**2))*(x**2 + rho0**2 - 2*x*rho0*(cos(z - theta0)) ))
the_vars['S'] = the_vars['S_n']*Delp2(the_vars['phi'], metric=metric)\
+ metric.g11*DDX(the_vars['S_n'], metric=metric)*DDX(the_vars['phi'], metric=metric)\
+ metric.g33*DDZ(the_vars['S_n'], metric=metric)*DDZ(the_vars['phi'], metric=metric)
make_plot(folder=folder, the_vars=the_vars, plot2d=True, include_aux=False)
BOUT_print(the_vars, rational=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initialize
Step2: Define the variables
Step3: Define manifactured solutions
Step4: Calculate the solution
Step5: Plot
Step6: Print the variables in BOUT++ format
|
3,665
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import scipy as scipy
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib import rc
# set to use tex, but make sure it is sans-serif fonts only
rc('text', usetex=True)
rc('text.latex', preamble=r'\usepackage{cmbright}')
rc('font', **{'family': 'sans-serif', 'sans-serif': ['Helvetica']})
# Magic function to make matplotlib inline;
# other style specs must come AFTER
%matplotlib inline
# This enables SVG graphics inline.
# There is a bug, so uncomment if it works.
%config InlineBackend.figure_formats = {'png', 'retina'}
# JB's favorite Seaborn settings for notebooks
rc = {'lines.linewidth': 2,
'axes.labelsize': 18,
'axes.titlesize': 18,
'axes.facecolor': 'DFDFE5'}
sns.set_context('notebook', rc=rc)
sns.set_style("dark")
mpl.rcParams['xtick.labelsize'] = 16
mpl.rcParams['ytick.labelsize'] = 16
mpl.rcParams['legend.fontsize'] = 14
x = np.random.normal(10, .5, 10)
y = np.random.standard_t(2.7, 10) + 8.5 # shift by 8.5 so mean(y) is approximately 8.5
# make two of the mutant data points outliers to bring the samples closer together
y[0:2] = np.random.normal(13, 1, 2)
# place the data in a dataframe
data = np.vstack([x, y]).transpose()
df = pd.DataFrame(data, columns=['wt', 'mutant'])
# tidy the dataframe, so each row is 1 observation
tidy = pd.melt(df, var_name = 'genotype', value_name='expression')
tidy.head()
sns.boxplot(x='genotype', y='expression', data=tidy)
sns.swarmplot(x='genotype', y='expression', data=tidy, size=7, color='#36454F')
plt.gca().axhline(10, xmin=0, xmax=0.5, color='blue', label='True WT Mean', lw=3)
plt.gca().axhline(8.5, xmin=0.5, xmax=1, color='green', label='True MT Mean', lw=3)
plt.legend()
plt.title('Difference in Expression between a (fictitious) WT and Mutant')
def non_parametric_bootstrap(x, f, nsim=1000, **kwargs):
"""
Params:
x - data (numpy array)
f - statistic to calculate on each bootstrap resample
nsim - number of simulations to run
"""
statistic = np.zeros(nsim)
for i in range(nsim):
# simulate x
indices = np.random.randint(0, len(x), len(x))
X = x[indices]
X += np.random.normal(0, 0.05, len(x))
statistic[i] = f(X, **kwargs)
return statistic
wt = tidy[tidy.genotype == 'wt'].expression.values
mt = tidy[tidy.genotype == 'mutant'].expression.values
meanx = non_parametric_bootstrap(wt, np.mean)
meany = non_parametric_bootstrap(mt, np.mean)
sns.distplot(meanx, label='Bootstrapped Mean WT')
sns.distplot(meany, label='Bootstrapped Mean Mut')
# plt.gca().axvline(logp.mean(), ls='--', color='k', label='mean pval')
# plt.gca().axvline(-np.log(0.05)/np.log(10), ls='--', color='r', label='statistical significance')
# plt.xlabel('$-\log_{10}{p}$')
plt.ylabel('Probability Density')
plt.title('Bootstrapped Mean Values')
plt.legend()
def print_mean_and_confidence_intervals(btstrp):
btstrp = np.sort(btstrp)
mean = btstrp.mean()
message = "Mean = {0:.2g}; CI = [{1:.2g}, {2:.2g}]"
five = int(np.floor(0.05*len(btstrp)))
ninetyfive = int(np.floor(0.95*len(btstrp)))
print(message.format(mean, btstrp[five], btstrp[ninetyfive]))
print('Wild-type:')
print_mean_and_confidence_intervals(meanx)
print('Mutant:')
print_mean_and_confidence_intervals(meany)
# scipy.stats.mannwhitneyu returns two things: a statistic value
# and a p-value. Choose the p-value by selecting the 2nd entry:
pvalue = scipy.stats.mannwhitneyu(wt, mt)[1]
if pvalue < 0.05:
print('We reject the null hypothesis with a p-value of {0:.2g}'.format(pvalue))
else:
print('We fail to reject the null hypothesis with a p-value of {0:.2g}'.format(pvalue))
def difference_of_means(x, y):
"""Calculate the difference in the means of two datasets x and y. Returns a scalar equal to mean(y) - mean(x)."""
return np.mean(y) - np.mean(x)
def test_null(x, y, statistic, iters=1000):
"""Given two datasets, test a null hypothesis using a permutation test for a given statistic.
Params:
x, y -- ndarrays, the data
statistic -- a function of x and y
iters -- number of times to bootstrap
Output:
a numpy array containing the bootstrapped statistic
"""
def permute(x, y):
"""Given two datasets, return randomly shuffled versions of them."""
# concatenate the data
new = np.concatenate([x, y])
# shuffle the data
np.random.shuffle(new)
# return the permuted data sets:
return new[:len(x)], new[len(x):]
# do the bootstrap
return np.array([statistic(*permute(x, y)) for _ in range(iters)])
diff = test_null(wt, mt, difference_of_means, iters=10**5)
sns.distplot(diff)
plt.axvline(mt.mean() - wt.mean(), color='red',label='Observed Difference')
plt.title('Bootstrapped Difference in Sample Means')
plt.xlabel('Difference in Means')
plt.ylabel('Density')
plt.legend()
pvalue = len(diff[diff < mt.mean() - wt.mean()])/len(diff)
print('The p-value for these samples is {0:.2g}'.format(pvalue))
if pvalue < 0.05:
print('We can reject the null hypothesis that the means are equal between both samples')
else:
print('We cannot reject the null hypothesis that the means are equal between both samples')
def difference_of_variance(x, y):
"""Calculate the difference in variance between x and y."""
return np.std(y)**2 - np.std(x)**2
diff_vars = test_null(wt - wt.mean(), mt - mt.mean(), difference_of_variance, iters=10**5)
sns.distplot(diff_vars)
plt.axvline(mt.std()**2 - wt.std()**2, color='red',label='Observed Difference')
plt.title('Bootstrapped Difference in Sample Variances')
plt.xlabel('Difference in Variances')
plt.ylabel('Density')
plt.legend()
pvalue = len(diff_vars[diff_vars > mt.std()**2 - wt.std()**2])/len(diff_vars)
print('The p-value for these samples is {0:.2g}'.format(pvalue))
if pvalue < 0.05:
print('We can reject the null hypothesis that the variances are equal between both samples')
else:
print('We cannot reject the null hypothesis that the variances are equal between both samples')
x = np.random.normal(10, .5, 10)
y = np.random.standard_t(2.7, 10) + 8.5 # shift by 8.5 so mean(y) is approximately 8.5
data = np.vstack([x, y]).transpose()
df = pd.DataFrame(data, columns=['wt', 'mutant'])
# tidy:
tidy = pd.melt(df, var_name = 'genotype', value_name='expression')
sns.boxplot(x='genotype', y='expression', data=tidy)
sns.swarmplot(x='genotype', y='expression', data=tidy, size=7, color='#36454F')
plt.title('Data Without Outliers')
# calculate the differences
wt = tidy[tidy.genotype == 'wt'].expression.values
mt = tidy[tidy.genotype == 'mutant'].expression.values
diff = test_null(wt, mt, difference_of_means, iters=10**5)
pvalue = len(diff[diff < mt.mean() - wt.mean()])/len(diff)
print('Wild-type Mean')
print_mean_and_confidence_intervals(non_parametric_bootstrap(wt, np.mean))
print('Mutant-type Mean')
print_mean_and_confidence_intervals(non_parametric_bootstrap(mt, np.mean))
if pvalue < 0.05:
print('We can reject the hypothesis that the means are equal between both samples with a p = {0:.2g}'.format(pvalue))
else:
print('We cannot reject the hypothesis that the means are equal between both samples (p = {0:.2g})'.format(pvalue))
# do the same for the variance:
diff_vars = test_null(wt - wt.mean(), mt - mt.mean(), difference_of_variance, iters=10**5)
pvalue = len(diff_vars[diff_vars > mt.std()**2 - wt.std()**2])/len(diff_vars)
print('Wild-type Variance:')
print_mean_and_confidence_intervals(non_parametric_bootstrap(wt, np.var))
print('Mutant-type Variance:')
print_mean_and_confidence_intervals(non_parametric_bootstrap(mt, np.var))
if pvalue < 0.05:
print('We can reject the hypothesis that the variances are equal between both samples with a p = {0:.2g}'.format(pvalue))
else:
print('We cannot reject the hypothesis that the variances are equal between both samples (p = {0:.2g})'.format(pvalue))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generating Synthetic Data
Step3: From these results, we can already draw a number of tentative observations. Namely, the mutant data has a different spread from the wild-type data.
Step4: Using our non_parametric_bootstrap function, we can now go ahead and ask what the mean would be if we repeated this experiment many many times. How much would our guess of the mean vary each time?
Step5: Let's plot it
Step6: We can see that the wild-type sample mean is tightly centered around 10, whereas the mutant has a large spread centered at 9.5. Clearly, we are much more confident in the mean of the wild-type than in the mean of the mutant. We can quantify this by using the bootstrap to formally calculate our confidence intervals.
Step7: The Mann-Whitney U test for equality of distributions
Step11: We can reject the null hypothesis that the two distributions are the same. What does this mean? That is a very complex statement. Maybe we would prefer simpler statements about the mean, the median or the variance of the distributions. How can we get at these simpler questions?
Step12: Run the computation!
Step13: Let's plot it
Step15: Using non-parametric bootstraps, we can even ask whether the variances are statistically significantly different between the two groups
Step16: Repeating the analysis for a different sample is straightforward
Step17: Ask whether the null hypothesis that the means are the same can be rejected
Step18: Ask whether we can reject the null hypothesis that the variances are the same between the two samples
|
3,666
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -qq blackjax
!pip install -qq git+https://github.com/blackjax-devs/blackjax
import jax
import jax.numpy as jnp
import matplotlib.pyplot as plt
from jax.scipy.stats import multivariate_normal
jax.config.update("jax_platform_name", "cpu")
try:
from blackjax.hmc import kernel as hmc_kernel
except ModuleNotFoundError:
%pip install -qq blackjax
from blackjax.hmc import kernel as hmc_kernel
from blackjax.hmc import new_state as hmc_state
from blackjax.inference.smc.resampling import systematic
from blackjax.nuts import kernel as nuts_kernel
from blackjax.nuts import new_state as nuts_state
from blackjax.tempered_smc import TemperedSMCState, adaptive_tempered_smc
def V(x):
return 5 * jnp.square(jnp.sum(x**2) - 1)
def prior_log_prob(x):
d = x.shape[0]
return multivariate_normal.logpdf(x, jnp.zeros((d,)), jnp.eye(d))
linspace = jnp.linspace(-2, 2, 5000).reshape(-1, 1)
lambdas = jnp.array([1])
prior_logvals = jnp.vectorize(prior_log_prob, signature="(d)->()")(linspace)
potential_vals = jnp.vectorize(V, signature="(d)->()")(linspace)
log_res = prior_logvals.reshape(1, -1) - jnp.expand_dims(lambdas, 1) * potential_vals.reshape(1, -1)
density = jnp.exp(log_res)
normalizing_factor = jnp.sum(density, axis=1, keepdims=True) * (linspace[1] - linspace[0])
density /= normalizing_factor
fig, ax = plt.subplots(figsize=(12, 8))
ax.plot(linspace.squeeze(), density.T)
ax.legend(list(lambdas))
plt.savefig("bimodal-target.pdf")
linspace = jnp.linspace(-2, 2, 5000).reshape(-1, 1)
lambdas = jnp.linspace(0.0, 1.0, 5)
prior_logvals = jnp.vectorize(prior_log_prob, signature="(d)->()")(linspace)
potential_vals = jnp.vectorize(V, signature="(d)->()")(linspace)
log_res = prior_logvals.reshape(1, -1) - jnp.expand_dims(lambdas, 1) * potential_vals.reshape(1, -1)
density = jnp.exp(log_res)
normalizing_factor = jnp.sum(density, axis=1, keepdims=True) * (linspace[1] - linspace[0])
density /= normalizing_factor
fig, ax = plt.subplots(figsize=(12, 8))
ax.plot(linspace.squeeze(), density.T)
ax.legend(list(lambdas))
plt.savefig("bimodal-tempered.pdf")
def inference_loop(rng_key, mcmc_kernel, initial_state, num_samples):
def one_step(state, k):
state, _ = mcmc_kernel(k, state)
return state, state
keys = jax.random.split(rng_key, num_samples)
_, states = jax.lax.scan(one_step, initial_state, keys)
return states
def full_logprob(x):
return -V(x) + prior_log_prob(x)
inv_mass_matrix = jnp.eye(1)
n_samples = 10_000
%%time
key = jax.random.PRNGKey(42)
hmc_params = dict(step_size=1e-4, inverse_mass_matrix=inv_mass_matrix, num_integration_steps=50)
hmc_kernel_instance = jax.jit(hmc_kernel(full_logprob, **hmc_params))
initial_hmc_state = hmc_state(jnp.ones((1,)), full_logprob)
hmc_samples = inference_loop(key, hmc_kernel_instance, initial_hmc_state, n_samples)
_ = plt.hist(hmc_samples.position[:, 0], bins=100, density=True)
_ = plt.plot(linspace.squeeze(), density[-1])
plt.savefig("bimodal-hmc.pdf")
%%time
key = jax.random.PRNGKey(42)
nuts_params = dict(step_size=1e-4, inverse_mass_matrix=inv_mass_matrix)
nuts_kernel_instance = jax.jit(nuts_kernel(full_logprob, **nuts_params))
initial_nuts_state = nuts_state(jnp.ones((1,)), full_logprob)
nuts_samples = inference_loop(key, nuts_kernel_instance, initial_nuts_state, n_samples)
_ = plt.hist(nuts_samples.position[:, 0], bins=100, density=True)
_ = plt.plot(linspace.squeeze(), density[-1])
plt.savefig("bimodal-nuts.pdf")
%%time
hmc_factory_params = dict(step_size=1e-4, inverse_mass_matrix=inv_mass_matrix, num_integration_steps=1)
mcmc_kernel_factory = lambda logprob_fun: hmc_kernel(logprob_fun, **hmc_factory_params)
loglikelihood = lambda x: -V(x)
smc_kernel_instance = jax.jit(
adaptive_tempered_smc(
prior_log_prob,
loglikelihood,
mcmc_kernel_factory,
hmc_state,
systematic,
0.5,
mcmc_iter=1,
)
)
initial_smc_state = jax.random.multivariate_normal(jax.random.PRNGKey(0), jnp.zeros([1]), jnp.eye(1), (n_samples,))
initial_smc_state = TemperedSMCState(initial_smc_state, 0.0)
def smc_inference_loop(rng_key, smc_kernel, initial_state):
"""Run the tempered SMC algorithm.
We run the adaptive algorithm until the tempering parameter lambda reaches the value
lambda=1.
"""
def cond(carry):
i, state, _k = carry
return state.lmbda < 1
def one_step(carry):
i, state, k = carry
k, subk = jax.random.split(k, 2)
state, _ = smc_kernel(subk, state)
return i + 1, state, k
n_iter, final_state, _ = jax.lax.while_loop(cond, one_step, (0, initial_state, rng_key))
return n_iter, final_state
key = jax.random.PRNGKey(42)
n_iter, smc_samples = smc_inference_loop(key, smc_kernel_instance, initial_smc_state)
print("Number of steps in the adaptive algorithm: ", n_iter.item())
_ = plt.hist(smc_samples.particles[:, 0], bins=100, density=True)
_ = plt.plot(linspace.squeeze(), density[-1])
plt.savefig("bimodal-smc-tempered.pdf")
def cond_fun(val):
i, buffer = val
return i < 5
def body_fun(val):
i, buffer = val
i = i + 1
cur = buffer["current"]
buffer["data"] = buffer["data"].at[cur].set(i)
buffer["current"] = cur + 1
return (i, buffer)
buffer_size = 100
buffer = {"data": jnp.zeros((buffer_size,)), "current": 0}
init_val = (0.0, buffer)
_, buffer = jax.lax.while_loop(cond_fun, body_fun, init_val)
print(buffer)
def cond_fun(val):
i, buffer = val
return (i < 5) & (buffer["current"] < buffer_size)
def body_fun(val):
i, buffer = val
i = i + 1
cur = buffer["current"]
buffer["data"] = buffer["data"].at[cur].set(i)
buffer["current"] = cur + 1
return (i, buffer)
buffer_size = 3
intermediates = []
i = 0.0
while i < 5:
buffer = {"data": jnp.zeros((buffer_size,)), "current": 0}
i, buffer = jax.lax.while_loop(cond_fun, body_fun, (i, buffer))
intermediates += [buffer["data"]]
print(jnp.concatenate(intermediates))
%%time
hmc_factory_params = dict(step_size=1e-4, inverse_mass_matrix=inv_mass_matrix, num_integration_steps=1)
mcmc_kernel_factory = lambda logprob_fun: hmc_kernel(logprob_fun, **hmc_factory_params)
loglikelihood = lambda x: -V(x)
smc_kernel_instance = jax.jit(
adaptive_tempered_smc(
prior_log_prob,
loglikelihood,
mcmc_kernel_factory,
hmc_state,
systematic,
0.5,
mcmc_iter=1,
)
)
initial_smc_state = jax.random.multivariate_normal(jax.random.PRNGKey(0), jnp.zeros([1]), jnp.eye(1), (n_samples,))
initial_smc_state = TemperedSMCState(initial_smc_state, 0.0)
def smc_inference_loop(rng_key, smc_kernel, initial_state):
"""Run the tempered SMC algorithm.
We run the adaptive algorithm until the tempering parameter lambda reaches the value
lambda=1.
"""
def cond(carry):
i, state, key, buffer = carry
return state.lmbda < 1
def one_step(carry):
i, state, key, buffer = carry
key, subk = jax.random.split(key, 2)
state, _ = smc_kernel(subk, state)
# lambdas.append(state.lmbda)
cur = buffer["current"]
buffer["data"] = buffer["data"].at[cur].set(state.lmbda)
# buffer['data'] = buffer['data'].at[cur].set(i)
buffer["current"] = cur + 1
return i + 1, state, key, buffer
# lambdas = []
# use fixed size array to keep track of lambdas
buffer_size = 100
buffer = {"data": jnp.zeros((buffer_size,)), "current": 0}
n_iter, final_state, key, buffer = jax.lax.while_loop(cond, one_step, (0, initial_state, rng_key, buffer))
return n_iter, final_state, buffer["data"][:n_iter]
n_iter, smc_samples, lambdas = smc_inference_loop(key, smc_kernel_instance, initial_smc_state)
print(lambdas)
_ = plt.hist(smc_samples.particles[:, 0], bins=100, density=True)
_ = plt.plot(linspace.squeeze(), density[-1])
plt.savefig("bimodal-smc-tempered.pdf")
plt.figure()
plt.plot(lambdas)
plt.xlabel("t")
plt.ylabel("tempering exponent")
plt.savefig("bimodal-smc-exponent-vs-time.pdf")
smc_samples
smc_samples.particles.shape
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Target distribution
Step2: Tempered distribution
Step3: HMC
Step4: NUTS
Step6: SMC
Step7: SMC modified
Step8: If necessary, we can grow the buffer on demand.
Step10: Now we return to SMC.
Step11: Let's track the adaptive temperature.
|
3,667
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-2', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
|
3,668
|
<ASSISTANT_TASK:>
Python Code:
print("Calculation: %.3e"%(k_b * m_star/ (pi * hbar**2)))
print("Hard-Coded: %.3e"%nu0)
n_e = 3e15
E_f = E_fermi(n_e)
print('Fermi energy is: %.3f'%E_f)
eps = np.linspace(0, 500, 10000)
dens = np.ones(len(eps))
plt.plot (eps, dens)
plt.xlabel (r'$\epsilon$ (K)')
plt.ylabel (r'Reduced Density of states')
T_arr = np.linspace (0.001, 10, 20)
mu_arr = np.array([get_mu_at_T([eps, dens], T, n_e=n_e) for T in T_arr])
plt.plot(T_arr, mu_arr, 'o-b', mec = 'b')
plt.xlabel ('T (K)')
plt.ylabel ('$\mu$ (K)')
plt.ylim (0, 150)
v_f = v_fermi(n_e)
tau_tr = m_star * 1e2/q_e
print("%.3f ps"%(tau_tr*1e12))
#print("sigma_DC = %.3f"%sigma_DC(0, tau_tr, v_f, q=q_e))
C_arr = np.array([specific_heat([eps, dens], T, mu=E_f, n_e=n_e) for T in T_arr])
plt.plot(T_arr, k_b * C_arr * 1e-6 * 1e15)
plt.xlabel ("T (K)")
plt.ylabel (r"C ($\rm fJ/mm^2$)")
C_0 = k_b * specific_heat([eps, dens], 1)
print('Numerical: C/T = %.5e k_b'%C_0)
print("Analytical: C/T = %.5e k_b"%(pi * m_star * k_b **2 / (3 * hbar **2)))
print("Or in units of k_b/electron, for n_e = 3e15/m^2: C/T = %.5f "%(C_0/k_b/n_e))
B = 0.5
tau_q = 5e-12
eps_B = np.linspace(0, 200, 10000)
eps_B, dens_B = generate_DOS(B, tau_q, eps=eps_B)
plt.plot (eps_B, dens_B)
plt.xlabel (r'$\epsilon$ (K)')
plt.ylabel ('reduced DOS')
get_mu_at_T ([eps, dens], 1, n_e)
k_b * specific_heat([eps, dens], 1, n_e = 3.05e15)
k_b * specific_heat([eps, dens_B], 1, mu = 1)
mu_arr_B = np.linspace (100, 150, 200)
C_arr_B = np.array([specific_heat([eps_B, dens_B], 0.1, mu, dT_frac = 0.001) for mu in mu_arr_B])
plt.plot (mu_arr_B * k_b / q_e * 1e3, k_b * C_arr_B *1e-6 * 1e15)
plt.xlabel (r'$\mu$ ($ \rm meV$)')
plt.ylabel (r'C ($\rm fJ\cdot K^{-1} \cdot mm^{-2}$)')
B_arr = np.linspace (0.01, 0.8, 1000)
T = 0.1
eps = generate_eps( T_low=T, T_high=T, n_e=n_e)
C_arr_B = np.array([specific_heat(generate_DOS(B_val, tau_q/5, eps=eps),
T, E_f, dT_frac = 0.001) for B_val in B_arr])
plt.plot (B_arr, C_arr_B/n_e)
plt.xlabel (r'B (T))')
plt.ylabel (r'C ($k_b {\rm \, per\, electron}$)')
plt.xlim(0, np.amax(B_arr))
plt.annotate("C = %.5f"%(C_0/k_b/n_e*T), [0, C_0/k_b/n_e*T], xytext =[0.1, 0.005],
arrowprops=dict(facecolor='black', shrink=0.05))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Zero field calculations
Step2: For now, let's choose $\epsilon \,$ to span from 0 to 500, with 10000 points. Later, we can be more clever in order to perform faster calculations. Because the code uses the reduced DOS, we just need to provide a numpy array of ones as our DOS to perform calculations for zero field.
Step3: We can calculate the chemical potential, $\mu$, as a function of temperature. We expect it to be constant and equal to $E_f$ for $T << E_f$.
Step4: Conductivity
Step5: Specific heat
Step6: Note that the calculation of C fails at T = 0, since there is a divide-by-zero error in the way it is calculated. However, you could take arbitrarily small values of T and find the limit as T approaches zero.
Step7: Calculations in a Magnetic Field
Step8: From here, we can easily calculate specific heat and conductivity.
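As a self-contained sanity check on the low-temperature limit mentioned in Step 6, the sketch below works in reduced units (k_b = 1) and does not use the notebook's own `specific_heat` helper; the choice of mu = 100, the energy grid, and the finite-difference step dT are illustrative assumptions. For a constant 2D density of states g, numerically differentiating the internal energy at fixed chemical potential recovers the Sommerfeld result C/T → (π²/3) g:

```python
import numpy as np

def fermi(eps, mu, T):
    # Fermi-Dirac occupation, written with tanh to avoid exp overflow
    return 0.5 * (1.0 - np.tanh((eps - mu) / (2.0 * T)))

def heat_capacity(T, mu=100.0, g=1.0, dT=1e-2):
    # C = dU/dT at fixed mu, with U = integral of g * eps * f(eps) d(eps)
    eps = np.linspace(0.0, 10.0 * mu, 200001)
    def U(t):
        y = g * eps * fermi(eps, mu, t)
        return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(eps))  # trapezoid rule
    return (U(T + dT) - U(T - dT)) / (2.0 * dT)

# Sommerfeld limit: C/T -> (pi^2 / 3) * g for T << mu (reduced units, k_b = 1)
for T in (0.5, 1.0):
    print(T, heat_capacity(T) / T, np.pi**2 / 3.0)
```

For T ≪ μ the printed ratios agree with π²/3 ≈ 3.29 to well under a percent, mirroring the numerical-vs-analytical comparison done in the cell above.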
|
3,669
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%pdb off
# set DISPLAY = True when running tutorial
DISPLAY = False
# set PARALLELIZE to true if you want to use ipyparallel
PARALLELIZE = False
import warnings
warnings.filterwarnings('ignore')
import deepchem as dc
from deepchem.utils import download_url
import os
data_dir = dc.utils.get_data_dir()
dataset_file = os.path.join(data_dir, "pdbbind_core_df.csv.gz")
if not os.path.exists(dataset_file):
print('File does not exist. Downloading file...')
download_url("https://s3-us-west-1.amazonaws.com/deepchem.io/datasets/pdbbind_core_df.csv.gz")
print('File downloaded...')
raw_dataset = dc.utils.save.load_from_disk(dataset_file)
print("Type of dataset is: %s" % str(type(raw_dataset)))
print(raw_dataset[:5])
print("Shape of dataset is: %s" % str(raw_dataset.shape))
import nglview
import tempfile
import os
import mdtraj as md
import numpy as np
import deepchem.utils.visualization
#from deepchem.utils.visualization import combine_mdtraj, visualize_complex, convert_lines_to_mdtraj
def combine_mdtraj(protein, ligand):
chain = protein.topology.add_chain()
residue = protein.topology.add_residue("LIG", chain, resSeq=1)
for atom in ligand.topology.atoms:
protein.topology.add_atom(atom.name, atom.element, residue)
protein.xyz = np.hstack([protein.xyz, ligand.xyz])
protein.topology.create_standard_bonds()
return protein
def visualize_complex(complex_mdtraj):
ligand_atoms = [a.index for a in complex_mdtraj.topology.atoms if "LIG" in str(a.residue)]
binding_pocket_atoms = md.compute_neighbors(complex_mdtraj, 0.5, ligand_atoms)[0]
binding_pocket_residues = list(set([complex_mdtraj.topology.atom(a).residue.resSeq for a in binding_pocket_atoms]))
binding_pocket_residues = [str(r) for r in binding_pocket_residues]
binding_pocket_residues = " or ".join(binding_pocket_residues)
traj = nglview.MDTrajTrajectory( complex_mdtraj ) # load file from RCSB PDB
ngltraj = nglview.NGLWidget( traj )
ngltraj.representations = [
{ "type": "cartoon", "params": {
"sele": "protein", "color": "residueindex"
} },
{ "type": "licorice", "params": {
"sele": "(not hydrogen) and (%s)" % binding_pocket_residues
} },
{ "type": "ball+stick", "params": {
"sele": "LIG"
} }
]
return ngltraj
def visualize_ligand(ligand_mdtraj):
traj = nglview.MDTrajTrajectory( ligand_mdtraj ) # load file from RCSB PDB
ngltraj = nglview.NGLWidget( traj )
ngltraj.representations = [
{ "type": "ball+stick", "params": {"sele": "all" } } ]
return ngltraj
def convert_lines_to_mdtraj(molecule_lines):
molecule_lines = molecule_lines.strip('[').strip(']').replace("'","").replace("\\n", "").split(", ")
tempdir = tempfile.mkdtemp()
molecule_file = os.path.join(tempdir, "molecule.pdb")
with open(molecule_file, "w") as f:
for line in molecule_lines:
f.write("%s\n" % line)
molecule_mdtraj = md.load(molecule_file)
return molecule_mdtraj
first_protein, first_ligand = raw_dataset.iloc[0]["protein_pdb"], raw_dataset.iloc[0]["ligand_pdb"]
protein_mdtraj = convert_lines_to_mdtraj(first_protein)
ligand_mdtraj = convert_lines_to_mdtraj(first_ligand)
complex_mdtraj = combine_mdtraj(protein_mdtraj, ligand_mdtraj)
ngltraj = visualize_complex(complex_mdtraj)
ngltraj
grid_featurizer = dc.feat.RdkitGridFeaturizer(
voxel_width=16.0, feature_types="voxel_combined",
voxel_feature_types=["ecfp", "splif", "hbond", "pi_stack", "cation_pi", "salt_bridge"],
ecfp_power=5, splif_power=5, parallel=True, flatten=True)
compound_featurizer = dc.feat.CircularFingerprint(size=128)
seed = 23
np.random.seed(seed)
PDBBIND_tasks, (train_dataset, valid_dataset, test_dataset), transformers = dc.molnet.load_pdbbind_grid()
from sklearn.ensemble import RandomForestRegressor
sklearn_model = RandomForestRegressor(n_estimators=10, max_features='sqrt')
sklearn_model.random_state = seed
model = dc.models.SklearnModel(sklearn_model)
model.fit(train_dataset)
from deepchem.utils.evaluate import Evaluator
import pandas as pd
metric = dc.metrics.Metric(dc.metrics.r2_score)
evaluator = Evaluator(model, train_dataset, transformers)
train_r2score = evaluator.compute_model_performance([metric])
print("RF Train set R^2 %f" % (train_r2score["r2_score"]))
evaluator = Evaluator(model, valid_dataset, transformers)
valid_r2score = evaluator.compute_model_performance([metric])
print("RF Valid set R^2 %f" % (valid_r2score["r2_score"]))
predictions = model.predict(test_dataset)
print(predictions)
# TODO(rbharath): This cell visualizes the ligand with highest predicted activity. Commenting it out for now. Fix this later
#from deepchem.utils.visualization import visualize_ligand
#top_ligand = predictions.iloc[0]['ids']
#ligand1 = convert_lines_to_mdtraj(dataset.loc[dataset['complex_id']==top_ligand]['ligand_pdb'].values[0])
#if DISPLAY:
# ngltraj = visualize_ligand(ligand1)
# ngltraj
# TODO(rbharath): This cell visualizes the ligand with lowest predicted activity. Commenting it out for now. Fix this later
#worst_ligand = predictions.iloc[predictions.shape[0]-2]['ids']
#ligand1 = convert_lines_to_mdtraj(dataset.loc[dataset['complex_id']==worst_ligand]['ligand_pdb'].values[0])
#if DISPLAY:
# ngltraj = visualize_ligand(ligand1)
# ngltraj
def rf_model_builder(model_params, model_dir):
sklearn_model = RandomForestRegressor(**model_params)
sklearn_model.random_state = seed
return dc.models.SklearnModel(sklearn_model, model_dir)
params_dict = {
"n_estimators": [10, 50, 100],
"max_features": ["auto", "sqrt", "log2", None],
}
metric = dc.metrics.Metric(dc.metrics.r2_score)
optimizer = dc.hyper.HyperparamOpt(rf_model_builder)
best_rf, best_rf_hyperparams, all_rf_results = optimizer.hyperparam_search(
params_dict, train_dataset, valid_dataset, transformers,
metric=metric)
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
rf_predicted_test = best_rf.predict(test_dataset)
rf_true_test = test_dataset.y
plt.scatter(rf_predicted_test, rf_true_test)
plt.xlabel('Predicted pIC50s')
plt.ylabel('True IC50')
plt.title(r'RF predicted IC50 vs. True pIC50')
plt.xlim([2, 11])
plt.ylim([2, 11])
plt.plot([2, 11], [2, 11], color='k')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's see what the dataset looks like
Step2: One of the missions of deepchem is to form a synapse between the chemical and the algorithmic worlds
Step3: Now that we're oriented, let's use ML to do some chemistry.
Step4: Note how we separate our featurizers into those that featurize individual chemical compounds, compound_featurizers, and those that featurize molecular complexes, complex_featurizers.
Step5: Now, we conduct a train-test split. If you'd like, you can choose splittype="scaffold" instead to perform a train-test split based on Bemis-Murcko scaffolds.
Step6: In this simple example, in a few yet intuitive lines of code, we traced the machine learning arc from featurizing a raw dataset to fitting and evaluating a model.
Step7: The protein-ligand complex view.
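The featurize → split → fit → evaluate arc of Step 6 hinges on scoring a model with R² on held-out data. As a minimal pure-numpy stand-in (not DeepChem's `Evaluator`; the synthetic feature matrix, true weights, and noise level here are invented for illustration), one can fit a least-squares model on a training split and score it on the validation split:

```python
import numpy as np

rng = np.random.default_rng(23)

# Synthetic stand-in for featurized compounds: X -> y with a little noise
X = rng.normal(size=(200, 5))
w_true = np.array([1.5, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=200)

# 80/20 train/validation split
n_train = 160
X_tr, X_te = X[:n_train], X[n_train:]
y_tr, y_te = y[:n_train], y[n_train:]

# Fit least-squares weights on the training split only
w_hat, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

def r2_score(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

r2 = r2_score(y_te, X_te @ w_hat)
print(r2)
```

Evaluating on the held-out split rather than the training split is the same discipline the notebook follows when it reports train and validation R² separately.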
|
3,670
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
distance=np.array([4,7.75,7.75,14,14,19])
fake_reddening=np.array([0,0,2,2,5,5])
plt.plot(distance, fake_reddening,lw=5)
plt.xlabel(r'Distance Modulus')
plt.ylabel('Reddening')
plt.xlim(4,19)
plt.ylim(0,7)
plt.title("Our Contrived Reddening Profile")
plt.savefig("contrived_profile.pdf")
import pandas as pd
nstars=50
#randomly sprinkle 50 stars at 50 distances along the posterior horizontal axis (total bins=120)
randdist=np.random.randint(0,119,nstars)
#create our array to hold the corresponding reddening values taken from our contrived reddening profile
red=np.empty(nstars)
for i in range(0,nstars):
if randdist[i] < 30: #no clouds at a distance modulus bin <30, so reddening is zero
red[i]=0
if randdist[i] >= 30 and randdist[i] < 80: #one cloud between distance modulus bin of 30-80 (with reddening=2 mags)
red[i]=200
if randdist[i] >= 80: #two cloud between distance modulus bin of 80-120 (with cumulative reddening=5 mags)
red[i]=500
#create array to hold our stellar posterior information
post_array=np.empty((nstars,700,120))
#repeat 50 times for 50 stars
for i in range(0,nstars):
mean=np.array([randdist[i],red[i]]) #mean of bivariate normal distribution set to random distance and corresponding reddening value
cov=np.array([[1,0],[0,100]]) #standard deviations of ~0.1 in both reddening and distance modulus, converted to the number of bins that corresponds to on each axis
data = np.random.multivariate_normal(mean, cov, 5000) #draw samples from that distribution
df = pd.DataFrame(data, columns=["mu", "E"])
H, yedges, xedges = np.histogram2d(df['E'],df['mu'],bins=[700,120], range=[[0, 700], [0, 120]]) #use those samples to create 2dhistogram, our stellar posterior!
post_array[i,:,:]=H #store the posterior array for that star
sumarray=np.sum(post_array, axis=0)
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.set_xlabel(r'$\mu$', fontsize=16)
ax.set_ylabel(r'$\mathrm{E} \left( B - V \right)$', fontsize=16)
ax.imshow(sumarray, origin='lower',aspect='auto', cmap='bwr', interpolation='nearest')
ax.set_title("Our Stacked Stellar Posterior")
plt.savefig("simulated_stacked.pdf")
import h5py
nslices=12
fwrite = h5py.File("/n/fink1/czucker/Data/simulated_data.h5", "w")
pdfs = fwrite.create_dataset("/stellar_pdfs", (nstars,700,120), dtype='f')
pdfs[:,:,:]=post_array #write our stellar posterior arrays calculated above to an hdf5 dataset
intensity=np.array([0,0,0,0,0,0,2,0,0,0,3,0]) #our identical CO array for every star
co_array=np.empty((nstars,12))
for i in range(0,nstars):
co_array[i,:]=intensity
co_data = fwrite.create_dataset("/co_data", (nstars,nslices), dtype='f')
co_data[:,:]=co_array #write our co arrays to an hdf5 dataset
fwrite.close()
import emcee
from dustcurve import model
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from dustcurve import io
import warnings
import line_profiler
from dustcurve import plot_posterior
%matplotlib inline
#this code pulls snippets from the PHYS201 week 9 MCMC notebook written by Vinny Manohoran and the PHYS201 L9 solutions,
#written by Tom Dimiduk and Kevin Shain
#suppress obnoxious deprecation warning that doesn't affect output
warnings.filterwarnings("ignore", category=Warning, module="emcee")
# the model has 24 parameters; we'll use 100 walkers, 1000 steps each, at 5 different temps
nslices=12
ndim=24
nwalkers = 100
nsteps = 1000
ntemps=5
#fetch the required likelihood and prior arguments for PTSampler
ldata,pdata=io.fetch_args('simulated_data.h5',[4,19,0,10],1.0)
#set off the d7 and d11 parameters at random within a reasonable distance range surrounding the cloud
result=[np.random.randint(4,19) for i in range(nslices)]
result[6]=np.random.uniform(8,11)
result[10]=np.random.uniform(14,17)
result.extend(1.0 for i in range(nslices))
#set up the starting position array and add variance (up to 1 in distance modulus) to each walker
starting_positions = [[result + 1e-1*np.random.randn(ndim) for i in range(nwalkers)] for j in range(ntemps)]
#set up the sampler object
sampler = emcee.PTSampler(ntemps, nwalkers, ndim, model.log_likelihood, model.log_prior, loglargs=(ldata), logpargs=[pdata])
#burn in, and save final positions for all parameters, which we'll then set off our walkers at for the "real" thing
post_burn_pos, prob, state = sampler.run_mcmc(starting_positions, 300)
#use the autocorrelation times from the burnin run to set the thinning integer for the "real" run.
#autocorr=sampler.get_autocorr_time()
#autocorr_cold=autocorr[0,:]
#thin_int=int(np.mean(autocorr_cold))
sampler.reset()
# run the sampler. We use iPython's %time directive to tell us
# how long it took (in a script, you would leave out "%time")
%time sampler.run_mcmc(post_burn_pos, nsteps)
print('Done')
#Extract the coldest [beta=1] temperature chain from the sampler object; discard first 100 samples from every walker
samples_cold = sampler.chain[0,:,:,:]
traces_cold = samples_cold.reshape(-1, ndim).T
#check out acceptance fraction:
print("Our mean acceptance fraction for the coldest chain is %.2f" % np.mean(sampler.acceptance_fraction[0]))
#check out the autocorrelation times:
print("The autocorrelation times are...")
autocorr=sampler.get_autocorr_time()
autocorr_cold=autocorr[0,:]
print(autocorr_cold)
#set up the figures to plot the hot chain
fig, (ax_d7, ax_d11, ax_c7, ax_c11) = plt.subplots(4, figsize=(10,10))
plt.tight_layout()
ax_d7.set(ylabel='d7')
ax_d11.set(ylabel='d11')
ax_c7.set(ylabel='c7')
ax_c11.set(ylabel='c11')
#plot the chains for the first ten walkers
sns.tsplot(traces_cold[6], ax=ax_d7)
sns.tsplot(traces_cold[10], ax=ax_d11)
sns.tsplot(traces_cold[18], ax=ax_c7)
sns.tsplot(traces_cold[22], ax=ax_c11)
parameter_samples = pd.DataFrame({'d7': traces_cold[6], 'd11': traces_cold[10], 'c7': traces_cold[18], 'c11':traces_cold[22]})
q = parameter_samples.quantile([0.16,0.50,0.84], axis=0)
#what values do we get?
print("d7 = {:.2f} + {:.2f} - {:.2f}".format(q['d7'][0.50],
q['d7'][0.84]-q['d7'][0.50],
q['d7'][0.50]-q['d7'][0.16]))
print("d11 = {:.2f} + {:.2f} - {:.2f}".format(q['d11'][0.50],
q['d11'][0.84]-q['d11'][0.50],
q['d11'][0.50]-q['d11'][0.16]))
print("c7 = {:.2f} + {:.2f} - {:.2f}".format(q['c7'][0.50],
q['c7'][0.84]-q['c7'][0.50],
q['c7'][0.50]-q['c7'][0.16]))
print("c11 = {:.2f} + {:.2f} - {:.2f}".format(q['c11'][0.50],
q['c11'][0.84]-q['c11'][0.50],
q['c11'][0.50]-q['c11'][0.16]))
from dustcurve import pixclass
unique_co,indices,unique_post,ratio=ldata
pixObj=pixclass.PixStars('/n/fink1/czucker/Data/simulated_data.h5')
post_array=pixObj.get_p()
#find best fit values for each of the 24 parameters (12 d's and 12 c's)
theta=pd.DataFrame(traces_cold)
best=theta.quantile(.50, axis=1).values
#plot the reddening profile over the stacked, normalized stellar posterior surfaces
plot_posterior.plot_posterior(post_array,np.linspace(4,19,120),np.linspace(0,7,700),best,ratio,unique_co,normcol=False)
from dustcurve import diagnostics
fnames='simulated_data.h5'
gr, chain_ensemble=diagnostics.run_chains(fnames,nwalkers=100,nsteps=1000,runs=5, ratio=1.0, simulated=True)
print("The GR diagnostic corresponding to d7,d11,c7, and c11 are...")
print(gr[6],gr[10],gr[18],gr[22])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: From the reddening profile, we can tell that this is a two cloud model towards a specific line of sight. We see that there are no clouds (and thus no reddening) between a distance modulus of 4 and 7.75. Then, we see that there's a cloud of dust at 7.75 which adds 2 magnitudes of reddening to our profile. Then we see that there's a second dust cloud at a distance modulus of 14 that adds 3 magnitudes of reddening to our profile, bringing our cumulative reddening at distance modulus 14 and beyond up to 5 magnitudes.
Step2: We've plotted below what the posterior would look like if you stacked all the individual stellar posteriors on top of each other (i.e. you summed along the depth axis). See that it looks similar to our contrived reddening profile, which is what we want! Note that the x and y axes correspond to the bin indices. When we integrate over distance modulus in our likelihood function, we only care about the reddening "ledges". We'll perform this integral from distance modulus=4 to distance modulus=19, over the entire length of the stellar posterior array for each individual star.
Step3: Now we are going to make some fake CO (carbon monoxide) data to go with our fake stellar posteriors. We have twelve CO intensity values for each star (for twelve CO velocity slices). The set of intensity values will be the same for each star, since they are all presumed to be along the same line of sight. Our fake reddening profile corresponds to two clouds (two CO emission features) at two different velocity slices. The first cloud has a reddening of 2 magnitudes, producing a cumulative reddening of 2 magnitudes. The second cloud has a reddening of 3 magnitudes, producing a cumulative reddening of 5 magnitudes (3 from itself and 2 from the other cloud). Assuming gas-to-dust coefficients of one, this means that we need to set one slice to a CO intensity ("CO_I") value of 2 and a second slice to a CO intensity value of 3. All the other velocity slice intensities will be set to zero, because we are assuming there are only two clouds in all the slices.
Step4: Based on our stellar posterior and CO data, we know what the values of d7, d11, c7, and c11 SHOULD be. d7 needs to come before d11, at a distance modulus of 7.75, and d11 needs to be at a distance modulus of 14. Both c7 and c11 should have values near 1, because that's what we assumed when creating our reddening profile and assigning CO intensity values. Having those velocity slices correspond to those distances and coefficients will produce the highest value of the line integral along our reddening profile (see the initial reddening profile figure), which is essentially our likelihood function. The distances and coefficients of the other ten slices cannot be constrained, because their intensities are set to zero and thus contribute nothing to the reddening profile.
Step5: Success! We see that the returned values for d7, d11, c7, and c11 are close to the true values of 7.75, 14.00, 1.0, and 1.0. Thus, our PTSampler produced accurate results, well within the estimated uncertainties.
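The convergence check in the final cell reports a Gelman-Rubin (GR) diagnostic per parameter. A minimal numpy sketch of the statistic (the classic between-/within-chain variance form; this is an illustration with invented synthetic chains, not the `dustcurve.diagnostics` implementation) is:

```python
import numpy as np

def gelman_rubin(chains):
    # chains: (n_chains, n_samples) samples of ONE parameter
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_hat / W)               # ~1 when chains agree

rng = np.random.default_rng(0)
mixed = rng.normal(7.75, 0.1, size=(4, 2000))  # chains sampling the same target
stuck = mixed.copy()
stuck[0] += 1.0                                # one chain stuck at a wrong mode
print(gelman_rubin(mixed), gelman_rubin(stuck))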
|
3,671
|
<ASSISTANT_TASK:>
Python Code:
2+3
2*3
2/3
2**3
# Import a library in Python
import numpy as np # the "as np" clause assigns a shorter alias to the library, which is faster to type.
np.sin(3)
(np.sin(3))*(np.sin(2))
np.log(3)
np.exp(3)
# Example
a = 5
print (a) # Print my variable
b = -15
print (b)
c = 3.1416
print (c)
d = 'clubes de ciencia'
print (d)
e = False
print (e)
print (type(a))
print (type(b))
print (type(c))
print (type(d))
print (type(e))
# Example
mi_lista = [1,2,3,5,6,-3.1416]
mi_lista_diversa = [1,2,'clubes', 'de', 'ciencia', 3.1416, False]
print (mi_lista)
print (mi_lista_diversa)
# Example
print (mi_lista[0]) # Read the first element, located at position n=0
print (mi_lista_diversa[0])
print (type(mi_lista[5])) # Read the type of the variable at position n=5
# Example
print (mi_lista[0:3]) # Read the elements between n=0 and m=2
# Example
mi_lista = ('cadena de texto', 15, 2.8, 'otro dato', 25)
print (mi_lista)
print (mi_lista[2]) # read the third element of the tuple
print (mi_lista[2:4]) # read the elements at positions 2 and 3 of the tuple
# Example 1
mi_diccionario = {'grupo_1':4, 'grupo_2':6, 'grupo_3':7, 'grupo_4':3}
print (mi_diccionario['grupo_2'])
# Example 2 with different types of elements
informacion_persona = {'nombres':'Elon', 'apellidos':'Musk', 'edad':45, 'nacionalidad':'Sudafricano',
'educacion':['Administracion de empresas','Física'],'empresas':['Zip2','PyPal','SpaceX','SolarCity']}
print (informacion_persona['educacion'])
print (informacion_persona['empresas'])
# Example
color_semaforo = 'amarillo'
if color_semaforo == 'verde':
print ("Cruzar la calle")
else:
print ("Esperar")
# Example
dia_semana = 'lunes'
if dia_semana == 'sabado' or dia_semana == 'domingo':
print ('Me levanto a las 10 de la mañana')
else:
print ('Me levanto antes de las 7am')
# Example
costo_compra = 90
if costo_compra <= 100:
print ("Pago en efectivo")
elif costo_compra > 100 and costo_compra < 300:
print ("Pago con tarjeta de débito")
else:
print ("Pago con tarjeta de crédito")
# Example
anio = 2001
while anio <= 2012:
    print ("Informes del Año", str(anio))
    anio = anio + 1 # increase anio by 1
# Example
cuenta = 10
while cuenta >= 0:
print ('faltan '+str(cuenta)+' minutos')
cuenta += -1
# Example
mi_tupla = ('rosa', 'verde', 'celeste', 'amarillo')
for color in mi_tupla:
print (color)
# Example
dias_semana = ['lunes','martes','miercoles','jueves','viernes','sabado','domingo']
for i in dias_semana:
if (i == dias_semana[-1]) or (i == dias_semana[-2]):
print ('Hoy seguire aprendiendo de programación')
else:
print ('Hoy tengo que ir al colegio')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Multiplication
Step2: Division
Step3: Power
Step4: Trigonometric Functions
Step5: Logarithm and Exponential
Step6: Programming Challenge
Step7: Variable of type int
Step8: Variable of type float
Step9: Variable of type str
Step10: Variable of type bool
Step11: How do I find out the type of a variable?
Step12: Programming Challenge
Step13: How can I look at an element or elements of my list?
Step14: How do I read the elements between positions n and m?
Step15: Programming Challenge
Step16: Programming Challenge
Step17: Programming Challenge
Step18: Programming Challenge
Step19: Programming Challenge
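To tie the structures above together, here is one extra short example (the names and grades are invented for illustration) that combines a dictionary, a for loop, and an if condition:

```python
# Example combining a dictionary, a for loop and an if condition
grades = {'Ana': 9.5, 'Luis': 7.0, 'Marta': 8.2}
honors = []
for name, grade in grades.items():
    if grade >= 8.0:
        honors.append(name)
print(honors)  # prints ['Ana', 'Marta']
```

Since Python 3.7, dictionaries preserve insertion order, so the filtered names come out in the order they were defined.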
|
3,672
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import seaborn as sns
from google.cloud import bigquery
import matplotlib as plt
%matplotlib inline
bq = bigquery.Client()
query = """
SELECT
  weight_pounds,
  is_male,
  gestation_weeks,
  mother_age,
  plurality,
  mother_race
FROM
  `bigquery-public-data.samples.natality`
WHERE
  weight_pounds IS NOT NULL
  AND is_male = true
  AND gestation_weeks = 38
  AND mother_age = 28
  AND mother_race = 1
  AND plurality = 1
  AND RAND() < 0.01
"""
df = bq.query(query).to_dataframe()
df.head()
fig = sns.distplot(df[["weight_pounds"]])
fig.set_title("Distribution of baby weight")
fig.set_xlabel("weight_pounds")
fig.figure.savefig("weight_distrib.png")
#average weight_pounds for this cross section
np.mean(df.weight_pounds)
np.std(df.weight_pounds)
weeks = 36
age = 28
query = """
SELECT
  weight_pounds,
  is_male,
  gestation_weeks,
  mother_age,
  plurality,
  mother_race
FROM
  `bigquery-public-data.samples.natality`
WHERE
  weight_pounds IS NOT NULL
  AND is_male = true
  AND gestation_weeks = {}
  AND mother_age = {}
  AND mother_race = 1
  AND plurality = 1
  AND RAND() < 0.01
""".format(weeks, age)
df = bq.query(query).to_dataframe()
print('weeks={} age={} mean={} stddev={}'.format(weeks, age, np.mean(df.weight_pounds), np.std(df.weight_pounds)))
import os
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras.utils import to_categorical
from tensorflow import keras
from tensorflow import feature_column as fc
from tensorflow.keras import layers, models, Model
%matplotlib inline
df = pd.read_csv("./data/babyweight_train.csv")
# prepare inputs
df.is_male = df.is_male.astype(str)
df.mother_race.fillna(0, inplace = True)
df.mother_race = df.mother_race.astype(str)
# create categorical label
def categorical_weight(weight_pounds):
if weight_pounds < 3.31:
return 0
elif weight_pounds >= 3.31 and weight_pounds < 5.5:
return 1
elif weight_pounds >= 5.5 and weight_pounds < 8.8:
return 2
else:
return 3
df["weight_category"] = df.weight_pounds.apply(lambda x: categorical_weight(x))
df.head()
def encode_labels(classes):
one_hots = to_categorical(classes)
return one_hots
FEATURES = ['is_male', 'mother_age', 'plurality', 'gestation_weeks', 'mother_race']
LABEL_CLS = ['weight_category']
LABEL_REG = ['weight_pounds']
N_TRAIN = int(df.shape[0] * 0.80)
X_train = df[FEATURES][:N_TRAIN]
X_valid = df[FEATURES][N_TRAIN:]
y_train_cls = encode_labels(df[LABEL_CLS][:N_TRAIN])
y_train_reg = df[LABEL_REG][:N_TRAIN]
y_valid_cls = encode_labels(df[LABEL_CLS][N_TRAIN:])
y_valid_reg = df[LABEL_REG][N_TRAIN:]
# train/validation dataset for classification model
cls_train_data = tf.data.Dataset.from_tensor_slices((X_train.to_dict('list'), y_train_cls))
cls_valid_data = tf.data.Dataset.from_tensor_slices((X_valid.to_dict('list'), y_valid_cls))
# train/validation dataset for regression model
reg_train_data = tf.data.Dataset.from_tensor_slices((X_train.to_dict('list'), y_train_reg.values))
reg_valid_data = tf.data.Dataset.from_tensor_slices((X_valid.to_dict('list'), y_valid_reg.values))
# Examine the two datasets. Notice the different label values.
for data_type in [cls_train_data, reg_train_data]:
for dict_slice in data_type.take(1):
print("{}\n".format(dict_slice))
# create feature columns to handle categorical variables
numeric_columns = [fc.numeric_column("mother_age"),
fc.numeric_column("gestation_weeks")]
CATEGORIES = {
'plurality': list(df.plurality.unique()),
'is_male' : list(df.is_male.unique()),
'mother_race': list(df.mother_race.unique())
}
categorical_columns = []
for feature, vocab in CATEGORIES.items():
cat_col = fc.categorical_column_with_vocabulary_list(
key=feature, vocabulary_list=vocab, dtype=tf.string)
categorical_columns.append(fc.indicator_column(cat_col))
# create Inputs for model
inputs = {colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="float32")
for colname in ["mother_age", "gestation_weeks"]}
inputs.update({colname: tf.keras.layers.Input(
name=colname, shape=(), dtype=tf.string)
for colname in ["plurality", "is_male", "mother_race"]})
# build DenseFeatures for the model
dnn_inputs = layers.DenseFeatures(categorical_columns+numeric_columns)(inputs)
# create hidden layers
h1 = layers.Dense(20, activation="relu")(dnn_inputs)
h2 = layers.Dense(10, activation="relu")(h1)
# create classification model
cls_output = layers.Dense(4, activation="softmax")(h2)
cls_model = tf.keras.models.Model(inputs=inputs, outputs=cls_output)
cls_model.compile(optimizer='adam',
loss=tf.keras.losses.CategoricalCrossentropy(),
metrics=['accuracy'])
# create regression model
reg_output = layers.Dense(1, activation="relu")(h2)
reg_model = tf.keras.models.Model(inputs=inputs, outputs=reg_output)
reg_model.compile(optimizer='adam',
loss=tf.keras.losses.MeanSquaredError(),
metrics=['mse'])
# train the classification model
cls_model.fit(cls_train_data.batch(50), epochs=1)
val_loss, val_accuracy = cls_model.evaluate(cls_valid_data.batch(X_valid.shape[0]))
print("Validation accuracy for classification model: {}".format(val_accuracy))
# train the regression model
reg_model.fit(reg_train_data.batch(50), epochs=1)
val_loss, val_mse = reg_model.evaluate(reg_valid_data.batch(X_valid.shape[0]))
print("Validation RMSE for regression model: {}".format(val_mse**0.5))
preds = reg_model.predict(x={"gestation_weeks": tf.convert_to_tensor([38]),
"is_male": tf.convert_to_tensor(["True"]),
"mother_age": tf.convert_to_tensor([28]),
"mother_race": tf.convert_to_tensor(["1.0"]),
"plurality": tf.convert_to_tensor(["Single(1)"])},
steps=1).squeeze()
preds
preds = cls_model.predict(x={"gestation_weeks": tf.convert_to_tensor([38]),
"is_male": tf.convert_to_tensor(["True"]),
"mother_age": tf.convert_to_tensor([28]),
"mother_race": tf.convert_to_tensor(["1.0"]),
"plurality": tf.convert_to_tensor(["Single(1)"])},
steps=1).squeeze()
preds
objects = ('very_low', 'low', 'average', 'high')
y_pos = np.arange(len(objects))
predictions = list(preds)
plt.bar(y_pos, predictions, align='center', alpha=0.5)
plt.xticks(y_pos, objects)
plt.title('Baby weight prediction')
plt.show()
# Read in the data and preprocess
df = pd.read_csv("./data/babyweight_train.csv")
# prepare inputs
df.is_male = df.is_male.astype(str)
df.mother_race.fillna(0, inplace = True)
df.mother_race = df.mother_race.astype(str)
# create categorical label
MIN = np.min(df.weight_pounds)
MAX = np.max(df.weight_pounds)
NBUCKETS = 50
def categorical_weight(weight_pounds, weight_min, weight_max, nbuckets=10):
buckets = np.linspace(weight_min, weight_max, nbuckets)
return np.digitize(weight_pounds, buckets) - 1
df["weight_category"] = df.weight_pounds.apply(lambda x: categorical_weight(x, MIN, MAX, NBUCKETS))
def encode_labels(classes):
one_hots = to_categorical(classes)
return one_hots
FEATURES = ['is_male', 'mother_age', 'plurality', 'gestation_weeks', 'mother_race']
LABEL_COLUMN = ['weight_category']
N_TRAIN = int(df.shape[0] * 0.80)
X_train, y_train = df[FEATURES][:N_TRAIN], encode_labels(df[LABEL_COLUMN][:N_TRAIN])
X_valid, y_valid = df[FEATURES][N_TRAIN:], encode_labels(df[LABEL_COLUMN][N_TRAIN:])
# create the training dataset
train_data = tf.data.Dataset.from_tensor_slices((X_train.to_dict('list'), y_train))
valid_data = tf.data.Dataset.from_tensor_slices((X_valid.to_dict('list'), y_valid))
# create feature columns to handle categorical variables
numeric_columns = [fc.numeric_column("mother_age"),
fc.numeric_column("gestation_weeks")]
CATEGORIES = {
'plurality': list(df.plurality.unique()),
'is_male' : list(df.is_male.unique()),
'mother_race': list(df.mother_race.unique())
}
categorical_columns = []
for feature, vocab in CATEGORIES.items():
cat_col = fc.categorical_column_with_vocabulary_list(
key=feature, vocabulary_list=vocab, dtype=tf.string)
categorical_columns.append(fc.indicator_column(cat_col))
# create Inputs for model
inputs = {colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="float32")
for colname in ["mother_age", "gestation_weeks"]}
inputs.update({colname: tf.keras.layers.Input(
name=colname, shape=(), dtype=tf.string)
for colname in ["plurality", "is_male", "mother_race"]})
# build DenseFeatures for the model
dnn_inputs = layers.DenseFeatures(categorical_columns+numeric_columns)(inputs)
# model
h1 = layers.Dense(20, activation="relu")(dnn_inputs)
h2 = layers.Dense(10, activation="relu")(h1)
output = layers.Dense(NBUCKETS, activation="softmax")(h2)
model = tf.keras.models.Model(inputs=inputs, outputs=output)
model.compile(optimizer='adam',
loss=tf.keras.losses.CategoricalCrossentropy(),
metrics=['accuracy'])
# train the model
model.fit(train_data.batch(50), epochs=1)
preds = model.predict(x={"gestation_weeks": tf.convert_to_tensor([38]),
"is_male": tf.convert_to_tensor(["True"]),
"mother_age": tf.convert_to_tensor([28]),
"mother_race": tf.convert_to_tensor(["1.0"]),
"plurality": tf.convert_to_tensor(["Single(1)"])},
steps=1).squeeze()
objects = [str(_) for _ in range(NBUCKETS)]
y_pos = np.arange(len(objects))
predictions = list(preds)
plt.bar(y_pos, predictions, align='center', alpha=0.5)
plt.xticks(y_pos, objects)
plt.title('Baby weight prediction')
plt.show()
import numpy as np
import tensorflow as tf
from tensorflow import keras
MIN_Y = 3
MAX_Y = 20
input_size = 10
inputs = keras.layers.Input(shape=(input_size,))
h1 = keras.layers.Dense(20, 'relu')(inputs)
h2 = keras.layers.Dense(1, 'sigmoid')(h1) # 0-1 range
output = keras.layers.Lambda(lambda y : (y*(MAX_Y-MIN_Y) + MIN_Y))(h2) # scaled
model = keras.Model(inputs, output)
# fit the model
model.compile(optimizer='adam', loss='mse')
batch_size = 2048
for i in range(0, 10):
x = np.random.rand(batch_size, input_size)
y = 0.5*(x[:,0] + x[:,1]) * (MAX_Y-MIN_Y) + MIN_Y
model.fit(x, y)
# verify
min_y = np.finfo(np.float64).max
max_y = np.finfo(np.float64).min
for i in range(0, 10):
x = np.random.randn(batch_size, input_size)
y = model.predict(x)
min_y = min(y.min(), min_y)
max_y = max(y.max(), max_y)
print('min={} max={}'.format(min_y, max_y))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Reframing Design Pattern
Step3: Comparing categorical label and regression
Step4: We'll use the same features for both models. But we need to create a categorical weight label for the classification model.
Step5: Create tf.data datasets for both classification and regression.
Step6: First, train the classification model and examine the validation accuracy.
Step7: Next, we'll train the regression model and examine the validation RMSE.
Step8: The regression model gives a single numeric prediction of baby weight.
Step9: The classification model predicts a probability for each bucket of values.
Step10: Increasing the number of categorical labels
Step11: Create the feature columns and build the model.
Step12: Make a prediction on the example above.
Step13: Restricting the prediction range
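Steps 4 and 10 above both revolve around reframing the continuous weight label as buckets. The core transformation the notebook uses (evenly spaced `np.linspace` edges plus `np.digitize`) works standalone like this:

```python
import numpy as np

# Reframing regression as classification: bucketize a continuous target.
# Same idea as the notebook's categorical_weight(): evenly spaced bucket
# edges between the min and max, then np.digitize maps values to bucket IDs.
def bucketize(values, vmin, vmax, nbuckets):
    edges = np.linspace(vmin, vmax, nbuckets)
    return np.digitize(values, edges) - 1

weights = np.array([1.0, 4.2, 6.6, 9.9])
labels = bucketize(weights, 1.0, 10.0, 10)
print(labels)  # [0 3 5 8] -- small weights land in low buckets, large in high
```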
|
3,673
|
<ASSISTANT_TASK:>
Python Code:
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
# TODO: Implement Function
a = 0
b = 1
greyscale_min = 0
greyscale_max = 255
return a + (((x - greyscale_min)*(b - a))/(greyscale_max - greyscale_min))
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
# TODO: Implement Function
encoded_labels = np.zeros((len(x), 10))
for i in range(len(x)):
encoded_labels[i][x[i]] = 1
return encoded_labels
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
# DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
# DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
import numpy as np
def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
# TODO: Implement Function
image_input = tf.placeholder(tf.float32, shape=(None, *image_shape), name="x")
return image_input
def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
# TODO: Implement Function
label_input = tf.placeholder(tf.float32, shape=(None, n_classes), name="y")
return label_input
def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
# TODO: Implement Function
return tf.placeholder(tf.float32, name="keep_prob")
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
# TODO: Implement Function
height = conv_ksize[0]
width = conv_ksize[1]
input_depth = x_tensor.get_shape().as_list()[3]
filter_weights = tf.Variable(tf.truncated_normal((height, width, input_depth, conv_num_outputs), stddev=0.1)) # (height, width, input_depth, output_depth)
filter_bias = tf.Variable(tf.zeros(conv_num_outputs))
conv = tf.nn.conv2d(x_tensor, filter_weights, (1, *conv_strides, 1), 'SAME') + filter_bias
conv_layer = tf.nn.relu(conv)
conv_pool = tf.nn.max_pool(conv_layer, (1, *pool_ksize, 1), (1, *pool_strides, 1), 'SAME')
return conv_pool
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
# TODO: Implement Function
shape_list = x_tensor.get_shape().as_list()[1:4]
#final_size = shape_list[1]*shape_list[2]*shape_list[3]
final_size = np.prod(np.array(shape_list))
return tf.reshape(x_tensor, [-1, final_size])
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
# TODO: Implement Function
weights = tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[1], num_outputs], stddev=0.1))
bias = tf.Variable(tf.zeros([num_outputs]))
fc = tf.add(tf.matmul(x_tensor, weights), bias)
fc = tf.nn.relu(fc)
# fc = tf.nn.dropout(fc, neural_net_keep_prob_input())
# fc = tf.nn.dropout(fc, keep_prob = keep_prob)
return fc
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
# TODO: Implement Function
weights = tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[1], num_outputs], stddev=0.1))
bias = tf.Variable(tf.zeros([num_outputs]))
out = tf.add(tf.matmul(x_tensor, weights), bias)
return out
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds dropout keep probability.
    : return: Tensor that represents logits
    """
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_num_outputs = 128
conv_ksize = (2, 2)
conv_strides = (1, 1)
pool_ksize = (1, 1)
pool_strides = (1, 1)
fc_num_outputs = 1024
num_outputs = 10
conv = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flat = flatten(conv)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fc = fully_conn(flat, fc_num_outputs)
    fc_after_drop_out = tf.nn.dropout(fc, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
out = output(fc_after_drop_out, num_outputs)
# TODO: return output
return out
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
# TODO: Implement Function
session.run(optimizer, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: keep_probability})
# DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
# TODO: Implement Function
loss = session.run(cost, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: 1.0})
valid_acc = session.run(accuracy, feed_dict={
x: valid_features,
y: valid_labels,
keep_prob: 1.0})
print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(
loss,
valid_acc))
# TODO: Tune Parameters
epochs = 10
batch_size = 512
keep_probability = 0.75
# DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
# DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
    """
    Test the saved model against the test dataset
    """
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
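The one-hot encoding step above ("One-hot encode") can also be written without the explicit Python loop used in the notebook. A vectorized NumPy sketch that produces the same output for labels 0–9:

```python
import numpy as np

# Vectorized one-hot encoding, equivalent to the notebook's loop version:
# row i gets a 1 in column x[i] and 0 everywhere else.
def one_hot_encode(x, n_classes=10):
    encoded = np.zeros((len(x), n_classes))
    encoded[np.arange(len(x)), x] = 1
    return encoded

labels = [1, 9, 0]
encoded = one_hot_encode(labels)
print(encoded.shape)  # (3, 10)
assert (encoded.argmax(axis=1) == np.array(labels)).all()
assert (encoded.sum(axis=1) == 1).all()
```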
|
3,674
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense, LSTM, GRU, Dropout
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
from keras.callbacks import TensorBoard
from keras import backend
# fix random seed for reproducibility
np.random.seed(7)
import shutil
import os
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:97% !important; }</style>")) #Set width of iPython cells
# load the dataset but only keep the top n words, zero the rest
# docs at: https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb/load_data
top_words = 5000
start_char=1
oov_char=2
index_from=3
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words,
start_char=start_char, oov_char = oov_char, index_from = index_from )
print(X_train.shape)
print(y_train.shape)
print(len(X_train[0]))
print(len(X_train[1]))
print(X_test.shape)
print(y_test.shape)
X_train[0]
# truncate and pad input sequences
max_review_length = 500
X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)
print(X_train.shape)
print(y_train.shape)
print(len(X_train[0]))
print(len(X_train[1]))
print(X_test.shape)
print(y_test.shape)
X_train[0]
y_train[0:20] # first 20 sentiment labels
word_index = imdb.get_word_index()
inv_word_index = np.empty(len(word_index)+index_from+3, dtype=object)
for k, v in word_index.items():
inv_word_index[v+index_from]=k
inv_word_index[0]='<pad>'
inv_word_index[1]='<start>'
inv_word_index[2]='<oov>'
word_index['ai']
inv_word_index[16942+index_from]
inv_word_index[:50]
def toText(wordIDs):
s = ''
for i in range(len(wordIDs)):
if wordIDs[i] != 0:
w = str(inv_word_index[wordIDs[i]])
s+= w + ' '
return s
for i in range(5):
print()
print(str(i) + ') sentiment = ' + ('negative' if y_train[i]==0 else 'positive'))
print(toText(X_train[i]))
backend.clear_session()
embedding_vector_length = 5
rnn_vector_length = 150
#activation = 'relu'
activation = 'sigmoid'
model = Sequential()
model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length))
model.add(Dropout(0.2))
#model.add(LSTM(rnn_vector_length, activation=activation))
model.add(GRU(rnn_vector_length, activation=activation))
model.add(Dropout(0.2))
model.add(Dense(1, activation=activation))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
log_dir = '/data/kaggle-tensorboard'
shutil.rmtree(log_dir, ignore_errors=True)
os.makedirs(log_dir)
tbCallBack = TensorBoard(log_dir=log_dir, histogram_freq=0, write_graph=True, write_images=True)
full_history=[]
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=8, batch_size=64, callbacks=[tbCallBack])
full_history += history.history['loss']
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
print( 'embedding_vector_length = ' + str( embedding_vector_length ))
print( 'rnn_vector_length = ' + str( rnn_vector_length ))
history.history # todo: add graph of all 4 values with history
plt.plot(history.history['loss'])
plt.yscale('log')
plt.show()
plt.plot(full_history)
plt.yscale('log')
plt.show()
import re
words_only = r'[^\s!,.?\-":;0-9]+'
re.findall(words_only, "Some text to, tokenize. something's.Something-else?".lower())
def encode(reviewText):
words = re.findall(words_only, reviewText.lower())
reviewIDs = [start_char]
for word in words:
index = word_index.get(word, oov_char -index_from) + index_from # defaults to oov_char for missing
if index > top_words:
index = oov_char
reviewIDs.append(index)
return reviewIDs
toText(encode('To code and back again. ikkyikyptangzooboing ni !!'))
# reviews from:
# https://www.pluggedin.com/movie-reviews/solo-a-star-wars-story
# http://badmovie-badreview.com/category/bad-reviews/
user_reviews = ["This movie is horrible",
"This wasn't a horrible movie and I liked it actually",
"This movie was great.",
"What a waste of time. It was too long and didn't make any sense.",
"This was boring and drab.",
"I liked the movie.",
"I didn't like the movie.",
"I like the lead actor but the movie as a whole fell flat",
"I don't know. It was ok, some good and some bad. Some will like it, some will not like it.",
"There are definitely heroic seeds at our favorite space scoundrel's core, though, seeds that simply need a little life experience to nurture them to growth. And that's exactly what this swooping heist tale is all about. You get a yarn filled with romance, high-stakes gambits, flashy sidekicks, a spunky robot and a whole lot of who's-going-to-outfox-who intrigue. Ultimately, it's the kind of colorful adventure that one could imagine Harrison Ford's version of Han recalling with a great deal of flourish … and a twinkle in his eye.",
"There are times to be politically correct and there are times to write things about midget movies, and I’m afraid that sharing Ankle Biters with the wider world is an impossible task without taking the low road, so to speak. There are horrible reasons for this, all of them the direct result of the midgets that this film contains, which makes it sound like I am blaming midgets for my inability to regulate my own moral temperament but I like to think I am a…big…enough person (geddit?) to admit that the problem rests with me, and not the disabled.",
"While Beowulf didn’t really remind me much of Beowulf, it did reminded me of something else. At first I thought it was Van Helsing, but that just wasn’t it. It only hit me when Beowulf finally told his backstory and suddenly even the dumbest of the dumb will realise that this is a simple ripoff of Blade. The badass hero, who is actually born from evil, now wants to destroy it, while he apparently has to fight his urges to become evil himself (not that it is mentioned beyond a single reference at the end of Beowulf) and even the music fits into the same range. Sadly Beowulf is not even nearly as interesting or entertaining as its role model. The only good aspects I can see in Beowulf would be the stupid beginning and Christopher Lamberts hair. But after those first 10 minutes, the movie becomes just boring and you don’t care much anymore.",
"You don't frighten us, English pig-dogs! Go and boil your bottoms, son of a silly person! I blow my nose at you, so-called Arthur King! You and all your silly English Knnnnnnnn-ighuts!!!"
]
X_user = np.array([encode(review) for review in user_reviews ])
X_user
X_user_pad = sequence.pad_sequences(X_user, maxlen=max_review_length)
X_user_pad
for row in X_user_pad:
print()
print(toText(row))
user_scores = model.predict(X_user_pad)
is_positive = user_scores >= 0.5 # I'm an optimist
for i in range(len(user_reviews)):
print( '\n%.2f %s:' % (user_scores[i][0], 'positive' if is_positive[i] else 'negative' ) + ' ' + user_reviews[i] )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load IMDB Dataset
Step2: Pad sequences so they are all the same length (required by keras/tensorflow).
Step3: Setup Vocabulary Dictionary
Step4: Convert Encoded Sentences to Readable Text
Step5: Build the model
Step6: Setup Tensorboard
Step7: Train the Model
Step8: Accuracy on the Test Set
Step9: # Hyper Parameter Tuning Notes
Step10: Evaluate on Custom Text
Step11: Features View
Step12: Results
|
3,675
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
# This is needed to display the images.
%matplotlib inline
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
sys.path
from utils import label_map_util
from utils import visualization_utils as vis_util
# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_11_06_2017'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
NUM_CLASSES = 90
opener = urllib.request.URLopener()
opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar_file = tarfile.open(MODEL_FILE)
for file in tar_file.getmembers():
file_name = os.path.basename(file.name)
if 'frozen_inference_graph.pb' in file_name:
tar_file.extract(file, os.getcwd())
detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
def load_image_into_numpy_array(image):
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8)
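`load_image_into_numpy_array` just reshapes the flat per-pixel data to `(height, width, 3)`; the same reshape on synthetic data, with no image file needed:

```python
import numpy as np

im_width, im_height = 4, 2
flat = np.arange(im_width * im_height * 3)  # stand-in for image.getdata()
arr = flat.reshape((im_height, im_width, 3)).astype(np.uint8)
```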
# For the sake of simplicity we will use only 2 images:
# image1.jpg
# image2.jpg
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3) ]
# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)
with detection_graph.as_default():
with tf.Session(graph=detection_graph) as sess:
for image_path in TEST_IMAGE_PATHS:
image = Image.open(image_path)
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = load_image_into_numpy_array(image)
# Expand dimensions since the model expects images to have shape: [1, None, None, 3]
image_np_expanded = np.expand_dims(image_np, axis=0)
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
# Each box represents a part of the image where a particular object was detected.
boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
# Each score represent how level of confidence for each of the objects.
# Score is shown on the result image, together with the class label.
scores = detection_graph.get_tensor_by_name('detection_scores:0')
classes = detection_graph.get_tensor_by_name('detection_classes:0')
num_detections = detection_graph.get_tensor_by_name('num_detections:0')
# Actual detection.
(boxes, scores, classes, num_detections) = sess.run(
[boxes, scores, classes, num_detections],
feed_dict={image_tensor: image_np_expanded})
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
np.squeeze(boxes),
np.squeeze(classes).astype(np.int32),
np.squeeze(scores),
category_index,
use_normalized_coordinates=True,
line_thickness=8)
plt.figure(figsize=IMAGE_SIZE)
plt.imshow(image_np)
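The batching before `sess.run` and the `np.squeeze` calls afterwards are plain shape operations; a small standalone illustration on a dummy image array:

```python
import numpy as np

image_np = np.zeros((8, 6, 3), dtype=np.uint8)  # hypothetical H x W x 3 image
batched = np.expand_dims(image_np, axis=0)      # the model expects [1, H, W, 3]
unbatched = np.squeeze(batched)                 # drop the batch axis again
```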
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Env setup
Step2: Object detection imports
Step3: Model preparation
Step4: Download Model
Step5: Load a (frozen) Tensorflow model into memory.
Step6: Loading label map
Step7: Helper code
Step8: Detection
|
3,676
|
<ASSISTANT_TASK:>
Python Code:
data = ("79,05\t102,40\t115,40\t126,10\t217,50\t240,70\n"
        "82,4\t101,5\t114,1\t123,1\t215,8\t239\n"
        "81,90\t104,80\t113,20\t121,50\t214,20\t237,50")
data = data.replace(',', '.')
lines = data.split('\n')
values = [line.split('\t') for line in lines]
values
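The parsing above is decimal-comma to dot, then split on newlines and tabs; the same pipeline on a short hypothetical measurement string, taken all the way to floats:

```python
raw = "1,5\t2,0\n3,0\t4,5"  # two lines, tab-separated, decimal commas
rows = [line.split('\t') for line in raw.replace(',', '.').split('\n')]
parsed = [[float(v) for v in row] for row in rows]
```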
import numpy as np
import pandas as pd
s = pd.DataFrame(values)
s
s.values.astype(float)
df = pd.DataFrame(s.values.astype(float), columns=range(6))
df
df ** 2
(df ** 2).ix[0][0]
freqs = np.array([82.4, 110., 146.8, 196., 246.9, 329.6]) # frequencies of the guitar strings, from low E to high E, in Hz
calibration_tensions = np.array([9.59, 11.61, 11.22, 8.43, 8.09, 8.9]) * 9.81 # calibration tensions found on package (in kg) converted to N
mu = calibration_tensions / (4 * 0.648**2 * freqs**2)
mu
psi = 4 * 0.648**2 * mu
psi
T = (df ** 2).values * psi
T
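The model behind these lines is the ideal vibrating string, $f = \sqrt{T/\mu}/(2L)$, so $T = \psi f^2$ with $\psi = 4L^2\mu$. A self-contained consistency check: deriving $\mu$ from the calibration tensions and then applying $\psi f^2$ must give those tensions back.

```python
import numpy as np

L = 0.648  # scale length in metres, as used above
freqs = np.array([82.4, 110., 146.8, 196., 246.9, 329.6])       # Hz
T_cal = np.array([9.59, 11.61, 11.22, 8.43, 8.09, 8.9]) * 9.81  # N
mu = T_cal / (4 * L**2 * freqs**2)  # linear mass density of each string
psi = 4 * L**2 * mu                 # so that T = psi * f**2
```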
T_end = T[1, :]
T_start = T[0, :]
mat = np.zeros((5, 6))
dT = T_end - T_start
dT
mat
np.nonzero(dT > 0)
tuned_string = np.nonzero(dT > 0)[0]
assert tuned_string.size == 1
tuned_string = tuned_string[0]
tuned_string
cnt = 0
for string in range(6):
if string == tuned_string:
continue
else:
for other_string in range(6):
if other_string == tuned_string:
mat[cnt, other_string] = 0
elif other_string == string:
mat[cnt, other_string] = dT[tuned_string] + dT[string]
else:
mat[cnt, other_string] = dT[string]
cnt += 1
mat[0]
mat[1]
mat
def make_matrix(T_end, T_start):
"""Builds the matrix that describes the effect of the individual rho_i on
the overall tuning change:  M * [rho_i] = [dT]
"""
mat = np.zeros((5, 6))
dT = T_end - T_start
upstrings = np.nonzero(dT > 0)[0]
downstrings = np.nonzero(dT < 0)[0]
if (upstrings.size == 5) and (downstrings.size == 1):
tuned_string = downstrings[0]
elif (upstrings.size == 1) and (downstrings.size == 5):
tuned_string = upstrings[0]
else:
raise Exception('problem: no changed string was detected!')
cnt = 0
for string in range(6):
if string == tuned_string:
continue
else:
for other_string in range(6):
if other_string == tuned_string:
mat[cnt, other_string] = 0
elif other_string == string:
mat[cnt, other_string] = dT[tuned_string] + dT[string]
else:
mat[cnt, other_string] = dT[string]
cnt += 1
rhs = -dT[[_ for _ in range(6) if _ != tuned_string]]
return mat, rhs
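The changed-string detection inside `make_matrix` only inspects the signs of the tension deltas; a quick standalone check on a hypothetical `dT` where one string was tightened and the rest relaxed:

```python
import numpy as np

dT = np.array([-0.5, 2.0, -0.3, -0.4, -0.2, -0.6])  # hypothetical tension deltas
upstrings = np.nonzero(dT > 0)[0]
downstrings = np.nonzero(dT < 0)[0]
# one riser among five fallers: the riser is the string that was tuned
tuned_string = upstrings[0] if upstrings.size == 1 else downstrings[0]
```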
make_matrix(T_end, T_start)
dT
mat1, rhs1 = make_matrix(T[1, :], T[0, :])
mat2, rhs2 = make_matrix(T[2, :], T[1, :])
mat2
rhs2
total_mat = np.vstack((mat1, mat2))
total_rhs = np.vstack((rhs1[:, np.newaxis],
rhs2[:, np.newaxis]))
total_mat
total_rhs
total_mat.shape
total_rhs.shape
rho, err, rank, eigs = np.linalg.lstsq(total_mat, total_rhs)
err
rho
err
np.dot(mat1, rho)
rhs1
np.dot(mat2, rho)
rhs2
tuning_mat = np.zeros((6, 6))
for other_string in range(6):
for tuning_string in range(6):
if tuning_string == other_string:
tuning_mat[other_string, tuning_string] = 1.
else:
tuning_mat[other_string, tuning_string] = \
psi[tuning_string] / psi[other_string] * \
(- rho[other_string] / (1 + np.sum([rho[i] for i in range(6) if i != tuning_string])))
tuning_mat
np.dot(tuning_mat, np.array([1, 0, 0, 0, 0, 0]))
psi[0] / psi[1] * (- rho[1] / (1 + np.sum([rho[k] for k in range(6) if k != 0])))
def compute_tuning_matrix(psi, rho):
tuning_mat = np.zeros((6, 6))
for other_string in range(6):
for tuning_string in range(6):
if tuning_string == other_string:
tuning_mat[other_string, tuning_string] = 1.
else:
tuning_mat[other_string, tuning_string] = \
psi[tuning_string] / psi[other_string] * \
(- rho[other_string] / (1 + np.sum([rho[i] for i in range(6) if i != tuning_string])))
return tuning_mat
compute_tuning_matrix(psi, rho)
freqs
target_freqs = freqs.copy()
current_freqs = df.values[2, :]
target_freqs - current_freqs
target_dF = (target_freqs - current_freqs) ** 2
Delta_F = np.linalg.solve(tuning_mat, target_dF)
Delta_F
np.sqrt(Delta_F)
np.sqrt(np.dot(tuning_mat, Delta_F))
current_freqs
current_freqs + np.sqrt(np.dot(tuning_mat, Delta_F))
np.sqrt(Delta_F)
current_freqs + np.sqrt(Delta_F)
freqs = np.array([82.4, 110., 146.8, 196., 246.9, 329.6]) # frequencies of the guitar strings, from low E to high E, in Hz
calibration_tensions = np.array([7.94, 8.84, 8.34, 6.67, 4.99, 5.94]) * 9.81 # calibration tensions found on package (in kg) converted to N
mu = calibration_tensions / (4 * 0.648**2 * freqs**2)
mu
psi = 4 * 0.648**2 * mu
psi
lines = ("83,55\t94,70\t193,7\t138,8\t203\t190\n"
         "89,2\t93,3\t192,55\t135,2\t200,55\t186,9\n"
         "87,8\t99,2\t191,25\t130,9\t197,85\t183,7").replace(',', '.').split('\n')
history = np.array([line.split('\t') for line in lines], dtype=float)
history
T = (history ** 2) * psi
mat1, rhs1 = make_matrix(T[1, :], T[0, :])
mat2, rhs2 = make_matrix(T[2, :], T[1, :])
total_mat = np.vstack((mat1, mat2))
total_rhs = np.vstack((rhs1[:, np.newaxis],
rhs2[:, np.newaxis]))
rho, err, rank, eigs = np.linalg.lstsq(total_mat, total_rhs)
rho
err
np.dot(mat1, rho)
rhs1
np.dot(mat2, rho)
rhs2
np.dot(total_mat, rho)
total_rhs
tuning_mat = compute_tuning_matrix(psi, rho)
tuning_mat
target_freqs = freqs.copy()
current_freqs = history[2, :]
target_freqs - current_freqs
target_dF = (target_freqs - current_freqs) ** 2
Delta_F = np.linalg.solve(tuning_mat, target_dF)
Delta_F
np.sqrt(Delta_F)
for _ in np.sqrt(Delta_F):
print("{:.2f}".format(_))
freqs = np.array([82.4, 110., 146.8, 196., 246.9, 329.6]) # frequencies of the guitar strings, from low E to high E, in Hz
calibration_tensions = np.array([7.94, 8.84, 8.34, 6.67, 4.99, 5.94]) * 9.81 # calibration tensions found on package (in kg) converted to N
mu = calibration_tensions / (4 * 0.648**2 * freqs**2)
psi = 4 * 0.648**2 * mu
history = np.array([[84.6,111.4,148.8,193.8,244.3,328.7],
[82.1,111.6,149.0,194.1,244.5,329.0],
[81.8,114.6,148.8,193.7,244.2,328.7]])
history
T = (history ** 2) * psi
T
T.sum(axis=1) / 9.81
mat1, rhs1 = make_matrix(T[1, :], T[0, :])
mat2, rhs2 = make_matrix(T[2, :], T[1, :])
total_mat = np.vstack((mat1, mat2))
total_rhs = np.vstack((rhs1[:, np.newaxis],
rhs2[:, np.newaxis]))
rho, err, rank, eigs = np.linalg.lstsq(total_mat, total_rhs)
rho
err
np.dot(total_mat, rho)
total_rhs
tuning_mat = compute_tuning_matrix(psi, rho)
tuning_mat
def predict_changes(initial_T, final_T, tuning_mat):
"""Predicts changes in tuning (frequency) from a vector of tensions."""
print("initial tunings: {:.2f}, {:.2f}, {:.2f}, {:.2f}, {:.2f}, {:.2f}".format(*np.sqrt(initial_T)))
print("final tunings: {:.2f}, {:.2f}, {:.2f}, {:.2f}, {:.2f}, {:.2f}".format(*np.sqrt(final_T)))
print("predicted tunings")
target_freqs = freqs.copy()
current_freqs = history[2, :]
target_dF = target_freqs**2 - current_freqs**2
Delta_F = np.linalg.solve(tuning_mat, target_dF)
Delta_F
np.sqrt(current_freqs**2 + np.dot(tuning_mat, Delta_F))
target_freqs
print("initial: {:.1f}, {:.1f}, {:.1f}, {:.1f}, {:.1f}, {:.1f}".format(*[x for x in current_freqs]))
for step in range(6):
new_F = np.sqrt(current_freqs**2 + np.dot(tuning_mat, Delta_F * (np.arange(6) <= step)))
print(" step {}: {:.1f}, {:.1f}, {:.1f}, {:.1f}, {:.1f}, {:.1f}".format(step, *[x for x in new_F]))
def tuning_step(tuning_mat, initial_freqs, Delta_F, step_number):
"""Predicts the observed tuning as a function of the tuning step.

Convention: step 0 means nothing has changed.
"""
step = step_number - 1
if step == -1:
return initial_freqs
return np.sqrt(initial_freqs**2 + np.dot(tuning_mat, Delta_F * (np.arange(6) <= step)))
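A standalone check of the step convention above (the function is restated here so the snippet runs on its own): step 0 returns the initial frequencies untouched, and zero deltas leave the tuning unchanged at any step. The identity matrix stands in for a real tuning matrix.

```python
import numpy as np

def tuning_step(tuning_mat, initial_freqs, Delta_F, step_number):
    # convention: step 0 means nothing has been tuned yet
    step = step_number - 1
    if step == -1:
        return initial_freqs
    return np.sqrt(initial_freqs**2 + np.dot(tuning_mat, Delta_F * (np.arange(6) <= step)))

f0 = np.array([82.4, 110., 146.8, 196., 246.9, 329.6])
out0 = tuning_step(np.eye(6), f0, np.zeros(6), 0)  # no step taken
out6 = tuning_step(np.eye(6), f0, np.zeros(6), 6)  # zero deltas applied
```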
print_strings = lambda v: print("{:.1f}, {:.1f}, {:.1f}, {:.1f}, {:.1f}, {:.1f}".format(*v))
%matplotlib inline
import matplotlib.pyplot as plt
initial_freqs = current_freqs.copy()
for target, string in zip([tuning_step(tuning_mat, initial_freqs, Delta_F, i)[i-1] for i in range(1, 7)],
["low E", "A", "D", "G", "B", "high E"]):
print("string: {}, target frequency: {:.1f}".format(string, target))
measured_freqs = [82.0,114.5,148.5,193.5,244.0,328.7]
expected_freqs = tuning_step(tuning_mat, initial_freqs, Delta_F, 1)
print_strings(measured_freqs)
print_strings(expected_freqs)
plt.plot(measured_freqs, 'o')
plt.plot(expected_freqs, 'o')
measured_freqs = [82.3,110.0,148.8,194.1,244.5,329.2]
expected_freqs = tuning_step(tuning_mat, initial_freqs, Delta_F, 2)
print_strings(measured_freqs)
print_strings(expected_freqs)
plt.plot(measured_freqs, 'o')
plt.plot(expected_freqs, 'o')
measured_freqs = [82.4,110.2,146.9,194.4,244.7,329.5]
expected_freqs = tuning_step(tuning_mat, initial_freqs, Delta_F, 3)
print_strings(measured_freqs)
print_strings(expected_freqs)
plt.plot(measured_freqs, 'o')
plt.plot(expected_freqs, 'o')
measured_freqs = [82.4,110.2,147.0,196.1,244.6,329.2]
expected_freqs = tuning_step(tuning_mat, initial_freqs, Delta_F, 4)
print_strings(measured_freqs)
print_strings(expected_freqs)
plt.plot(measured_freqs, 'o')
plt.plot(expected_freqs, 'o')
measured_freqs = [82.4,110.2,146.9,196.3,246.8,329.2]
expected_freqs = tuning_step(tuning_mat, initial_freqs, Delta_F, 5)
print_strings(measured_freqs)
print_strings(expected_freqs)
plt.plot(measured_freqs, 'o')
plt.plot(expected_freqs, 'o')
measured_freqs = [82.4,110.1,146.9,196.3,246.7,329.6]
expected_freqs = tuning_step(tuning_mat, initial_freqs, Delta_F, 6)
print_strings(measured_freqs)
print_strings(expected_freqs)
plt.plot(measured_freqs, 'o')
plt.plot(expected_freqs, 'o')
freqs = np.array([82.4, 110., 146.8, 196., 246.9, 329.6]) # frequencies of the guitar strings, from low E to high E, in Hz
calibration_tensions = np.array([7.94, 8.84, 8.34, 6.67, 4.99, 5.94]) * 9.81 # calibration tensions found on package (in kg) converted to N
mu = calibration_tensions / (4 * 0.648**2 * freqs**2)
psi = 4 * 0.648**2 * mu
history = np.array([[83.7,112.7,151.3,204.2,251.2,333.4],
[84.9,112.3,150.9,203.3,250.5,332.8],
[85.0,109.4,151.1,203.9,250.9,333.0]])
history
T = (history ** 2) * psi
T.sum(axis=1) / 9.81
mat1, rhs1 = make_matrix(T[1, :], T[0, :])
mat2, rhs2 = make_matrix(T[2, :], T[1, :])
total_mat = np.vstack((mat1, mat2))
total_rhs = np.vstack((rhs1[:, np.newaxis],
rhs2[:, np.newaxis]))
rho, err, rank, eigs = np.linalg.lstsq(total_mat, total_rhs)
rho
err
np.dot(total_mat, rho)
total_rhs
history = np.vstack((history,
np.array([82.3,109.5,151.3,204.0,251.1,333.3])))
history
T = (history ** 2) * psi
mat3, rhs3 = make_matrix(T[3, :], T[2, :])
total_mat = np.vstack((mat1, mat2, mat3))
total_rhs = np.vstack((rhs1[:, np.newaxis],
rhs2[:, np.newaxis],
rhs3[:, np.newaxis]))
rho, err, rank, eigs = np.linalg.lstsq(total_mat, total_rhs)
rho
err
np.dot(total_mat, rho)
total_rhs
history = np.array([[81.08,110.53,151.17,203.89,251.09,333.24],
[82.9,110.47,151.04,203.57,250.96,333.19],
[82.87,112.37,150.96,203.42,250.69,333.02]])
history
T = (history ** 2) * psi
T.sum(axis=1) / 9.81
mat1, rhs1 = make_matrix(T[1, :], T[0, :])
mat2, rhs2 = make_matrix(T[2, :], T[1, :])
total_mat = np.vstack((mat1, mat2))
total_rhs = np.vstack((rhs1[:, np.newaxis],
rhs2[:, np.newaxis]))
rho, err, rank, eigs = np.linalg.lstsq(total_mat, total_rhs)
rho
err
np.dot(total_mat, rho)
total_rhs
tuning_mat = compute_tuning_matrix(psi, rho)
target_freqs = freqs.copy()
current_freqs = history[2, :]
target_dF = target_freqs**2 - current_freqs**2
Delta_F = np.linalg.solve(tuning_mat, target_dF)
Delta_F
initial_freqs = current_freqs.copy()
initial_freqs
for target, string in zip([tuning_step(tuning_mat, initial_freqs, Delta_F, i)[i-1] for i in range(1, 7)],
["low E", "A", "D", "G", "B", "high E"]):
print("string: {}, target frequency: {:.1f}".format(string, target))
measured_freqs = [82.18,112.37,150.94,203.43,250.67,333.08]
expected_freqs = tuning_step(tuning_mat, initial_freqs, Delta_F, 1)
print_strings(measured_freqs)
print_strings(expected_freqs)
measured_freqs = [82.44,109.75,150.91,203.92,251.13,333.26]
expected_freqs = tuning_step(tuning_mat, initial_freqs, Delta_F, 2)
print_strings(measured_freqs)
print_strings(expected_freqs)
measured_freqs = [82.66,110.12,146.65,204.46,251.57,333.51]
expected_freqs = tuning_step(tuning_mat, initial_freqs, Delta_F, 3)
print_strings(measured_freqs)
print_strings(expected_freqs)
measured_freqs = [82.86,110.39,146.96,195.81,251.94,333.99]
expected_freqs = tuning_step(tuning_mat, initial_freqs, Delta_F, 4)
print_strings(measured_freqs)
print_strings(expected_freqs)
measured_freqs = [82.97,110.52,147.1,196.29,246.53,334.24]
expected_freqs = tuning_step(tuning_mat, initial_freqs, Delta_F, 5)
print_strings(measured_freqs)
print_strings(expected_freqs)
measured_freqs = [83.2,110.75,147.17,196.55,246.67,329.84]
expected_freqs = tuning_step(tuning_mat, initial_freqs, Delta_F, 6)
print_strings(measured_freqs)
print_strings(expected_freqs)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 1
Step2: We need to build the vector of the $\Delta T _ i$ from the frequency measurements. We first build the vector of the $F_i$
Step3: important note
Step4: careful: the tensions are indeed in Newtons, not in kg as in the Numbers spreadsheet.
Step5: We expect only one value to have increased; here it is the first one.
Step7: We can write a function from the preceding code
Step8: We check that the RHS is correct
Step9: We can now build the two matrices
Step10: We can concatenate them
Step11: Inversion of the system built this way
Step12: We have the solution!
Step13: Meh...
Step14: This may be due to the error left over from the linear regression... we will still try to implement the inversion method for the tuning.
Step15: We check that these terms are the right ones.
Step16: We write the function that computes the matrix
Step17: We can now invert the matrix by computing the target tuning.
Step18: The $\Delta f$ to apply can be computed from the difference to reach
Step19: The gap we are trying to obtain is therefore
Step20: We must square it
Step21: We must take the square root of this big F to find the frequencies to apply
Step22: This should work. Let's see which frequency deltas need to be imposed.
Step23: Practical test
Step24: Conclusion
Step25: I re-measured the length of the strings and found 65 cm. I decide to keep the standard length of 0.648 m.
Step27: We can now re-measure the various frequencies on the guitar
Step28: This time we see that the rho_i are in the right order!
Step29: The check looks odd...
Step30: Part 3
Step31: To take the next measurements, we write the remaining part of the application with PyQt.
Step32: We can look at the sum of the tensions (and convert it back to kg to get a rough idea)
Step33: We see that we stay close, but the tensions are not exactly conserved. I wonder what the model predicts about this.
Step34: This time, the prediction is not so bad...
Step36: And now we can predict the changes from one state to another
Step37: careful with the computation of target_dF, it is
Step38: We can now write the expected sequence
Step40: We can write a function that simplifies the computation
Step41: Step 0
Step42: The target frequencies of the strings are
Step43: Step 1
Step44: A string
Step45: D string
Step46: G string
Step47: B string
Step48: High E string
Step49: I'd like to think that this is enough.
Step50: We can note that the inversion does not give satisfactory values, because the signs of the matrix-vector product do not match the observed ones (all minuses or all pluses). We could imagine taking more measurements to see what that gives
Step51: We give up and restart the calibration procedure, with higher precision
Step52: Everything has the same sign, which is not bad. And the error is small.
Step53: Low E string
Step54: A string
Step55: D string
Step56: G string
Step57: B string
Step58: E string
|
3,677
|
<ASSISTANT_TASK:>
Python Code:
import IPython as IP
IP.display.Image("example_of_name_matching_problems_mod.png",width=400,height=200,embed=True)
IP.display.Image("../aux/bad_csv_data_mod.png",width=500,height=500,embed=True)
# name of database
db_name = "tennis"
# name of db user
username = "testuser"
# db password for db user
password = "test623"
# location of atp data files
atpfile_directory = "../data/tennis_atp-master/"
# location of odds data files
oddsfiles_directory = "../data/odds_data/"
# we'll read and write pickle files here
pickle_dir = '../pickle_files/'
import sqlalchemy # pandas-mysql interface library
import sqlalchemy.exc # exception handling
from sqlalchemy import create_engine # needed to define db interface
import sys # for defining behavior under errors
import numpy as np # numerical libraries
import scipy as sp
import pandas as pd # for data analysis
import pandas.io.sql as sql # for interfacing with MySQL database
import matplotlib as mpl # a big library with plotting functionality
import matplotlib.pyplot as plt # a subset of matplotlib with most of the useful tools
import IPython as IP
%matplotlib inline
import pdb
#%qtconsole
# create an engine for interacting with the MySQL database
try:
eng_str = 'mysql+mysqldb://' + username + ':' + password + '@localhost/' + db_name
engine = create_engine(eng_str)
connection = engine.connect()
version = connection.execute("SELECT VERSION()")
print("Database version : ")
print(version.fetchone())
# report what went wrong if this fails.
except sqlalchemy.exc.DatabaseError as e:
reason = e.message
print("Error %s:" % (reason))
sys.exit(1)
# close the connection
finally:
if connection:
connection.close()
else:
print("Failed to create connection.")
# focus on most recent data; exclude Davis Cup stuff
startdate = '20100101'
enddate = '20161231'
with engine.begin() as connection:
odds = pd.read_sql_query("SELECT * FROM odds "
                         "WHERE DATE >= '" + startdate + "' "
                         "AND DATE <= '" + enddate + "';", connection)
with engine.begin() as connection:
matches = pd.read_sql_query("SELECT * FROM matches "
                            "WHERE tourney_date >= '" + startdate + "' "
                            "AND tourney_date <= '" + enddate + "' "
                            "AND tourney_name NOT LIKE 'Davis%%';", connection)
# view results
IP.display.display(odds[['ATP','Location','Tournament','Date','Round',
'Winner','Loser']].sort_values('Date')[0:5])
IP.display.display(matches[['tourney_id','tourney_name','tourney_date','round',
'winner_name','loser_name']].sort_values('tourney_date')[0:5])
odds[['Location','Winner','Loser']] = \
odds[['Location','Winner','Loser']].\
apply(lambda x: x.str.strip().str.lower().str.replace('-',' '),axis=1)
matches[['tourney_name','winner_name','loser_name']] = \
matches[['tourney_name','winner_name','loser_name']].\
apply(lambda x: x.str.strip().str.lower().str.replace('-',' '),axis=1)
# matches tournament identifiers are unique
g_matches = matches.groupby('tourney_id')
# odds tournament identifiers are recycled every year
g_odds= odds.groupby(['ATP','fname'])
def extract_odds_features(group):
sizes = len(group)
min_date = group['Date'].min()
max_date = group['Date'].max()
location = group['Location'].unique()[0]
return pd.Series({'size': sizes,'min_date':min_date,\
'max_date':max_date,'location':location})
def extract_matches_features(group):
sizes = len(group)
min_date = group['tourney_date'].min()
max_date = group['tourney_date'].max()
tourney_name = group['tourney_name'].unique()[0]
return pd.Series({'size': sizes,'min_date':min_date,\
'max_date':max_date,'tourney_name':tourney_name})
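Both extractors rely on `groupby().apply` returning one `Series` per group, which pandas assembles into a DataFrame indexed by the group key; a tiny self-contained illustration with hypothetical tournament rows:

```python
import pandas as pd

df = pd.DataFrame({'tourney_id': ['a', 'a', 'b'],
                   'Date': pd.to_datetime(['2010-01-01', '2010-01-05', '2010-02-01'])})
# one Series per group -> DataFrame indexed by tourney_id
feats = df.groupby('tourney_id').apply(
    lambda g: pd.Series({'size': len(g), 'min_date': g['Date'].min()}))
```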
g_odds = g_odds.apply(extract_odds_features).reset_index()
g_matches = g_matches.apply(extract_matches_features).reset_index()
tourney_lookup = pd.read_pickle(pickle_dir + 'tourney_lookup.pkl')
print("Snapshot of lookup table:")
IP.display.display(tourney_lookup.sort_values('o_name')[15:25])
def get_tourney_ID(o_row):
"""get_tourney_ID(o_row)

Input: a row from dataframe g_odds.
Output: a Series object with two elements: 1) a match ID,
and 2) a flag that is True if the sizes of the two tournaments are identical.
"""
# calculate the diffence in start/stop dates between this tournament and those in `matches`.
min_date_delta = np.abs(g_matches['min_date'] - o_row['min_date']).apply(lambda x: x.days)
max_date_delta = np.abs(g_matches['max_date'] - o_row['max_date']).apply(lambda x: x.days)
# find a list of candidate tournament names, based on lookup table
mtchs = (tourney_lookup['o_name']==o_row['location'])
if sum(mtchs)>0:
m_name = tourney_lookup.loc[mtchs,'m_name']
else:
print('no match found for record {}'.format(o_row['location']))
return ['Nan','Nan']
# the "right" tournament has the right name, and reasonable close start or stop dates
idx = ((min_date_delta <=3) | (max_date_delta <=1)) & (g_matches['tourney_name'].isin(m_name))
record = g_matches.loc[idx,'tourney_id']
# if there are no matches, print some diagnostic information and don't assign a match
if len(record)<1:
print("Warning: no match found for `odds` match {}, year {}".format(o_row.ATP, o_row.fname))
print("min date delta: {}, max date delta: {}, g_matches: {}".format(np.min(min_date_delta), \
np.min(max_date_delta), \
g_matches.loc[g_matches['tourney_name'].isin(m_name),'tourney_name']))
return pd.Series({'ID':'None','size':'NA'})
# if there are too many matches, print a warning and don't assign a match.
elif (len(record)>1):
print("Warning: multiple matches found for `odds` match {}".format(o_row.ATP))
return pd.Series({'ID':'Multiple','size':'NA'})
# otherwise, assign a match, and check if the sizes of the matches are consistent (a good double-check)
else:
size_flag = (g_matches.loc[idx,'size']==o_row['size'])
return pd.Series({'ID':record.iloc[0],'size':size_flag.iloc[0]})
# add columns to g_odds to hold match ID and also info about size-correspondence
g_odds.insert(len(g_odds.columns),'ID','None')
g_odds.insert(len(g_odds.columns),'sizes_match','NA')
# perform the match
g_odds[['ID','sizes_match']] = g_odds.apply(get_tourney_ID,axis=1).values
# add "size" columns to both dataframes
odds = pd.merge(g_odds[['ATP','fname','ID','size']],odds,how='inner',on=['ATP','fname'])
matches = pd.merge(g_matches[['tourney_id','size']],matches,how='inner',on=['tourney_id'])
# sum the sizes
if sum(g_odds['sizes_match']==True) != len(g_odds):
print("Warning: at least one tournament in `odds` is matched to a \
tournament in `matches` of a different size.")
else:
print("Sizes seem to match up.")
# for each tournament, label match numbers from 1 to n_tourneys
odds.insert(5,'match_num',0)
grouped = odds[['ID','match_num']].groupby('ID')
odds['match_num'] = grouped.transform(lambda x: 1+np.arange(len(x)))
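The per-tournament numbering uses `groupby().transform`, which must return a value for every row of its group; a minimal sketch of the same 1..n numbering on dummy IDs:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'ID': ['t1', 't1', 't2'], 'match_num': 0})
# number the rows 1..n within each ID group
df['match_num'] = df.groupby('ID')['match_num'].transform(lambda x: 1 + np.arange(len(x)))
```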
# add keys to both odds and match data
odds.insert(len(odds.columns),'key',np.arange(len(odds)))
matches.insert(len(matches.columns),'key',np.arange(len(matches)))
# figure out how many discrete sizes there are
print("size in odds: ", odds['size'].unique())
print("size in matches: ", matches['size'].unique())
print("unique round designators in odds: ", odds.Round.unique())
print("unique round designators in matches: ", matches['round'].unique())
# create a lookup table to be able to match on rounds
m_rounds = ['R128','R64','R32','R16','QF','SF','F','RR']
o_rounds = ['1st Round','2nd Round','3rd Round','4th Round', \
'Quarterfinals','Semifinals','The Final','Round Robin']
round_lookup_small = pd.DataFrame({'m_rounds': m_rounds[2:-1],\
'o_rounds':o_rounds[0:2]+o_rounds[4:-1]})
round_lookup_medium = pd.DataFrame({'m_rounds': m_rounds[1:-1],\
'o_rounds':o_rounds[0:3]+o_rounds[4:-1]})
round_lookup_large = pd.DataFrame({'m_rounds': m_rounds[0:-1],\
'o_rounds':o_rounds[0:-1]})
round_lookup_RR = pd.DataFrame({'m_rounds':m_rounds[5:],\
'o_rounds':o_rounds[5:]})
def map_rounds(x):
cur_name = x['Round']
t_size = x['size']
if t_size in [27,31]:
new_name = round_lookup_small.loc[round_lookup_small.o_rounds==cur_name,'m_rounds']
elif t_size in [47,55]:
new_name = round_lookup_medium.loc[round_lookup_medium.o_rounds==cur_name,'m_rounds']
elif t_size in [95, 127]:
new_name = round_lookup_large.loc[round_lookup_large.o_rounds==cur_name,'m_rounds']
else:
new_name = round_lookup_RR.loc[round_lookup_RR.o_rounds==cur_name,'m_rounds']
return new_name.iloc[0]
# translate round indentifier appropriately
odds.insert(4,'round','TBD')
odds['round'] = odds.apply(map_rounds,axis=1).values
IP.display.display(odds[0:4])
IP.display.display(matches[0:4])
t1=odds.ID.drop_duplicates().sort_values()
t2=matches.tourney_id.drop_duplicates().sort_values()
m_sizes=matches.loc[matches.tourney_id.isin(t1),['tourney_id','size']].drop_duplicates()
o_sizes=odds.loc[odds.ID.isin(t2),['ID','size']].drop_duplicates()
#comp = pd.merge(o_sizes,m_sizes,how='outer',left_on='ID',right_on='tourney_id')
print('sum of sizes of tournaments in odds: ', np.sum(o_sizes['size']))
print('sum of sizes of tournaments in matches: ', np.sum(m_sizes['size']))
matches = matches.loc[matches.tourney_id.isin(t1),:]
print("number of records in `odds`: ", len(odds))
print("number of records in `matches`: ", len(matches))
# extract dataframe with player names split into discrete 'words'
m_players = pd.merge(matches.winner_name.str.split(pat=' ',expand=True), \
matches.loser_name.str.split(pat=' ',expand=True), \
how='inner',left_index=True, right_index=True,suffixes=('_W','_L'))
# add on tournament, round, and match identifiers
m_players = pd.merge(matches[['tourney_id','match_num', 'round','key']], m_players,\
how='inner',left_index=True, right_index=True).sort_values(['tourney_id','round','1_W','1_L'])
# extract dataframe with player names split into discrete 'words'
o_players = pd.merge(odds.Winner.str.split(pat=' ',expand=True), \
odds.Loser.str.split(pat=' ',expand=True), \
how='inner',left_index=True, right_index=True,suffixes=('_W','_L'))
# add on tournament and round identifiers
o_players = pd.merge(odds[['ID','round','match_num','key']], o_players,\
how='inner',left_index=True, right_index=True).sort_values(['ID','round','0_W','0_L'])
print("m_players: ")
IP.display.display(m_players[0:5])
print("o_players")
IP.display.display(o_players[0:5])
# try for an exact match on last names of both winner and loser
A = pd.merge(m_players[['tourney_id','round','key','1_W','1_L']],\
o_players[['ID','round','key','0_W','0_L']],how='inner',\
left_on=['tourney_id','round','1_W','1_L'],\
right_on=['ID','round','0_W','0_L'],suffixes=['_m','_o'])
m_extras = m_players.loc[~m_players.key.isin(A.key_m),:]
o_extras = o_players.loc[~o_players.key.isin(A.key_o),:]
print("A total of {} matches down. {} remain.".format(len(A),len(m_extras)))
A.key_o.unique().size
IP.display.display(m_extras[0:10])
IP.display.display(o_extras[0:10])
def comp_str_lists(a,b):
"""Checks whether any of the strings in list a are also in list b."""
for i in a:
if i in b:
return True
return False
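`comp_str_lists` is just an any-token-overlap test between two name-word lists; restated here with a docstring so it can be exercised on its own:

```python
def comp_str_lists(a, b):
    """True if any string in list a also appears in list b."""
    for i in a:
        if i in b:
            return True
    return False
```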
def comp_all_cols(o_row):
"""Input: a row of o_players.
Output: a Series with keys 'key_o' and 'key_m' identifying the matched pair,
or 0 if no unique pairing is found.
"""
m_chunk = m_extras.loc[(m_extras.tourney_id==o_row['ID']) & (m_extras['round']==o_row['round'])]
o_winner = list(o_row[['0_W','1_W','2_W','3_W','4_W']].dropna())
o_loser = list(o_row[['0_L','1_L','2_L','3_L','4_L']].dropna())
pairing = []
if len(m_chunk)==0:
print("warning: no match/round pairing found for o_row key {}".format(o_row['key']))
return 0
for i, m_row in m_chunk.iterrows():
m_winner = list(m_row[['0_W','1_W','2_W','3_W','4_W']].dropna())
m_loser = list(m_row[['0_L','1_L','2_L','3_L','4_L']].dropna())
pairing.append(comp_str_lists(o_winner, m_winner) & comp_str_lists(o_loser, m_loser))
if sum(pairing) == 1:
m_row = m_chunk.iloc[np.array(pairing),:]
return pd.Series({'key_o':o_row['key'],'key_m':m_row['key'].iloc[0]})
elif sum(pairing)<1:
print("warning: no name matches for o_row key {}".format(o_row['key']))
return 0
else:
print("warning: multiple name matches for o_row key {}".format(o_row['key']))
return 0
new_matches = o_extras.apply(comp_all_cols,axis=1)
new_matches = new_matches.loc[(new_matches.key_m!=0)&(new_matches.key_o!=0),:]
A = pd.concat([A[['key_m','key_o']],new_matches])
m_extras = m_players.loc[~m_players.key.isin(A.key_m),:]
o_extras = o_players.loc[~o_players.key.isin(A.key_o),:]
print("A total of {} matches down. {} remain.".format(len(A),len(m_extras)))
IP.display.display(m_extras.sort_values('0_W')[0:10])
IP.display.display(o_extras.sort_values('1_L')[0:10])
B = pd.merge(m_extras[['tourney_id','key','1_W','1_L']],\
o_extras[['ID','round','key','0_W','0_L']],how='inner',\
left_on=['tourney_id','1_W','1_L'],\
right_on=['ID','0_W','0_L'],suffixes=['_m','_o'])
A = pd.concat([A,B[['key_m','key_o']]])
m_extras = m_players.loc[~m_players.key.isin(A.key_m),:]
o_extras = o_players.loc[~o_players.key.isin(A.key_o),:]
print("A total of {} matches down. {} remain.".format(len(A),len(m_extras)))
IP.display.display(m_extras[0:4])
IP.display.display(o_extras[0:4])
def comp_all_cols_no_rounds(o_row):
"""Input: a row of o_players.
Output: a Series with keys 'key_o' and 'key_m'; both are 0 if no unique
pairing is found.
"""
m_chunk = m_extras.loc[(m_extras.tourney_id==o_row['ID'])]
o_winner = list(o_row[['0_W','1_W','2_W','3_W','4_W']].dropna())
o_loser = list(o_row[['0_L','1_L','2_L','3_L','4_L']].dropna())
pairing = []
if len(m_chunk)==0:
print("warning: no match/round pairing found for o_row key {}".format(o_row['key']))
return 0
for i, m_row in m_chunk.iterrows():
m_winner = list(m_row[['0_W','1_W','2_W','3_W','4_W']].dropna())
m_loser = list(m_row[['0_L','1_L','2_L','3_L','4_L']].dropna())
pairing.append(comp_str_lists(o_winner,m_winner) & (comp_str_lists(o_loser,m_loser)))
if sum(pairing) == 1:
m_row = m_chunk.iloc[np.array(pairing),:]
return pd.Series({'key_o':o_row['key'],'key_m':m_row['key'].iloc[0]})
elif sum(pairing)<1:
print("warning: no name matches for o_row key {}".format(o_row['key']))
return pd.Series({'key_o':0,'key_m':0})
else:
print("warning: multiple name matches for o_row key {}".format(o_row['key']))
print(m_chunk.iloc[np.array(pairing),:])
return pd.Series({'key_o':0,'key_m':0})
new_matches = o_extras.apply(comp_all_cols_no_rounds,axis=1)
new_matches = new_matches.loc[(new_matches.key_m!=0)&(new_matches.key_o!=0),:]
A = pd.concat([A[['key_m','key_o']],new_matches])
m_extras = m_players.loc[~m_players.key.isin(A.key_m),:]
o_extras = o_players.loc[~o_players.key.isin(A.key_o),:]
print("A total of {} matches down. {} remain.".format(len(A),len(m_extras)))
IP.display.display(m_extras)
IP.display.display(o_extras)
# solve the delbonis problem
o_extras.loc[o_extras['1_W']=='bonis',('0_W','1_W')] = ['delbonis',None]
B = pd.merge(m_extras[['tourney_id','key','1_W','1_L']],\
o_extras[['ID','round','key','0_W','0_L']],how='inner',\
left_on=['tourney_id','1_W','1_L'],\
right_on=['ID','0_W','0_L'],suffixes=['_m','_o'])
A = pd.concat([A,B[['key_m','key_o']]])
m_extras = m_players.loc[~m_players.key.isin(A.key_m),:]
o_extras = o_players.loc[~o_players.key.isin(A.key_o),:]
print("A total of {} matches down. {} remain.".format(len(A),len(m_extras)))
IP.display.display(m_extras)
IP.display.display(o_extras)
m_extras = m_extras.sort_values(['tourney_id','1_W'])
o_extras = o_extras.sort_values(['ID','0_L'])
dregs=pd.DataFrame(list(zip(m_extras['key'].values, \
o_extras['key'].values)),\
columns=['key_m','key_o'])
A = pd.concat([A,dregs[['key_m','key_o']]])
def find_winner(o_row):
ID = o_row['ID']
# nominal winner in `odds`
name1 = o_row['0_W']
# nominal loser in `odds`
name2 = o_row['0_L']
# number of rounds played by nominal winner
rnds1 = len(odds.loc[(odds.ID==ID) & \
(odds.Winner.str.contains(name1) | \
odds.Loser.str.contains(name1)),:])
# number of rounds played by nominal loser
rnds2 = len(odds.loc[(odds.ID==ID) & \
(odds.Winner.str.contains(name2) | \
odds.Loser.str.contains(name2)),:])
# if nominal winner played more rounds, `odds` is right and `matches` is wrong
if rnds1>rnds2:
print('Winner: ', name1)
return 'm'
# otherwise, `odds` is wrong and `matches` is right.
elif rnds1<rnds2:
print('Winner: ', name2)
return 'o'
else:
print("function find_winner: ambiguous outcome")
return np.nan
mistake_idx = o_extras.apply(find_winner,axis=1)
# fix messed up `odds` records
o_errs = o_extras.loc[mistake_idx.values=='o',:]
if len(o_errs)!=0:
temp = odds.loc[odds.key.isin(o_errs['key']),'Winner']
odds.loc[odds.key.isin(o_errs['key']),'Winner']=\
odds.loc[odds.key.isin(o_errs['key']),'Loser'].values
odds.loc[odds.key.isin(o_errs['key']),'Loser']=temp.values
# fix messed up `matches` records
m_errs = m_extras.loc[mistake_idx.values=='m',:]
if len(m_errs)!=0:
temp = matches.loc[matches.key.isin(m_errs['key']),'winner_name']
matches.loc[matches.key.isin(m_errs['key']),'winner_name']=\
matches.loc[matches.key.isin(m_errs['key']),'loser_name'].values
matches.loc[matches.key.isin(m_errs['key']),'loser_name']=temp.values
#sanity check
print("odds has {} records".format(len(odds)))
print("our lookup table is of size {}".format(len(A)))
print("the table has {} unique keys for `matches`".format(len(A.key_m.unique())))
print("the table has {} unique keys for `odds`".format(len(A.key_o.unique())))
# take the key originally assigned to `matches` to be the main key
A.rename(columns = {'key_m':'key'},inplace=True)
# change name of `odds` key to match that in `A`
odds.rename(columns = {'key':'key_o'},inplace=True)
# add `matches` key to `odds`, and get rid of `odds` key
odds = pd.merge(odds,A,how='inner',on='key_o')
del odds['key_o']
# use the `odds` match numbers on `matches`
matches = matches.rename(columns={'match_num':'match_num_old'})
matches = pd.merge(matches,odds[['match_num','key']],how='inner',on='key')
matches.to_pickle(pickle_dir + 'matches.pkl')
odds.to_pickle(pickle_dir + 'odds.pkl')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bad match dates
Step2: Setup MySQL connection
Step3: All import statements here.
Step4: Try to connect to the tennis database on the local mysql host. If successful, print out the MySQL version number, if unsuccessful, exit gracefully.
Step11: Extract data from MySQL database
Step12: Note some issues already
Step13: Step I
Step14: For each tournament (group), extract features to assist in "tourney-matching", including the number of matches in the tournament, the maximum and minimum listed dates, the location (in odds) and the tournament name (in matches).
Step15: Define lookup-table connecting Location in odds to tourney_name in matches. Establishing this matching is not totally straightforward
Step17: Define a function that will take as input a set of features from one tournament in odds, and try to match it to one tournament in matches. The algorithm makes sure that dates are roughly the same, and that there is a correspondence between Location and tourney_name.
Step18: Perform the matches.
Step19: Merge match numbers back into bigger odds table, checking that the sizes of each "matched" tournament are the same.
Step20: Step II
Step21: We'll use round information to help with the task of pairing individual matches. To use round information effectively, we need to establish a correspondence between round signifiers in the two datasets. The exact correspondence depends on how many tennis matches are in each tournament.
Step22: With the round lookup tables defined, define a function that takes a row of odds, and figures how to map its round information to the round information in matches.
Step23: We'll apply that mapping to each row of the odds dataframe.
Step24: Before going on, a quick sanity check
Step25: Looks good. Now pare down the matches dataframe to contain only records in odds.
Step26: To do player name matching, we split each name, and attempt to match on substrings.
Step27: Match individual matches as follows
Step28: Take a quick peek at the remaining names to get a sense for what the issues are
Step30: Now match across all substrings in a name. To do so, we need a function that will return True if there is a match between one or more strings in two string lists.
Step32: We also need a function that will take each row of odds, and try to find a match for some appropriate subchunk of matches.
Step33: Update match list, and check progress.
Step34: Take a peek at the remaining names and see what the problems are.
Step35: Some rounds are wrong. Try re-matching both winner and loser last names, without insisting on round information.
Step37: That solved some. Now try matching unusual names, ignoring rounds. This involves slightly modifying the comparison function.
Step38: Still some errors. Let's take a look.
Step39: Two big problems. One is 'delbonis' vs. 'del bonis'.
Step40: What remains is exclusively a mismatch of winner and loser. First sort and match keys, then check who really won and correct the data.
Step41: To see who really won, calculate who played the most rounds.
Step42: Correct any mistakes in either table.
Step43: Now we can assign a single key for both odds and matches which corresponds on the match level. We can also standardize match numbers within tournaments.
Step44: Finally, we save our cleansed and matched datasets.
|
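The name-comparison helper `comp_str_lists` is called throughout the matching code above but is not defined in this excerpt. A minimal sketch consistent with how it is used (return True when the two lists of name fragments share at least one string) might look like this; it is an assumption about the helper's behavior, not the author's exact implementation:

```python
def comp_str_lists(list_a, list_b):
    # True when the two name-fragment lists share at least one string;
    # lowercase both sides to tolerate inconsistent capitalization
    a = {s.lower() for s in list_a}
    b = {s.lower() for s in list_b}
    return len(a & b) > 0

print(comp_str_lists(['federer', 'r.'], ['federer']))  # shared fragment
print(comp_str_lists(['del', 'bonis'], ['delbonis']))  # no shared fragment
```

Under this sketch the 'delbonis' vs. 'del bonis' failure noted in Step39 falls out naturally: no fragment matches until the odds record is rewritten.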
3,678
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, [None, real_dim], name='input_real')
inputs_z = tf.placeholder(tf.float32, [None, z_dim], name='input_z')
return inputs_real, inputs_z
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('generator', reuse=reuse): # finish this
# Hidden layer
#h1 = tf.contrib.layers.fully_connected(z, n_units)
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha*h1, h1)
# Logits and tanh output
#logits = tf.contrib.layers.fully_connected(h1, out_dim)
logits = tf.layers.dense(h1, out_dim)
out = tf.tanh(logits)
return out
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator', reuse=reuse): # finish this
# Hidden layer
#h1 = tf.contrib.layers.fully_connected(x, n_units)
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha*h1, h1)
#logits = tf.contrib.layers.fully_connected(h1, 1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size)
# g_model is the generator output
# Disriminator network here
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(
logits=d_logits_real, labels=tf.ones_like(d_logits_real)*(1.0-smooth)
))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(
logits=d_logits_fake, labels=tf.zeros_like(d_logits_fake)
))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(
logits=d_logits_fake, labels=tf.ones_like(d_logits_fake)
))
for v in tf.trainable_variables():
print(v.name)
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = []
d_vars = []
for v in t_vars:
if v.name.startswith('generator'):
g_vars.append(v)
elif v.name.startswith('discriminator'):
d_vars.append(v)
else:
print('Unexpected variable: {}'.format(v))
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
_ = view_samples(-1, samples)
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model Inputs
Step2: Generator network
Step3: Discriminator
Step4: Hyperparameters
Step5: Build network
Step6: Discriminator and Generator Losses
Step7: Optimizers
Step8: Training
Step9: Training loss
Step10: Generator samples from training
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
|
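The `tf.maximum(alpha*h1, h1)` expression used in both networks implements the leaky ReLU described above. The same activation in plain NumPy, for clarity:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # elementwise max(alpha*x, x): positives pass through unchanged,
    # negatives are scaled by alpha instead of being zeroed out
    return np.maximum(alpha * x, x)

x = np.array([-2.0, 0.0, 3.0])
print(leaky_relu(x))  # [-0.02  0.    3.  ]
```

The small negative slope keeps a nonzero gradient for negative inputs, which helps the generator receive learning signal early in training.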
3,679
|
<ASSISTANT_TASK:>
Python Code:
# not so functional function
a = 0
def global_sum(x):
global a
x += a
return x
print(global_sum(1))
print(a)
a = 11
print(global_sum(1))
print(a)
# not so functional function
a = 0
def global_sum(x):
global a
return x + a
print(global_sum(x=1))
print(a)
a = 11
print(global_sum(x=1))
print(a)
# a better functional function
def better_sum(a, x):
return a+x
num = better_sum(1, 1)
print(num)
num = better_sum(1, 3)
print(num)
num = better_sum(1, 1)
print(num)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the above example, the output of the function global_sum changed with the value of the global a, so it is not a functional (pure) function.
|
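If the appeal of the global `a` was the convenience of fixing one argument, `functools.partial` gives the same convenience without hidden state — a sketch:

```python
from functools import partial

def better_sum(a, x):
    # pure: the result depends only on the arguments
    return a + x

add_one = partial(better_sum, 1)  # fix a=1 explicitly, no global needed
print(add_one(1))  # 2
print(add_one(3))  # 4
```

Because `add_one` closes over an explicit value rather than reading mutable state, calling it with the same input always yields the same output.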
3,680
|
<ASSISTANT_TASK:>
Python Code:
employee_names = ['A','B','C','D','E','F','G','H']
n_days = 14 # number of days
days = list(range(n_days))
max_seq = 5 # max number of consecutive shifts
min_seq = 2 # min sequence without gaps
max_work = 10 # max total number of shifts
min_work = 7 # min total number of shifts
max_weekend = 3 # max number of weekend shifts
# number of required shifts for each day
shift_requirements =\
{
0: 5,
1: 7,
2: 6,
3: 4,
4: 5,
5: 5,
6: 5,
7: 6,
8: 7,
9: 4,
10: 2,
11: 5,
12: 6,
13: 4
}
# specific shift requests by employees for days
shift_requests =\
[
('A',0),
('B',5),
('C',8),
('D',2),
('E',9),
('F',5),
('G',1),
('H',7),
('A',3),
('B',4),
('C',4),
('D',9),
('F',1),
('F',2),
('F',3),
('F',5),
('F',7),
('H',13)
]
from pyschedule import Scenario, solvers, plotters, alt
# Create employee scheduling scenario
S = Scenario('employee_scheduling',horizon=n_days)
# Create employees as resources indexed by name
employees = { name : S.Resource(name) for name in employee_names }
# Create shifts as tasks
shifts = { (day,i) : S.Task('S_%s_%s'%(str(day),str(i)))
for day in shift_requirements if day in days
for i in range(shift_requirements[day]) }
# distribute shifts to days
for day,i in shifts:
# Assign shift to its day
S += shifts[day,i] >= day
# The shifts on each day are interchangeable, so add them to the same group
shifts[day,i].group = day
# Weekend shifts get attribute week_end
if day % 7 in {5,6}:
shifts[day,i].week_end = 1
# There are no restrictions, any shift can be done by any employee
for day,i in shifts:
shifts[day,i] += alt( S.resources() )
# Capacity restrictions
for name in employees:
# Maximal number of shifts
S += employees[name] <= max_work
# Minimal number of shifts
S += employees[name] >= min_work
# Maximal number of weekend shifts using attribute week_end
S += employees[name]['week_end'] <= max_weekend
# Max number of consecutive shifts
for name in employees:
for day in range(n_days):
S += employees[name][day:day+max_seq+1] <= max_seq
# Min sequence without gaps
for name in employees:
# No increase in last periods
S += employees[name][n_days-min_seq:].inc <= 0
# No decrease in first periods
S += employees[name][:min_seq].dec <= 0
# No diff during time horizon
for day in days[:-min_seq]:
S += employees[name][day:day+min_seq+1].diff <= 1
# Solve and plot scenario
if solvers.mip.solve(S,kind='CBC',msg=1,random_seed=6):
%matplotlib inline
plotters.matplotlib.plot(S,fig_size=(12,5))
else:
print('no solution found')
import random
import time
time_limit = 10 # time limit for each run
repeats = 5 # repeated random runs because CBC might get stuck
# Iteratively add shift requests until no solution exists
for name,day in shift_requests:
S += employees[name][day] >= 1
for i in range(repeats):
random_seed = random.randint(0,10000)
start_time = time.time()
status = solvers.mip.solve(S,kind='CBC',time_limit=time_limit,
random_seed=random_seed,msg=0)
# Break when solution found
if status:
break
print(name,day,'compute time:', time.time()-start_time)
# Break if all computed solution runs fail
if not status:
S -= employees[name][day] >= 1
print('cant fit last shift request')
# Plot the last computed solution
%matplotlib inline
plotters.matplotlib.plot(S,fig_size=(12,5))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Solving without shift requests
Step2: Solving with shift requests
|
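The sliding-window constraint `employees[name][day:day+max_seq+1] <= max_seq` is what caps consecutive shifts. A plain-Python check of that same property on a finished schedule (a hypothetical helper for verification, not part of the pyschedule API):

```python
def max_consecutive(schedule):
    # schedule: one 0/1 flag per day; return the longest run of 1s
    best = run = 0
    for worked in schedule:
        run = run + 1 if worked else 0
        best = max(best, run)
    return best

week = [1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1]
print(max_consecutive(week))  # 5 -> satisfies max_seq = 5
```

A check like this is handy for asserting that a solver's output actually respects the constraints you intended to encode.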
3,681
|
<ASSISTANT_TASK:>
Python Code:
import rebound
sim = rebound.Simulation()
sim.integrator = "whfast"
sim.dt = 2.*3.1415/365.*6 # 6 days in units where G=1
sim.add(m=1.)
sim.add(m=1e-3,a=1.)
sim.add(m=5e-3,a=2.25)
sim.move_to_com()
sim.automateSimulationArchive("simulationarchive.bin", walltime=1.,deletefile=True)
sim.integrate(2e5)
sim = None
sim = rebound.Simulation("simulationarchive.bin")
print("Time after loading simulation %.1f" %sim.t)
sim.automateSimulationArchive("simulationarchive.bin", walltime=1.,deletefile=False)
sim.integrate(sim.t+2e5)
sim = None
sim = rebound.Simulation("simulationarchive.bin")
print("Time after loading simulation %.1f" %sim.t)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We then initialize the SA and specify the output filename and output cadence. We can choose the output interval to either correspond to constant intervals in walltime (in seconds) or simulation time. Here, we choose walltime. To choose simulation time instead replace the walltime argument with interval.
Step2: Now, we can run the simulation forward in time.
Step3: Depending on how fast your computer is, the above command may take a couple of seconds. Once the simulation is done, we can delete it from memory and load it back in from the SA. You could do this at a later time. Note that this will even work if the SA file was generated on a different computer with a different operating system and even a different version of REBOUND. See Rein & Tamayo (2017) for a full discussion on machine independent code.
Step4: If we want to integrate the simulation further in time and append snapshots to the same SA, then we need to call the automateSimulationArchive method again (this is fail safe mechanism to avoid accidentally modifying a SA file). Note that we set the deletefile flag to False. Otherwise we would create a new empty SA file. This outputs a warning because the file already exists (which is ok since we want to append that file).
Step5: Now, let's integrate the simulation further in time.
Step6: If we repeat the process, one can see that the SA binary file now includes the new snapshots from the restarted simulation.
|
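The `interval` alternative to `walltime` mentioned in the text produces snapshots at fixed simulation times rather than fixed wall-clock times. A sketch of that cadence (illustrative only — rebound handles this internally):

```python
def snapshot_times(t_end, interval):
    # with interval-based cadence, snapshots land at fixed simulation times
    times, t = [], 0.0
    while t <= t_end:
        times.append(t)
        t += interval
    return times

print(snapshot_times(10.0, 2.5))  # [0.0, 2.5, 5.0, 7.5, 10.0]
```

Interval-based output gives evenly spaced states for analysis; walltime-based output bounds the file growth rate regardless of how expensive each timestep is.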
3,682
|
<ASSISTANT_TASK:>
Python Code:
def strange_sort_list(lst):
'''
Given list of integers, return list in strange order.
Strange sorting, is when you start with the minimum value,
then maximum of the remaining integers, then minimum and so on.
Examples:
strange_sort_list([1, 2, 3, 4]) == [1, 4, 2, 3]
strange_sort_list([5, 5, 5, 5]) == [5, 5, 5, 5]
strange_sort_list([]) == []
'''
res, switch = [], True
while lst:
res.append(min(lst) if switch else max(lst))
lst.remove(res[-1])
switch = not switch
return res
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
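The min/max-with-remove loop above is O(n²) because each `remove` scans the list. An equivalent O(n log n) variant sorts once and walks in from both ends (an alternative sketch, not the task's required solution):

```python
def strange_sort_fast(lst):
    # same ordering via a single sort and two pointers
    s = sorted(lst)
    out, lo, hi = [], 0, len(s) - 1
    take_min = True
    while lo <= hi:
        if take_min:
            out.append(s[lo])
            lo += 1
        else:
            out.append(s[hi])
            hi -= 1
        take_min = not take_min
    return out

print(strange_sort_fast([1, 2, 3, 4]))  # [1, 4, 2, 3]
print(strange_sort_fast([5, 5, 5, 5]))  # [5, 5, 5, 5]
```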
3,683
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
fig = plt.figure()
plt.show()
ax = plt.axes()
plt.show()
ax = plt.axes()
line1, = ax.plot([0, 1, 2, 1.5], [3, 1, 2, 4])
plt.show()
plt.plot([0, 1, 2, 1.5], [3, 1, 2, 4])
plt.show()
ax_left = plt.subplot(1, 2, 1)
plt.plot([2,1,3,4])
plt.title('left = #1')
ax_right = plt.subplot(1, 2, 2)
plt.plot([4,1,3,2])
plt.title('right = #2')
plt.show()
top_right_ax = plt.subplot(2, 3, 3, title='#3 = top-right')
bottom_left_ax = plt.subplot(2, 3, 4, title='#4 = bottom-left')
plt.show()
import numpy as np
x = np.linspace(-180, 180, 60)
y = np.linspace(-90, 90, 30)
x2d, y2d = np.meshgrid(x, y)
data = np.cos(3 * np.deg2rad(x2d)) + np.sin(2 * np.deg2rad(y2d))
plt.contourf(x, y, data)
plt.show()
plt.imshow(data, extent=[-180, 180, -90, 90],
interpolation='nearest', origin='lower')
plt.show()
plt.pcolormesh(x, y, data)
plt.show()
plt.scatter(x2d, y2d, c=data, s=15)
plt.show()
plt.bar(x, data.sum(axis=0), width=np.diff(x)[0])
plt.show()
plt.plot(x, data.sum(axis=0), linestyle='--',
marker='d', markersize=10, color='red')
plt.show()
fig = plt.figure()
ax = plt.axes()
# Adjust the created axes so its topmost extent is 0.8 of the figure.
fig.subplots_adjust(top=0.8)
fig.suptitle('Figure title', fontsize=18, fontweight='bold')
ax.set_title('Axes title', fontsize=16)
ax.set_xlabel('The X axis')
ax.set_ylabel('The Y axis $y=f(x)$', fontsize=16)
ax.text(0.5, 0.5, 'Text centered at (0.5, 0.5)\nin data coordinates.',
horizontalalignment='center', fontsize=14)
plt.show()
x = np.linspace(-3, 7, 200)
plt.plot(x, 0.5*x**3 - 3*x**2, linewidth=2,
label='$f(x)=0.5x^3-3x^2$')
plt.plot(x, 1.5*x**2 - 6*x, linewidth=2, linestyle='--',
label='Gradient of $f(x)$', )
plt.legend(loc='lower right')
plt.grid()
plt.show()
x = np.linspace(-180, 180, 60)
y = np.linspace(-90, 90, 30)
x2d, y2d = np.meshgrid(x, y)
data = np.cos(3 * np.deg2rad(x2d)) + np.sin(2 * np.deg2rad(y2d))
plt.contourf(x, y, data)
plt.colorbar(orientation='horizontal')
plt.show()
x = np.linspace(-3, 7, 200)
plt.plot(x, 0.5*x**3 - 3*x**2, linewidth=2)
plt.annotate('Local minimum',
xy=(4, -18),
xytext=(-2, -40), fontsize=15,
arrowprops={'facecolor': 'black', 'frac': 0.3})
plt.grid()
plt.show()
plt.plot(range(10))
plt.savefig('simple.svg')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The matplotlib Figure
Step2: On its own, drawing the Figure is uninteresting and will result in an empty piece of paper (that's why we didn't see anything above).
Step3: Matplotlib's pyplot module makes the process of creating graphics easier by allowing us to skip some of the tedious object construction. For example, we did not need to manually create the Figure with plt.figure because it was implicit that we needed a Figure when we created the Axes.
Step4: Notice how the Axes view limits (ax.viewLim) have been updated to include the whole of the line.
Step5: The simplicity of this example shows how visualisations can be produced quickly and easily with matplotlib, but it is worth remembering that for full control of the Figure and Axes artists we can mix the convenience of pyplot with the power of matplotlib's object oriented design.
Step6: Likewise, for plots above + below one another we would use two rows and one column, as in subplot(2, 1, <plot_number>).
Step7: Exercise 3 continued
Step8: Titles, legends, colorbars and annotations
Step9: The creation of a legend is as simple as adding a "label" to lines of interest. This can be done in the call to plt.plot and then followed up with a call to plt.legend
Step10: Colorbars are created with the plt.colorbar function
Step11: Matplotlib comes with powerful annotation capabilities, which are described in detail at http
Step12: Savefig & backends
|
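The `subplot(nrows, ncols, index)` calls above number plots from 1 in row-major order (left-to-right, then top-to-bottom). A small helper makes that mapping explicit (hypothetical — matplotlib does this internally):

```python
def subplot_position(nrows, ncols, index):
    # matplotlib counts subplot indices from 1, left-to-right, top-to-bottom
    row, col = divmod(index - 1, ncols)
    return row, col

print(subplot_position(2, 3, 3))  # (0, 2): top-right in a 2x3 grid
print(subplot_position(2, 3, 4))  # (1, 0): bottom-left in a 2x3 grid
```

This matches the `#3 = top-right` and `#4 = bottom-left` titles in the 2x3 example above.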
3,684
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import dismalpy as dp
import matplotlib.pyplot as plt
dta = pd.read_stata('data/lutkepohl2.dta')
dta.index = dta.qtr
endog = dta.loc['1960-04-01':'1978-10-01', ['dln_inv', 'dln_inc', 'dln_consump']]
exog = pd.Series(np.arange(len(endog)), index=endog.index, name='trend')
exog = endog['dln_consump']
mod = dp.ssm.VARMAX(endog[['dln_inv', 'dln_inc']], order=(2,0), trend='nc', exog=exog)
res = mod.fit(maxiter=1000)
print(res.summary())
mod = dp.ssm.VARMAX(endog[['dln_inv', 'dln_inc']], order=(0,2), error_cov_type='diagonal')
res = mod.fit(maxiter=1000)
print(res.summary())
mod = dp.ssm.VARMAX(endog[['dln_inv', 'dln_inc']], order=(1,1))
res = mod.fit(maxiter=1000)
print(res.summary())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model specification
Step2: Example 2
Step3: Caution
|
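For intuition about the VARMAX(p, q) orders fit above, here is a bivariate VAR(1) simulated from scratch (illustrative coefficients, not the estimates from the model summaries):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.5, 0.1],
              [0.2, 0.3]])   # stable: eigenvalues inside the unit circle
y = np.zeros((200, 2))
for t in range(1, 200):
    # y_t = A @ y_{t-1} + eps_t
    y[t] = A @ y[t - 1] + rng.normal(scale=0.1, size=2)
print(y.shape)  # (200, 2)
```

Each series depends on the lagged values of both series through `A`, which is exactly the cross-equation structure the VAR coefficient tables in the summaries report.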
3,685
|
<ASSISTANT_TASK:>
Python Code:
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py and create directories
import os
if not os.path.exists('utils.py'):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/code/soln/utils.py
if not os.path.exists('figs'):
!mkdir figs
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from empiricaldist import Pmf, Cdf
from utils import decorate, savefig
from scipy.stats import hypergeom
N = 100
K = 23
n = 19
ks = np.arange(12)
ps = hypergeom(N, K, n).pmf(ks)
plt.bar(ks, ps)
decorate(xlabel='Number of bears observed twice',
ylabel='PMF',
title='Hypergeometric distribution of k (known population 100)')
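The hypergeometric PMF plotted above can also be written from first principles with binomial coefficients, which doubles as a check that it normalizes (a sanity check, not needed for the analysis):

```python
from math import comb

N, K, n = 100, 23, 19

def hyper_pmf(k):
    # choose k of the K marked bears and n-k of the N-K unmarked ones
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Vandermonde's identity guarantees the PMF sums to 1 over feasible k
total = sum(hyper_pmf(k) for k in range(0, n + 1))
print(round(total, 9))  # 1.0
```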
Ns = np.arange(50, 501)
prior_N = Pmf(1, Ns)
prior_N.index.name = 'N'
K = 23
n = 19
k = 4
likelihood = hypergeom(Ns, K, n).pmf(k)
posterior_N = prior_N * likelihood
posterior_N.normalize()
posterior_N.plot()
decorate(xlabel='Population of bears (N)',
ylabel='PDF',
title='Posterior distribution of N')
posterior_N.max_prob()
posterior_N.mean()
posterior_N.credible_interval(0.9)
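For comparison with the posterior mean and credible interval above, the classical Lincoln–Petersen point estimate uses only the three counts:

```python
K = 23  # bears marked in the first session
n = 19  # bears seen in the second session
k = 4   # bears seen in both sessions

# Lincoln-Petersen estimator: assumes equal catchability in both sessions
N_hat = K * n / k
print(N_hat)  # 109.25
```

The Bayesian posterior mean comes out higher, partly because the uniform prior over N puts weight on large populations that the point estimate ignores.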
ps = np.linspace(0, 1, 101)
prior_p = Pmf(1, ps)
prior_p.index.name = 'p'
from utils import make_joint
joint_prior = make_joint(prior_N, prior_p)
N_mesh, p_mesh = np.meshgrid(Ns, ps)
N_mesh.shape
from scipy.stats import binom
like1 = binom.pmf(K, N_mesh, p_mesh)
like1.sum()
like2 = binom.pmf(k, K, p_mesh) * binom.pmf(n-k, N_mesh-K, p_mesh)
like2.sum()
from utils import normalize
joint_posterior = joint_prior * like1 * like2
normalize(joint_posterior)
def plot_contour(joint, **options):
"""
Plot a joint distribution.
joint: DataFrame representing a joint PMF
"""
cs = plt.contour(joint.columns, joint.index, joint, **options)
decorate(xlabel=joint.columns.name,
ylabel=joint.index.name)
return cs
plot_contour(joint_posterior)
decorate(title='Joint posterior distribution of N and p')
from utils import marginal
marginal_N = marginal(joint_posterior, 0)
posterior_N.plot(color='gray')
marginal_N.plot()
decorate(xlabel='Population of bears (N)',
ylabel='PDF',
title='Posterior marginal distribution of N')
marginal_N.mean(), marginal_N.credible_interval(0.9)
marginal_p = marginal(joint_posterior, 1)
marginal_p.plot()
decorate(xlabel='Probability of observing a bear',
ylabel='PDF',
title='Posterior marginal distribution of p')
from seaborn import JointGrid
def joint_plot(joint, **options):
x = joint.columns.name
x = 'x' if x is None else x
y = joint.index.name
y = 'y' if y is None else y
# make a JointGrid with minimal data
data = pd.DataFrame({x:[0], y:[0]})
g = JointGrid(x, y, data, **options)
# replace the contour plot
g.ax_joint.contour(joint.columns,
joint.index,
joint,
cmap='viridis')
# replace the marginals
marginal_x = marginal(joint, 0)
g.ax_marg_x.plot(marginal_x.qs, marginal_x.ps)
marginal_y = marginal(joint, 1)
g.ax_marg_y.plot(marginal_y.ps, marginal_y.qs)
joint_plot(joint_posterior)
mean = (23 + 19) / 2
N1 = 138
p = mean/N1
p
from scipy.stats import binom
binom(N1, p).std()
binom(N1, p).pmf([23, 19]).prod()
N2 = 173
p = mean/N2
p
binom(N2, p).std()
binom(N2, p).pmf([23, 19]).prod()
n0 = 20
n1 = 15
k11 = 3
k10 = n0 - k11
k01 = n1 - k11
k10, k01
Ns = np.arange(32, 350)
prior_N = Pmf(1, Ns)
prior_N.index.name = 'N'
p0, p1 = 0.2, 0.15
like0 = binom.pmf(n0, Ns, p0)
like0.sum()
like1 = binom.pmf(k01, Ns-n0, p1) * binom.pmf(k11, n0, p1)
like1.sum()
likelihood = like0 * like1
likelihood.shape
prior_N.shape
posterior_N = prior_N * likelihood
posterior_N.normalize()
posterior_N.plot()
decorate(xlabel='n',
ylabel='PMF',
title='Posterior marginal distribution of n with known p1, p2')
posterior_N.mean()
p0 = np.linspace(0, 1, 61)
prior_p0 = Pmf(1, p0)
prior_p0.index.name = 'p0'
p1 = np.linspace(0, 1, 51)
prior_p1 = Pmf(1, p1)
prior_p1.index.name = 'p1'
from utils import make_joint
joint = make_joint(prior_p0, prior_p1)
joint.shape
joint_pmf = Pmf(joint.transpose().stack())
joint_pmf.head()
joint_prior = make_joint(prior_N, joint_pmf)
joint_prior.shape
joint_prior.head()
likelihood = joint_prior.copy()
Ns = joint_prior.columns
for (p0, p1) in joint_prior.index:
like0 = binom.pmf(n0, Ns, p0)
like1 = binom.pmf(k01, Ns-n0, p1) * binom.pmf(k11, n0, p1)
likelihood.loc[p0, p1] = like0 * like1
likelihood.to_numpy().sum()
from utils import normalize
joint_posterior = joint_prior * likelihood
normalize(joint_posterior)
joint_posterior.shape
from utils import marginal
posterior_N = marginal(joint_posterior, 0)
posterior_N.plot()
posterior_pmf = marginal(joint_posterior, 1)
posterior_pmf.shape
posterior_joint_ps = posterior_pmf.unstack().transpose()
posterior_joint_ps.head()
def plot_contour(joint, **options):
    """Plot a joint distribution.

    joint: DataFrame representing a joint PMF
    """
cs = plt.contour(joint.columns, joint.index, joint, **options)
decorate(xlabel=joint.columns.name,
ylabel=joint.index.name)
return cs
plot_contour(posterior_joint_ps)
decorate(title='Posterior joint distribution for p1 and p2')
posterior_p1 = marginal(posterior_joint_ps, 0)
posterior_p2 = marginal(posterior_joint_ps, 1)
posterior_p1.plot(label='p1')
posterior_p2.plot(label='p2')
decorate(xlabel='Probability of finding a bug',
ylabel='PDF',
title='Posterior marginal distributions of p1 and p2')
posterior_p1.mean(), posterior_p1.credible_interval(0.9)
posterior_p2.mean(), posterior_p2.credible_interval(0.9)
joint_plot(posterior_joint_ps)
data = [-1000, 63, 55, 18, 69, 17, 21, 28]
index = pd.MultiIndex.from_product([[0, 1]]*3)
Kijk = pd.Series(data, index)
Kijk
Kijk.xs(1, level=0).sum()
Kijk.xs(1, level=1).sum()
Kijk.xs(1, level=2).sum()
n = pd.Series(1, range(3))
for level in n.index:
n[level] = Kijk.xs(1, level=level).sum()
n
Kixx = Kijk.sum(level=0)
Kixx
Kijx = Kijk.sum(level=[0,1])
Kijx
def num_observed(K):
return np.asarray(K)[1:].sum()
s0 = num_observed(Kixx)
s0
s1 = num_observed(Kijx)
s1
s2 = num_observed(Kijk)
s2
s = [s0, s1, s2]
s
k = pd.concat([Kixx, Kijx, Kijk])
k
k[1]
k[0,1]
k[0,0,1]
k[0]
N = 300
k0 = N - s[0]
k0
k00 = N - s[1]
k00
k000 = N - s[2]
k000
Ns = np.arange(s2, 500, 5)
prior_N = Pmf(1, Ns)
prior_N.index.name = 'N'
ps = np.linspace(0, 1, 101)
prior_p = Pmf(1, ps)
prior_p.index.name = 'p'
from utils import make_joint
joint_prior = make_joint(prior_N, prior_p)
def likelihood_round1(ps, n, k, s, joint_prior):
like = joint_prior.copy()
for N in joint_prior:
like[N] = binom.pmf(n[0], N, ps)
return like
like1 = likelihood_round1(ps, n, k, s, joint_prior)
def likelihood_round2(ps, n, k, s, joint_prior):
like = joint_prior.copy()
for N in joint_prior:
k0 = N - s[0]
like[N] = (binom.pmf(k[0,1], k0, ps) *
binom.pmf(k[1,1], k[1], ps))
return like
like2 = likelihood_round2(ps, n, k, s, joint_prior)
like2.to_numpy().sum()
joint_posterior2 = joint_prior * like1 * like2
normalize(joint_posterior2)
plot_contour(joint_posterior2)
decorate(title='Joint posterior of N and p')
marginal_N = marginal(joint_posterior2, 0)
marginal_N.plot()
decorate(xlabel='N',
ylabel='PDF',
title='Posterior marginal distribution of N')
marginal_N.mean(), marginal_N.credible_interval(0.9)
marginal_p = marginal(joint_posterior2, 1)
marginal_p.plot()
decorate(xlabel='p',
ylabel='PDF',
title='Posterior marginal distribution of p')
def likelihood_round3(ps, n, k, s, joint_prior):
like = joint_prior.copy()
for N in joint_prior:
k00 = N - s[1]
like[N] = (binom.pmf(k[0,0,1], k00, ps) *
binom.pmf(k[0,1,1], k[0,1], ps) *
binom.pmf(k[1,0,1], k[1,0], ps) *
binom.pmf(k[1,1,1], k[1,1], ps))
return like
like3 = likelihood_round3(ps, n, k, s, joint_prior)
like3.to_numpy().sum()
joint_posterior3 = joint_posterior2 * like3
normalize(joint_posterior3)
plot_contour(joint_posterior3)
decorate(title='Joint posterior of N and p')
marginal_N = marginal(joint_posterior3, 0)
marginal_N.plot()
decorate(xlabel='N',
ylabel='PDF',
title='Posterior marginal distribution of N')
marginal_N.mean(), marginal_N.credible_interval(0.9)
marginal_p = marginal(joint_posterior3, 1)
marginal_p.plot()
decorate(xlabel='p',
ylabel='PDF',
title='Posterior marginal distribution of p')
joint_plot(joint_posterior3)
data_sb = [-100, 60, 49, 4, 247, 112, 142, 12]
num_observed(data_sb)
def make_stats(data, num_rounds):
index = pd.MultiIndex.from_product([[0, 1]]*num_rounds)
K = pd.Series(data, index)
n = pd.Series(0, range(num_rounds))
for level in n.index:
n[level] = K.xs(1, level=level).sum()
t = [K.sum(level=list(range(i+1)))
for i in range(num_rounds)]
s = [num_observed(Kx) for Kx in t]
k = pd.concat(t)
return n, k, s
n, k, s = make_stats(data_sb, 3)
n
k
s
Ns = np.arange(s[2], 1000, 5)
prior_N = Pmf(1, Ns)
prior_N.index.name = 'N'
prior_N.shape
probs0 = np.linspace(0.5, 1.0, 51)
prior_p0 = Pmf(1, probs0)
prior_p0.index.name = 'p0'
prior_p0.head()
probs1 = np.linspace(0.1, 0.5, 41)
prior_p1 = Pmf(1, probs1)
prior_p1.index.name = 'p1'
prior_p1.head()
probs2 = np.linspace(0.1, 0.4, 31)
prior_p2 = Pmf(1, probs2)
prior_p2.index.name = 'p2'
prior_p2.head()
def make_joint3(prior0, prior1, prior2):
joint2 = make_joint(prior0, prior1)
joint2_pmf = Pmf(joint2.transpose().stack())
joint3 = make_joint(prior2, joint2_pmf)
return joint3
joint_prior = make_joint3(prior_p0, prior_p1, prior_N)
joint_prior.head()
likelihood = joint_prior.copy()
Ns = joint_prior.columns
for (p0, p1) in joint_prior.index:
like0 = binom.pmf(k[1], Ns, p0)
k0 = Ns - s[0]
like1 = binom.pmf(k[0,1], k0, p1) * binom.pmf(k[1,1], k[1], p1)
likelihood.loc[p0, p1] = like0 * like1
likelihood.to_numpy().sum()
from utils import normalize
joint_posterior = joint_prior * likelihood
normalize(joint_posterior)
joint_posterior.shape
from utils import marginal
posterior_N = marginal(joint_posterior, 0)
posterior_N.plot()
decorate(xlabel='N',
ylabel='PDF',
title='Posterior for N after two rounds')
posterior_N.mean(), posterior_N.credible_interval(0.9)
posterior_pmf = marginal(joint_posterior, 1)
posterior_pmf.head()
posterior_joint_ps = posterior_pmf.unstack().transpose()
posterior_joint_ps.head()
plot_contour(posterior_joint_ps)
decorate(title='Posterior joint distribution for p1 and p2')
posterior_p1 = marginal(posterior_joint_ps, 0)
posterior_p2 = marginal(posterior_joint_ps, 1)
posterior_p1.plot(label='p1')
posterior_p2.plot(label='p2')
decorate(xlabel='Probability of finding a bug',
ylabel='PDF',
title='Posterior marginal distributions of p1 and p2')
joint3 = make_joint(prior_p2, posterior_pmf)
joint3.head()
joint3_pmf = Pmf(joint3.stack())
joint3_pmf.head()
prior4 = make_joint(posterior_N, joint3_pmf)
prior4.head()
prior4.shape
joint2 = make_joint(posterior_N, prior_p2)
joint2.head()
like2 = joint2.copy()
Ns = joint2.columns
for p2 in joint2.index:
k00 = Ns - s[1]
like = (binom.pmf(k[0,0,1], k00, p2) *
binom.pmf(k[0,1,1], k[0,1], p2) *
binom.pmf(k[1,0,1], k[1,0], p2) *
binom.pmf(k[1,1,1], k[1,1], p2))
like2.loc[p2] = like
like2.to_numpy().sum()
like2.head()
like4 = prior4.copy()
for (p0, p1, p2) in prior4.index:
like4.loc[p0, p1, p2] = like2.loc[p2]
like4.to_numpy().sum()
posterior4 = prior4 * like4
normalize(posterior4)
marginal_N = marginal(posterior4, 0)
marginal_N.plot()
decorate(xlabel='N',
ylabel='PDF',
title='Posterior for N after three rounds')
marginal_N.mean(), marginal_N.credible_interval(0.9)
posterior_p012 = marginal(posterior4, 1)
posterior_p012.unstack().head()
posterior_p2 = marginal(posterior_p012.unstack(), 0)
posterior_p2.plot()
posterior_p01 = marginal(posterior_p012.unstack(), 1)
joint_plot(posterior_p01.unstack().transpose())
data4 = [-10000, 10, 182, 8, 74, 7, 20, 14, 709, 12, 650, 46, 104, 18, 157, 58]
num_observed(data4)
n, k, s = make_stats(data4, 4)
n
k
s
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction
Step2: So that's the distribution of k given N, K, and n.
Step3: So that's our prior.
Step4: We can compute the posterior in the usual way.
Step5: And here's what it looks like.
Step6: The most likely value is 109.
Step7: But the distribution is skewed to the right, so the posterior mean is substantially higher.
Step9: Two parameter model
Step10: Two parameters better than one?
Step11: The Lincoln index problem
Step13: Unknown probabilities
Step14: Chao et al
Step15: Spina bifida
Step16: Diabetes
|
3,686
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Denis Engemann <denis.engemann@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet
from mne.stats import f_threshold_mway_rm, f_mway_rm, fdr_correction
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
tmin, tmax = -0.2, 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
include = []
raw.info['bads'] += ['MEG 2443'] # bads
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, include=include, exclude='bads')
ch_name = 'MEG 1332'
# Load conditions
reject = dict(grad=4000e-13, eog=150e-6)
event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
picks=picks, baseline=(None, 0), preload=True,
reject=reject)
epochs.pick_channels([ch_name]) # restrict example to one channel
epochs.equalize_event_counts(event_id)
# Factor to down-sample the temporal dimension of the TFR computed by
# tfr_morlet.
decim = 2
freqs = np.arange(7, 30, 3) # define frequencies of interest
n_cycles = freqs / freqs[0]
zero_mean = False # don't correct morlet wavelet to be of mean zero
# To have a true wavelet zero_mean should be True but here for illustration
# purposes it helps to spot the evoked response.
epochs_power = list()
for condition in [epochs[k] for k in event_id]:
this_tfr = tfr_morlet(condition, freqs, n_cycles=n_cycles,
decim=decim, average=False, zero_mean=zero_mean,
return_itc=False)
this_tfr.apply_baseline(mode='ratio', baseline=(None, 0))
this_power = this_tfr.data[:, 0, :, :] # we only have one channel.
epochs_power.append(this_power)
n_conditions = len(epochs.event_id)
n_replications = epochs.events.shape[0] // n_conditions
factor_levels = [2, 2] # number of levels in each factor
effects = 'A*B' # this is the default signature for computing all effects
# Other possible options are 'A' or 'B' for the corresponding main effects
# or 'A:B' for the interaction effect only (this notation is borrowed from the
# R formula language)
n_freqs = len(freqs)
times = 1e3 * epochs.times[::decim]
n_times = len(times)
data = np.swapaxes(np.asarray(epochs_power), 1, 0)
# reshape last two dimensions in one mass-univariate observation-vector
data = data.reshape(n_replications, n_conditions, n_freqs * n_times)
# so we have replications * conditions * observations:
print(data.shape)
fvals, pvals = f_mway_rm(data, factor_levels, effects=effects)
effect_labels = ['modality', 'location', 'modality by location']
# let's visualize our effects by computing f-images
for effect, sig, effect_label in zip(fvals, pvals, effect_labels):
plt.figure()
# show naive F-values in gray
plt.imshow(effect.reshape(8, 211), cmap=plt.cm.gray, extent=[times[0],
times[-1], freqs[0], freqs[-1]], aspect='auto',
origin='lower')
# create mask for significant Time-frequency locations
effect = np.ma.masked_array(effect, [sig > .05])
plt.imshow(effect.reshape(8, 211), cmap='RdBu_r', extent=[times[0],
times[-1], freqs[0], freqs[-1]], aspect='auto',
origin='lower')
plt.colorbar()
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title(r"Time-locked response for '%s' (%s)" % (effect_label, ch_name))
plt.show()
effects = 'A:B'
def stat_fun(*args):
return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels,
effects=effects, return_pvals=False)[0]
# The ANOVA returns a tuple f-values and p-values, we will pick the former.
pthresh = 0.001 # set threshold rather high to save some time
f_thresh = f_threshold_mway_rm(n_replications, factor_levels, effects,
pthresh)
tail = 1 # f-test, so tail > 0
n_permutations = 256 # Save some time (the test won't be too sensitive ...)
T_obs, clusters, cluster_p_values, h0 = mne.stats.permutation_cluster_test(
epochs_power, stat_fun=stat_fun, threshold=f_thresh, tail=tail, n_jobs=1,
n_permutations=n_permutations, buffer_size=None)
good_clusters = np.where(cluster_p_values < .05)[0]
T_obs_plot = np.ma.masked_array(T_obs,
np.invert(clusters[np.squeeze(good_clusters)]))
plt.figure()
for f_image, cmap in zip([T_obs, T_obs_plot], [plt.cm.gray, 'RdBu_r']):
plt.imshow(f_image, cmap=cmap, extent=[times[0], times[-1],
freqs[0], freqs[-1]], aspect='auto',
origin='lower')
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title("Time-locked response for 'modality by location' (%s)\n"
" cluster-level corrected (p <= 0.05)" % ch_name)
plt.show()
mask, _ = fdr_correction(pvals[2])
T_obs_plot2 = np.ma.masked_array(T_obs, np.invert(mask))
plt.figure()
for f_image, cmap in zip([T_obs, T_obs_plot2], [plt.cm.gray, 'RdBu_r']):
plt.imshow(f_image, cmap=cmap, extent=[times[0], times[-1],
freqs[0], freqs[-1]], aspect='auto',
origin='lower')
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title("Time-locked response for 'modality by location' (%s)\n"
" FDR corrected (p <= 0.05)" % ch_name)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: We have to make sure all conditions have the same counts, as the ANOVA
Step3: Create TFR representations for all conditions
Step4: Setup repeated measures ANOVA
Step5: Now we'll assemble the data matrix and swap axes so the trial replications
Step6: While the iteration scheme used above for assembling the data matrix
Step7: Account for multiple comparisons using FDR versus permutation clustering test
Step8: A stat_fun must deal with a variable number of input arguments.
Step9: Create new stats image with only significant clusters
Step10: Now using FDR
|
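The `fdr_correction` step in the row above applies Benjamini–Hochberg control of the false discovery rate. A hedched, dependency-free sketch of the procedure (the helper name `benjamini_hochberg` is ours, not MNE's):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    # Benjamini-Hochberg: sort p-values, find the largest rank k with
    # p_(k) <= k/m * alpha, and reject the k smallest p-values.
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank / m * alpha:
            k = rank
    reject = [False] * m
    for idx in order[:k]:
        reject[idx] = True
    return reject
```

Note the "largest rank" rule: a p-value below its own threshold can still be rejected even if a smaller p-value sits above its threshold, which is what distinguishes BH from a simple step-down test.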
3,687
|
<ASSISTANT_TASK:>
Python Code:
import ckanapi
from datetime import datetime
import json
import os
import requests
from slugify import slugify
from harvest_helpers import *
from secret import CKAN, ARCGIS
print("The ARCGIS REST service endpoint QC lives at {0}".format(ARCGIS["SLIPFUTURE"]["url"]))
print("The catalogue \"ca\" lives at {0}".format(CKAN["ca"]["url"]))
print("The catalogue \"cb\" lives at {0}".format(CKAN["cb"]["url"]))
print("The catalogue \"ct\" lives at {0}".format(CKAN["ct"]["url"]))
## enable one of:
#ckan = ckanapi.RemoteCKAN(CKAN["ct"]["url"], apikey=CKAN["ct"]["key"])
ckan = ckanapi.RemoteCKAN(CKAN["ca"]["url"], apikey=CKAN["ca"]["key"])
#ckan = ckanapi.RemoteCKAN(CKAN["cb"]["url"], apikey=CKAN["cb"]["key"])
print("Using CKAN {0}".format(ckan.address))
baseurl = ARCGIS["SLIPFUTURE"]["url"]
folders = ARCGIS["SLIPFUTURE"]["folders"]
print("The base URL {0} contains folders {1}\n".format(baseurl, str(folders)))
services = get_arc_services(baseurl, folders[0])
print("The services in folder {0} are:\n{1}\n".format(str(folders), str(services)))
service_url = services[0]
mrwa = get_arc_servicedict(services[0])
print("Service {0}\ncontains layers and extensions:\n{1}".format(services[0], mrwa))
harvest_arcgis_service(services[0],
ckan,
owner_org_id = ckan.action.organization_show(id="mrwa")["id"],
author = "Main Roads Western Australia",
author_email = "irissupport@mainroads.wa.gov.au",
debug=False)
a = ["SLIP Classic", "Harvested"]
a.append("test")
a
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup
Step2: Harvest
|
3,688
|
<ASSISTANT_TASK:>
Python Code:
import os
import requests
from datetime import datetime
from clint.textui import progress
import pandas
pandas.set_option('display.float_format', lambda x: '%.2f' % x)
pandas.set_option('display.max_columns', None)
import matplotlib.pyplot as plt
import matplotlib.dates as dates
%matplotlib inline
def download_csv_to_dataframe(name):
    """Accepts the name of a calaccess.download CSV and returns it as a pandas dataframe."""
path = os.path.join(os.getcwd(), '{}.csv'.format(name))
if not os.path.exists(path):
url = "http://calaccess.download/latest/{}.csv".format(name)
r = requests.get(url, stream=True)
with open(path, 'w') as f:
total_length = int(r.headers.get('content-length'))
for chunk in progress.bar(r.iter_content(chunk_size=1024), expected_size=(total_length/1024) + 1):
if chunk:
f.write(chunk)
f.flush()
return pandas.read_csv(path)
def remove_amended_filings(df):
    """Accepts a dataframe with FILING_ID and AMEND_ID fields.

    Returns only the highest amendment for each unique filing id.
    """
max_amendments = df.groupby('FILING_ID')['AMEND_ID'].agg("max").reset_index()
merged_df = pandas.merge(df, max_amendments, how='inner', on=['FILING_ID', 'AMEND_ID'])
print "Removed {} amendments".format(len(df)-len(merged_df))
print "DataFrame now contains {} rows".format(len(merged_df))
return merged_df
quarterly_df = download_csv_to_dataframe("smry_cd")
quarterly_df.info()
quarterly_df.head()
quarterly_df.groupby(['FORM_TYPE'])['FILING_ID'].agg(['count'])
quarterly_ie_filings = quarterly_df[quarterly_df['FORM_TYPE'] == 'F461']
print len(quarterly_ie_filings)
quarterly_ie_filings.groupby(['LINE_ITEM'])['FILING_ID'].agg(['count'])
quarterly_ie_totals = quarterly_ie_filings[quarterly_ie_filings['LINE_ITEM'] == '3']
print len(quarterly_ie_totals)
real_ie_totals = remove_amended_filings(quarterly_ie_totals)
itemized_df = download_csv_to_dataframe("expn_cd")
itemized_df.info()
itemized_ies = itemized_df[itemized_df['EXPN_CODE'] == 'IND']
print len(itemized_ies)
real_itemized_ies = remove_amended_filings(itemized_ies)
real_itemized_ies.sort_values('EXPN_DATE', ascending=False).head()
late_df = download_csv_to_dataframe("s496_cd")
late_df['EXP_DATE'] = pandas.to_datetime(
late_df['EXP_DATE'],
errors="coerce"
)
late_df.info()
late_df.head()
late_df.groupby(['REC_TYPE'])['FILING_ID'].agg(['count'])
late_df.groupby(['FORM_TYPE'])['FILING_ID'].agg(['count'])
late_df.groupby(['LINE_ITEM'])['FILING_ID'].agg(['count'])
real_late_filings = remove_amended_filings(late_df)
real_late_filings[real_late_filings['LINE_ITEM'] == 104]
longest_late_filing = real_late_filings[real_late_filings['FILING_ID'] == 1717649]
longest_late_filing
real_late_filings['AMOUNT'].sum()
def trim_to_year(row):
try:
return row['EXP_DATE'].year
except TypeError:
        return float("nan")  # pandas has no NaN attribute; return a plain float NaN
real_late_filings["year"] = real_late_filings.apply(trim_to_year, axis=1)
late_by_year = real_late_filings.groupby('year')['AMOUNT'].agg('sum')
late_by_year = late_by_year.to_frame('sum').reset_index()
fig = plt.figure(1, figsize=(16,8))
ax1 = fig.add_subplot(211)
ax1.plot(late_by_year['year'], late_by_year['sum'])
ax1.set_title('Independent expenditure spending')
ax1.set_xlabel('Year')
ax1.set_ylabel('Sum')
def trim_to_month(row):
try:
return datetime(year=row['EXP_DATE'].year, month=row['EXP_DATE'].month, day=1)
except TypeError:
return pandas.NaT
real_late_filings["month"] = real_late_filings.apply(trim_to_month, axis=1)
late_by_month = real_late_filings.groupby('month')['AMOUNT'].agg('sum')
late_by_month = late_by_month.to_frame('sum').reset_index()
fig = plt.figure(1, figsize=(16,8))
ax1 = fig.add_subplot(211)
ax1.plot(late_by_month['month'], late_by_month['sum'])
ax1.set_title('Independent expenditure spending')
ax1.set_xlabel('Month')
ax1.set_ylabel('Sum')
by_description = real_late_filings.groupby('EXPN_DSCR')['AMOUNT'].agg('sum')
by_description = by_description.to_frame('sum').reset_index()
by_description.sort_values("sum", ascending=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Independent expenditures experiments
Step3: Download quarterly filings
Step4: Basic information about the file
Step5: Frequency counts on the fields
Step6: Filter down to only Form 461 filings
Step7: Check what line items are recorded for this form
Step8: Here's the paper form where those five line-items can be seen in the right-hand summary section. It looks to be that line three is the total made during the current reporting period.
Step9: Prepare the table for analysis
Step10: Download itemized independent expenditures
Step11: Download late independent expenditure filings
Step12: Convert the date field to a datetime object
Step13: Basic information about the file
Step14: Frequency counts on the fields
Step15: Preparing the file for analysis
Step16: Figure out what to do with the different line numbers
Step17: That filing can be reviewed here. The expenditure count is 104, which matches this data. There are three late contributions at the bottom that do not appear in this table and must be recorded elsewhere.
Step18: Spending by year
Step19: Spending by month
Step20: Summing up the expenditure types
|
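`remove_amended_filings` in the row above keeps only the latest amendment per filing via a groupby-max plus inner merge. The same idea in a dependency-free sketch, using toy rows with hypothetical amounts:

```python
# Keep only the row with the highest AMEND_ID for each FILING_ID.
rows = [
    {"FILING_ID": 1, "AMEND_ID": 0, "AMOUNT": 100},
    {"FILING_ID": 1, "AMEND_ID": 1, "AMOUNT": 150},
    {"FILING_ID": 2, "AMEND_ID": 0, "AMOUNT": 75},
]
latest = {}
for row in rows:
    fid = row["FILING_ID"]
    if fid not in latest or row["AMEND_ID"] > latest[fid]["AMEND_ID"]:
        latest[fid] = row
deduped = sorted(latest.values(), key=lambda r: r["FILING_ID"])
```

Summing amounts before this step would double-count every amended filing, which is why the notebook dedupes before any aggregation.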
3,689
|
<ASSISTANT_TASK:>
Python Code:
from keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images.shape
len(train_labels)
train_labels
test_images.shape
len(test_labels)
test_labels
from keras import models
from keras import layers
network = models.Sequential()
network.add(layers.Dense(512, activation='relu', input_shape=(28 * 28,)))
network.add(layers.Dense(10, activation='softmax'))
network.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
train_images = train_images.reshape((60000, 28 * 28))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28 * 28))
test_images = test_images.astype('float32') / 255
from keras.utils import to_categorical
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
network.fit(train_images, train_labels, epochs=5, batch_size=128)
test_loss, test_acc = network.evaluate(test_images, test_labels)
print('test_acc:', test_acc)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: train_images and train_labels form the "training set", the data that the model will learn from. The model will then be tested on the
Step2: Let's have a look at the test data
Step3: Our workflow will be as follow
Step4: The core building block of neural networks is the "layer", a data-processing module which you can conceive as a "filter" for data. Some
Step5: Before training, we will preprocess our data by reshaping it into the shape that the network expects, and scaling it so that all values are in
Step6: We also need to categorically encode the labels, a step which we explain in chapter 3
Step7: We are now ready to train our network, which in Keras is done via a call to the fit method of the network
Step8: Two quantities are being displayed during training
|
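`to_categorical` in the row above one-hot encodes the integer labels before training. A minimal sketch of what it produces (the helper name `to_one_hot` is ours):

```python
def to_one_hot(labels, num_classes):
    # Each integer label y becomes a vector with a 1.0 at index y.
    out = []
    for y in labels:
        row = [0.0] * num_classes
        row[y] = 1.0
        out.append(row)
    return out
```

This is what makes `categorical_crossentropy` applicable; with raw integer labels one would use `sparse_categorical_crossentropy` instead.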
3,690
|
<ASSISTANT_TASK:>
Python Code:
import berrl as bl
import numpy as np
import pandas as pd
apikey='pk.eyJ1IjoibXVycGh5MjE0IiwiYSI6ImNpam5kb3puZzAwZ2l0aG01ZW1uMTRjbnoifQ.5Znb4MArp7v3Wwrn6WFE6A'
data=pd.read_csv('wv_traffic_fatals.csv')
#data=data[data.CNTYNAME=='Clay County']
a=bl.make_points(data,list=True)
bl.parselist(a,'fatals.geojson')
# returns url to be used in show() function
url=bl.loadparsehtml(['fatals.geojson'],apikey,frame=True)
bl.show(url)
# getting only the fatalities on certain routes
roadways=bl.get_filetype('roadways','csv')
totaluniques=[]
count=0
filenames=[]
for row in roadways:
count+=1
temp=bl.map_table(row,6)
uniques=np.unique(temp['GEOHASH']).tolist()
totaluniques+=uniques
temp['color']='light green'
a=bl.make_line(temp,list=True)
bl.parselist(a,str(count)+'.geojson')
filenames.append(str(count)+'.geojson')
totaluniques=np.unique(totaluniques)
# mapping all traffic fatals in WV to a geohash
data=bl.map_table(data,6,list=True)
matched=[]
# getting matching uniques
for row in bl.df2list(data):
oldrow=row
for row in totaluniques:
if oldrow[-1]==row:
matched.append(row)
# getting point
for row in matched:
count+=1
temp=data[data.GEOHASH==row]
temp['color']='red'
a=bl.make_points(temp,list=True)
bl.parselist(a,str(count)+'.geojson')
filenames.append(str(count)+'.geojson')
newurl=bl.loadparsehtml(filenames,apikey,colorkey='color',frame=True)
bl.show(newurl)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Mapping all fatalities and getting unique hashs for each
Step2: Showing the new url made with fatalities along certain routes
|
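The matching loop in the row above compares every fatality geohash against every roadway geohash; the same membership test is O(1) per point with a set. A sketch with hypothetical hashes:

```python
# Hypothetical geohash cells covering the mapped routes.
road_hashes = {"dqcjq", "dqcjr", "dqcjw"}
fatal_points = [("a", "dqcjq"), ("b", "zzzzz"), ("c", "dqcjw")]
on_route = [pid for pid, gh in fatal_points if gh in road_hashes]
```

For large roadway files this turns the quadratic nested loop into a single linear pass.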
3,691
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title Install { display-mode: "form" }
TF_Installation = 'System' #@param ['TF Nightly', 'TF Stable', 'System']
if TF_Installation == 'TF Nightly':
!pip install -q --upgrade tf-nightly
print('Installation of `tf-nightly` complete.')
elif TF_Installation == 'TF Stable':
!pip install -q --upgrade tensorflow
print('Installation of `tensorflow` complete.')
elif TF_Installation == 'System':
pass
else:
raise ValueError('Selection Error: Please select a valid '
'installation option.')
#@title Install { display-mode: "form" }
TFP_Installation = "System" #@param ["Nightly", "Stable", "System"]
if TFP_Installation == "Nightly":
!pip install -q tfp-nightly
print("Installation of `tfp-nightly` complete.")
elif TFP_Installation == "Stable":
!pip install -q --upgrade tensorflow-probability
print("Installation of `tensorflow-probability` complete.")
elif TFP_Installation == "System":
pass
else:
raise ValueError("Selection Error: Please select a valid "
"installation option.")
import numpy as np
import arviz as az
import pandas as pd
import tensorflow as tf
import tensorflow_probability as tfp
import scipy.stats as stats
# visualization
import matplotlib.pyplot as plt
# aliases
tfd = tfp.distributions
az.style.use('seaborn-colorblind')
# define a list of constants
# divide by ways each value can occur
ways = tf.constant([0., 3, 8, 9, 0])
new_ways = ways / tf.reduce_sum(ways)
new_ways
# probability of 6 successes in 9 trials with 0.5 probability
tfd.Binomial(total_count=9, probs=0.5).prob(6)
# define grid
n_points = 20 # change to an odd number for Code 2.5 graphs to
# match book examples in Figure 2.6
p_grid = tf.linspace(start=0., stop=1., num=n_points)
#define prior
prior = tf.ones([n_points])
# compute likelihood at each value in grid
likelihood = tfd.Binomial(total_count=9, probs=p_grid).prob(6)
# compute product of likelihood and prior
unstd_posterior = likelihood * prior
# standardize the posterior, so it sums to 1
posterior = unstd_posterior / tf.reduce_sum(unstd_posterior)
posterior
_, ax = plt.subplots(figsize=(9, 4))
ax.plot(p_grid, posterior, "-o")
ax.set(
xlabel="probability of water",
ylabel="posterior probability",
title="20 points");
first_prior = tf.where(condition=p_grid < 0.5, x=0., y=1)
second_prior = tf.exp(-5 * abs(p_grid - 0.5))
_, axes = plt.subplots(nrows=2, ncols=2, figsize=(12, 4),
constrained_layout=True)
axes[0, 0].plot(p_grid, first_prior)
axes[0, 0].set_title('First prior')
axes[1, 0].plot(p_grid, first_prior * likelihood)
axes[1, 0].set_title('First posterior')
axes[0, 1].plot(p_grid, second_prior)
axes[0, 1].set_title('Second prior')
axes[1, 1].plot(p_grid, second_prior * likelihood)
axes[1, 1].set_title('Second posterior');
W = 6
L = 3
dist = tfd.JointDistributionNamed({
"water": lambda probability: tfd.Binomial(total_count=W + L,
probs=probability),
"probability": tfd.Uniform(low=0., high=1.)
})
def neg_log_prob(x):
return tfp.math.value_and_gradient(
lambda p: -dist.log_prob(
water=W,
probability=tf.clip_by_value(p[-1], 0., 1.)),
x,
)
results = tfp.optimizer.bfgs_minimize(neg_log_prob, initial_position=[0.5])
assert results.converged
approximate_posterior = tfd.Normal(
results.position,
tf.sqrt(results.inverse_hessian_estimate),
)
print(
"mean:", approximate_posterior.mean(),
"\nstandard deviation: ", approximate_posterior.stddev(),
)
_, ax = plt.subplots(figsize=(9, 4))
x = tf.linspace(0., 1., num=101)
ax.plot(x, tfd.Beta(W + 1, L + 1).prob(x), label='Analytic posterior')
# values obained from quadratic approximation
ax.plot(x, tf.squeeze(approximate_posterior.prob(x)), "--",
label='Quadratic approximation')
ax.set(
xlabel='Probability of water',
ylabel='Posterior probability',
title='Comparing quadratic approximation to analytic posterior'
)
ax.legend();
@tf.function
def do_sampling():
def get_model_log_prob(probs):
return tfd.Binomial(total_count=W + L, probs=probs).log_prob(W)
sampling_kernel = tfp.mcmc.RandomWalkMetropolis(get_model_log_prob)
return tfp.mcmc.sample_chain(
num_results=5000,
current_state=.5,
kernel=sampling_kernel,
num_burnin_steps=500,
trace_fn=None,
)
samples = do_sampling()
_, ax = plt.subplots(figsize=(9, 4))
az.plot_kde(samples, label="Metropolis approximation", ax=ax)
x = tf.linspace(0., 1., num=100)
ax.plot(x, tfd.Beta(W + 1, L + 1).prob(x), "C1", label="True posterior")
ax.legend();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Chapter 2 - Small Worlds and Large Worlds
Step2: 2.1.3. From counts to probability
Step3: 2.3.2.1. Observed variables
Step4: 2.4.3.Grid Approximation
Step5: Code 2.4
Step6: Code 2.3 and 2.4 using TFP distributions (OPTIONAL)
Step7: 2.4.4. Quadratic Approximation
Step8: The results object itself has a lot of information about the optimization process. The estimate is called the position, and we get the standard error from the inverse_hessian_estimate.
Step9: Let's compare the mean and standard deviation from the quadratic approximation with an analytical approach based on the Beta distribution.
Step10: 2.4.5. Markov chain Monte Carlo
Step11: Code 2.9
|
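The grid approximation in Code 2.3–2.4 above needs nothing beyond a binomial PMF; a dependency-free sketch using only the standard library (`math.comb` requires Python 3.8+):

```python
import math

def binom_pmf(k, n, p):
    # Probability of k successes in n independent trials with success prob p.
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

n_points = 20
p_grid = [i / (n_points - 1) for i in range(n_points)]
prior = [1.0] * n_points                              # flat prior
likelihood = [binom_pmf(6, 9, p) for p in p_grid]     # 6 waters in 9 tosses
unstd = [lk * pr for lk, pr in zip(likelihood, prior)]
total = sum(unstd)
posterior = [u / total for u in unstd]                # normalize to sum to 1
```

With a flat prior the posterior peaks near p = 6/9, matching the TFP version in the row above.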
3,692
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'mpiesm-1-2-ham', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
3,693
|
<ASSISTANT_TASK:>
Python Code:
!wget -qN ftp://sidads.colorado.edu/pub/DATASETS/nsidc0611_seaice_age/data/2012/iceage-2012w19.bin
import numpy as np
filename = 'iceage-2012w19.bin'
data = np.fromfile(filename, dtype=np.uint8) # read the data as unsigned bytes
print (data.shape)
print(722 * 722)  # expected number of bytes for the 722 x 722 grid
data = data.reshape(722, 722)
import matplotlib.pyplot as plt
import matplotlib as mpl
%matplotlib inline
print (np.unique(data))
with mpl.rc_context(rc={'figure.figsize': (10,9), 'axes.grid':False}, ):
plt.imshow(data, cmap="jet")
ocean = '#78B4FF'
year1 = '#2400F5'
year2 = '#00F6FF'
year3 = '#15FF00'
year4 = '#FFC700'
year5 = 'r'
cmap = mpl.colors.ListedColormap([ocean, year1, year2, year3, year4, year5, 'black', 'brown'])
bounds = [ 0, 5, 10, 15, 20, 25, 254, 255, 256]
norm = mpl.colors.BoundaryNorm(bounds, cmap.N)
with mpl.rc_context(rc={'figure.figsize': (10,9), 'axes.grid':False}, ):
plt.imshow(data, cmap=cmap, norm=norm)
!rm iceage-2012w19.bin
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: read file into a numpy array
Step2: see that we loaded all of the data and it matches the size of the expected dataset
Step3: reshape the data to the size of the grid
Step4: Do some validation to ensure we have loaded the file correctly.
Step5: check the values in the file
Step6: plot of the raw data
Step7: Add some fancy colors.
Step8: clean up your downloaded file
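The ListedColormap/BoundaryNorm pairing in Step7 works by binning each raw byte value against `bounds` to pick a color slot. A minimal NumPy-only sketch of that binning (illustrative, not part of the lab code — `np.digitize` mirrors what `BoundaryNorm` does internally):

```python
import numpy as np

# Same bounds as the ice-age plot: ages 1-5 fall in the first six bins,
# 254 marks land/coast, 255 is the fill value.
bounds = [0, 5, 10, 15, 20, 25, 254, 255, 256]
values = np.array([0, 7, 12, 200, 254, 255])

# Each value maps to the index of the bin it falls into, i.e. the index
# of the color that BoundaryNorm would select from the ListedColormap.
bin_index = np.digitize(values, bounds) - 1
print(bin_index.tolist())  # -> [0, 1, 2, 5, 6, 7]
```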
|
3,694
|
<ASSISTANT_TASK:>
Python Code:
from datetime import datetime
import os
REGION = 'us-central1'
PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
BUCKET = PROJECT
MODEL_TYPE = "cnn" # "linear", "dnn", "dnn_dropout", or "cnn"
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
%%writefile mnist_models/trainer/task.py
import argparse
import json
import os
import sys
from . import model
def _parse_arguments(argv):
    """Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_type',
help='Which model type to use',
type=str, default='linear')
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=10)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=100)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, default='mnist_models/')
return parser.parse_known_args(argv)
def main():
    """Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# Configure path for hyperparameter tuning.
trial_id = json.loads(
os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
    output_path = args.job_dir if not trial_id else args.job_dir + '/' + trial_id
model_layers = model.get_layers(args.model_type)
image_model = model.build_model(model_layers, args.job_dir)
    model_history = model.train_and_evaluate(
        image_model, args.epochs, args.steps_per_epoch, output_path)
if __name__ == '__main__':
main()
%%writefile mnist_models/trainer/util.py
import tensorflow as tf
def scale(image, label):
    """Scales images from a 0-255 int range to a 0-1 float range"""
image = tf.cast(image, tf.float32)
image /= 255
image = tf.expand_dims(image, -1)
return image, label
def load_dataset(
data, training=True, buffer_size=5000, batch_size=100, nclasses=10):
    """Loads MNIST dataset into a tf.data.Dataset"""
(x_train, y_train), (x_test, y_test) = data
x = x_train if training else x_test
y = y_train if training else y_test
# One-hot encode the classes
y = tf.keras.utils.to_categorical(y, nclasses)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(scale).batch(batch_size)
if training:
dataset = dataset.shuffle(buffer_size).repeat()
return dataset
%%writefile mnist_models/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
from . import util
# Image Variables
WIDTH = 28
HEIGHT = 28
def get_layers(
model_type,
nclasses=10,
hidden_layer_1_neurons=400,
hidden_layer_2_neurons=100,
dropout_rate=0.25,
num_filters_1=64,
kernel_size_1=3,
pooling_size_1=2,
num_filters_2=32,
kernel_size_2=3,
pooling_size_2=2):
    """Constructs layers for a keras model based on a dict of model types."""
model_layers = {
'linear': [
Flatten(),
Dense(nclasses),
Softmax()
],
'dnn': [
# TODO 1: Your code here
],
'dnn_dropout': [
# TODO 2: Your code here
],
'cnn': [
# TODO 3: Your code here
]
}
return model_layers[model_type]
def build_model(layers, output_dir):
    """Compiles keras model for image classification."""
model = Sequential(layers)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):
    """Compiles keras model and loads data into it for training."""
mnist = tf.keras.datasets.mnist.load_data()
train_data = util.load_dataset(mnist)
validation_data = util.load_dataset(mnist, training=False)
callbacks = []
if output_dir:
tensorboard_callback = TensorBoard(log_dir=output_dir)
callbacks = [tensorboard_callback]
history = model.fit(
train_data,
validation_data=validation_data,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2,
callbacks=callbacks)
if output_dir:
export_path = os.path.join(output_dir, 'keras_export')
model.save(export_path, save_format='tf')
return history
!python3 -m mnist_models.trainer.test
current_time = datetime.now().strftime("%Y%m%d_%H%M%S")
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_models/models/{}_{}/".format(
model_type, current_time)
%%bash
python3 -m mnist_models.trainer.task \
--job-dir=$JOB_DIR \
--epochs=5 \
--steps_per_epoch=50 \
--model_type=$MODEL_TYPE
%%writefile mnist_models/setup.py
from setuptools import find_packages
from setuptools import setup
setup(
name='mnist_trainer',
version='0.1',
packages=find_packages(),
include_package_data=True,
description='MNIST model training application.'
)
%%bash
cd mnist_models
python ./setup.py sdist --formats=gztar
cd ..
gsutil cp mnist_models/dist/mnist_trainer-0.1.tar.gz gs://${BUCKET}/mnist/
current_time = datetime.now().strftime("%Y%m%d_%H%M%S")
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time)
os.environ["JOB_NAME"] = "mnist_{}_{}".format(
model_type, current_time)
%%bash
echo $JOB_DIR $REGION $JOB_NAME
PYTHON_PACKAGE_URIS=gs://${BUCKET}/mnist/mnist_trainer-0.1.tar.gz
MACHINE_TYPE=n1-standard-4
REPLICA_COUNT=1
PYTHON_PACKAGE_EXECUTOR_IMAGE_URI="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-3:latest"
PYTHON_MODULE=trainer.task
WORKER_POOL_SPEC="machine-type=$MACHINE_TYPE,\
replica-count=$REPLICA_COUNT,\
executor-image-uri=$PYTHON_PACKAGE_EXECUTOR_IMAGE_URI,\
python-module=$PYTHON_MODULE"
gcloud ai custom-jobs create \
--region=${REGION} \
--display-name=$JOB_NAME \
--python-package-uris=$PYTHON_PACKAGE_URIS \
--worker-pool-spec=$WORKER_POOL_SPEC \
--args="--job-dir=$JOB_DIR,--model_type=$MODEL_TYPE"
%%bash
SAVEDMODEL_DIR=${JOB_DIR}keras_export
echo $SAVEDMODEL_DIR
gsutil ls $SAVEDMODEL_DIR
%%bash
TIMESTAMP=$(date -u +%Y%m%d_%H%M%S)
MODEL_DISPLAYNAME=mnist_$TIMESTAMP
ENDPOINT_DISPLAYNAME=mnist_endpoint_$TIMESTAMP
IMAGE_URI="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-3:latest"
SAVEDMODEL_DIR=${JOB_DIR}keras_export
echo $SAVEDMODEL_DIR
# Model
MODEL_RESOURCENAME=$(gcloud ai models upload \
--region=$REGION \
--display-name=$MODEL_DISPLAYNAME \
--container-image-uri=$IMAGE_URI \
--artifact-uri=$SAVEDMODEL_DIR \
--format="value(model)")
echo "MODEL_DISPLAYNAME=${MODEL_DISPLAYNAME}"
echo "MODEL_RESOURCENAME=${MODEL_RESOURCENAME}"
# Endpoint
ENDPOINT_RESOURCENAME=$(gcloud ai endpoints create \
--region=$REGION \
--display-name=$ENDPOINT_DISPLAYNAME \
--format="value(name)")
echo "ENDPOINT_DISPLAYNAME=${ENDPOINT_DISPLAYNAME}"
echo "ENDPOINT_RESOURCENAME=${ENDPOINT_RESOURCENAME}"
# Deployment
DEPLOYED_MODEL_DISPLAYNAME=${MODEL_DISPLAYNAME}_deployment
MACHINE_TYPE=n1-standard-2
gcloud ai endpoints deploy-model $ENDPOINT_RESOURCENAME \
--region=$REGION \
--model=$MODEL_RESOURCENAME \
--display-name=$DEPLOYED_MODEL_DISPLAYNAME \
--machine-type=$MACHINE_TYPE \
--min-replica-count=1 \
--max-replica-count=1 \
--traffic-split=0=100
import json, codecs
import tensorflow as tf
import matplotlib.pyplot as plt
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = # TODO 4: Your code here
json.dump(jsondata, codecs.open("test.json", "w", encoding = "utf-8"))
plt.imshow(test_image.reshape(HEIGHT, WIDTH));
!cat test.json
%%bash
ENDPOINT_RESOURCENAME=#Insert ENDPOINT_RESOURCENAME from above
gcloud ai endpoints predict $ENDPOINT_RESOURCENAME \
--region=$REGION \
--json-request=test.json
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Building a dynamic model
Step6: Next, group non-model functions into a util file to keep the model file simple. Use the scale and load_dataset functions to scale images from a 0-255 int range to a 0-1 float range and to load the MNIST dataset into a tf.data.Dataset.
Step10: Now you can code the models. The tf.keras API accepts an array of layers into a model object, so you can create a dictionary of layers based on the different model types you want to use. The mnist_models/trainer/model.py file has three functions
Step11: Local Training
Step12: Now that you know your models are working as expected, you can run them on Google Cloud within Vertex AI. First, run the trainer as a python module locally using the command line.
Step13: The cell below runs the local version of the code. The epochs and steps_per_epoch flag can be changed to run for longer or shorter, as defined in your mnist_models/trainer/task.py file.
Step14: Training on the cloud
Step15: Then, you can start the Vertex AI Custom Job using the pre-built container. You can pass your source distribution URI using the --python-package-uris flag.
Step16: After submitting the following job, view the status in Vertex AI > Training and select the Custom Jobs tab. Wait for the job to finish.
Step17: Deploying and predicting with model
Step18: To predict with the model, take one of the example images.
Step19: Finally, you can send it to the prediction service. The output will have a 1 at the index of the digit it is predicting. Congrats! You've completed the lab!
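For reference, the request body that `gcloud ai endpoints predict --json-request` expects wraps the pixel rows in an "instances" list (the standard Vertex AI online-prediction format). A hedged sketch of building such a payload — the all-zero image is a stand-in for the lab's `x_test[IMGNO]`:

```python
import json

# Build a 28x28 stand-in image and wrap it in the "instances" envelope.
HEIGHT, WIDTH = 28, 28
test_image = [[0.0] * WIDTH for _ in range(HEIGHT)]
jsondata = {"instances": [test_image]}

# Round-trip through JSON, as writing/reading test.json would.
payload = json.dumps(jsondata)
loaded = json.loads(payload)
print(len(loaded["instances"]), len(loaded["instances"][0]))  # -> 1 28
```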
|
3,695
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
sales = graphlab.SFrame('kc_house_data_small.gl/')
import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe[output]
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
def normalize_features(feature_matrix):
norms = np.linalg.norm(feature_matrix, axis=0)
return feature_matrix/norms, norms
(train_and_validation, test) = sales.random_split(.8, seed=1) # initial train/test split
(train, validation) = train_and_validation.random_split(.8, seed=1) # split training set into training and validation sets
feature_list = ['bedrooms',
'bathrooms',
'sqft_living',
'sqft_lot',
'floors',
'waterfront',
'view',
'condition',
'grade',
'sqft_above',
'sqft_basement',
'yr_built',
'yr_renovated',
'lat',
'long',
'sqft_living15',
'sqft_lot15']
features_train, output_train = get_numpy_data(train, feature_list, 'price')
features_test, output_test = get_numpy_data(test, feature_list, 'price')
features_valid, output_valid = get_numpy_data(validation, feature_list, 'price')
features_train, norms = normalize_features(features_train) # normalize training set features (columns)
features_test = features_test / norms # normalize test set by training set norms
features_valid = features_valid / norms # normalize validation set by training set norms
query_house = features_test[0]
query_house
house_10th = features_train[9]
def get_euclidean_distance(house1, house2):
delta = house1 - house2
return np.sqrt(np.sum(delta ** 2))
get_euclidean_distance(query_house, house_10th)
get_euclidean_distance(np.array([0,0]), np.array([3,4]))
np.array([2,2]) ** 2
[get_euclidean_distance(query_house, house) for house in features_train[:10]]
np.argmin([get_euclidean_distance(query_house, house) for house in features_train[:10]])
for i in xrange(3):
print features_train[i]-features_test[0]
# should print 3 vectors of length 18
print features_train[0:3] - features_test[0]
# verify that vectorization works
results = features_train[0:3] - features_test[0]
print results[0] - (features_train[0]-features_test[0])
# should print all 0's if results[0] == (features_train[0]-features_test[0])
print results[1] - (features_train[1]-features_test[0])
# should print all 0's if results[1] == (features_train[1]-features_test[0])
print results[2] - (features_train[2]-features_test[0])
# should print all 0's if results[2] == (features_train[2]-features_test[0])
diff = features_train - query_house
print diff[-1].sum() # sum of the feature differences between the query and last training house
# should print -0.0934339605842
print np.sum(diff**2, axis=1)[15] # take sum of squares across each row, and print the 16th sum
print np.sum(diff[15]**2) # print the sum of squares for the 16th row -- should be same as above
distances = np.sqrt(np.sum((features_train - query_house)**2, axis=1))
print distances[100] # Euclidean distance between the query house and the 101th training house
# should print 0.0237082324496
def compute_distances(features_train, query_house):
return np.sqrt(np.sum((features_train - query_house)**2, axis=1))
distances = compute_distances(features_train, features_test[2])
np.argmin(distances)
output_train[382]
def find_nearest_k(features_train, query_house, k):
distances = compute_distances(features_train, query_house)
argsorted = np.argsort(distances)
nearest_k = argsorted[:k]
return nearest_k, distances[nearest_k]
top_k, distances = find_nearest_k(features_train, features_test[2], 4)
top_k
distances
def predict_by_nearest_k(features_train, output_train, query_house, k):
distances = compute_distances(features_train, query_house)
argsorted = np.argsort(distances)
nearest_k = argsorted[:k]
return np.average(output_train[nearest_k])
predict_by_nearest_k(features_train, output_train, features_test[2], 4)
predict_by_nearest_k(features_train, output_train, features_test[2], 1)
def predict_all_by_nearest_k(features_train, output_train, query_houses, k):
return [predict_by_nearest_k(features_train, output_train, query_house, k) for query_house in query_houses]
predict_all_by_nearest_k(features_train, output_train, features_test[2:3], 4)
predictions = predict_all_by_nearest_k(features_train, output_train, features_test[:10], 10)
np.argmin(predictions)
predictions[np.argmin(predictions)]
def find_best_k(features_train, output_train, features_valid, output_valid, ks):
    rss_all = []
    for k in ks:
        predictions = predict_all_by_nearest_k(features_train, output_train, features_valid, k)
        errors = predictions - output_valid
        rss_all.append(np.dot(errors, errors))
    print rss_all
    return ks[np.argmin(rss_all)], rss_all  # also return the RSS values so they can be plotted below
best_k, rss_all = find_best_k(features_train, output_train, features_valid, output_valid, range(1, 16))
print best_k
import matplotlib.pyplot as plt
%matplotlib inline
kvals = range(1, 16)
plt.plot(kvals, rss_all,'bo-')
'{:.3E}'.format(67361678735491.5)
predictions = predict_all_by_nearest_k(features_train, output_train, features_test, 8)
errors = predictions - output_test
RSS = np.dot(errors, errors)
'{:.3E}'.format(RSS)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load in house sales data
Step2: Import useful functions from previous notebooks
Step3: We will also need the normalize_features() function from Week 5 that normalizes all feature columns to unit norm. Paste this function below.
Step4: Split data into training, test, and validation sets
Step5: Extract features and normalize
Step6: In computing distances, it is crucial to normalize features. Otherwise, for example, the sqft_living feature (typically on the order of thousands) would exert a much larger influence on distance than the bedrooms feature (typically on the order of ones). We divide each column of the training feature matrix by its 2-norm, so that the transformed column has unit norm.
Step7: Compute a single distance
Step8: Now print the 10th row (index 9) of the training feature matrix. Again, you get an 18-dimensional vector with components between 0 and 1.
Step9: QUIZ QUESTION
Step10: Compute multiple distances
Step11: QUIZ QUESTION
Step12: It is computationally inefficient to loop over computing distances to all houses in our training dataset. Fortunately, many of the Numpy functions can be vectorized, applying the same operation over multiple values or vectors. We now walk through this process.
Step13: The subtraction operator (-) in Numpy is vectorized as follows
Step14: Note that the output of this vectorized operation is identical to that of the loop above, which can be verified below
Step15: Aside
Step16: To test the code above, run the following cell, which should output a value -0.0934339605842
Step17: The next step in computing the Euclidean distances is to take these feature-by-feature differences in diff, square each, and take the sum over feature indices. That is, compute the sum of square feature differences for each training house (row in diff).
Step18: With this result in mind, write a single-line expression to compute the Euclidean distances between the query house and all houses in the training set. Assign the result to a variable distances.
Step19: To test the code above, run the following cell, which should output a value 0.0237082324496
Step20: Now you are ready to write a function that computes the distances from a query house to all training houses. The function should take two parameters
Step21: QUIZ QUESTIONS
Step22: Perform k-nearest neighbor regression
Step23: QUIZ QUESTION
Step24: Make a single prediction by averaging k nearest neighbor outputs
Step25: QUIZ QUESTION
Step26: Compare this predicted value using 4-nearest neighbors to the predicted value using 1-nearest neighbor computed earlier.
Step27: QUIZ QUESTION
Step28: Choosing the best value of k using a validation set
Step29: To visualize the performance as a function of k, plot the RSS on the VALIDATION set for each considered k value
Step30: QUIZ QUESTION
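The whole pipeline above — vectorized Euclidean distances, then averaging the outputs of the k nearest training rows — can be shown in a self-contained toy (all numbers are made up for illustration, not from the house-sales data):

```python
import numpy as np

# Tiny training set: 4 houses with 2 normalized features each.
features_train = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
output_train = np.array([100.0, 200.0, 300.0, 400.0])
query = np.array([0.1, 0.0])

# Vectorized distances from the query to every training row.
distances = np.sqrt(np.sum((features_train - query) ** 2, axis=1))
nearest = np.argsort(distances)[:2]          # indices of the 2 closest rows
prediction = np.mean(output_train[nearest])  # k-NN regression estimate
print(prediction)  # -> 150.0
```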
|
3,696
|
<ASSISTANT_TASK:>
Python Code:
ratings = pd.read_csv(path+'ratings.csv')
ratings.head()
len(ratings)
movie_names = pd.read_csv(path+'movies.csv').set_index('movieId')['title'].to_dict()
users = ratings.userId.unique()
movies = ratings.movieId.unique()
userid2idx = {o:i for i,o in enumerate(users)}
movieid2idx = {o:i for i,o in enumerate(movies)}
ratings.movieId = ratings.movieId.apply(lambda x: movieid2idx[x])
ratings.userId = ratings.userId.apply(lambda x: userid2idx[x])
user_min, user_max, movie_min, movie_max = (ratings.userId.min(),
ratings.userId.max(), ratings.movieId.min(), ratings.movieId.max())
user_min, user_max, movie_min, movie_max
n_users = ratings.userId.nunique()
n_movies = ratings.movieId.nunique()
n_users, n_movies
n_factors = 50
np.random.seed = 42
msk = np.random.rand(len(ratings)) < 0.8
trn = ratings[msk]
val = ratings[~msk]
g=ratings.groupby('userId')['rating'].count()
topUsers=g.sort_values(ascending=False)[:15]
g=ratings.groupby('movieId')['rating'].count()
topMovies=g.sort_values(ascending=False)[:15]
top_r = ratings.join(topUsers, rsuffix='_r', how='inner', on='userId')
top_r = top_r.join(topMovies, rsuffix='_r', how='inner', on='movieId')
pd.crosstab(top_r.userId, top_r.movieId, top_r.rating, aggfunc=np.sum)
user_in = Input(shape=(1,), dtype='int64', name='user_in')
u = Embedding(n_users, n_factors, input_length=1, W_regularizer=l2(1e-4))(user_in)
movie_in = Input(shape=(1,), dtype='int64', name='movie_in')
m = Embedding(n_movies, n_factors, input_length=1, W_regularizer=l2(1e-4))(movie_in)
x = merge([u, m], mode='dot')
x = Flatten()(x)
model = Model([user_in, movie_in], x)
model.compile(Adam(0.001), loss='mse')
model.summary()
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=1, verbose=2,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr=0.01
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=3, verbose=2,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr=0.001
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=6,verbose=2,
validation_data=([val.userId, val.movieId], val.rating))
def embedding_input(name, n_in, n_out, reg):
inp = Input(shape=(1,), dtype='int64', name=name)
return inp, Embedding(n_in, n_out, input_length=1, W_regularizer=l2(reg))(inp)
user_in, u = embedding_input('user_in', n_users, n_factors, 1e-4)
movie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4)
def create_bias(inp, n_in):
x = Embedding(n_in, 1, input_length=1)(inp)
return Flatten()(x)
ub = create_bias(user_in, n_users)
mb = create_bias(movie_in, n_movies)
x = merge([u, m], mode='dot')
x = Flatten()(x)
x = merge([x, ub], mode='sum')
x = merge([x, mb], mode='sum')
model = Model([user_in, movie_in], x)
model.compile(Adam(0.001), loss='mse')
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=1, verbose=2,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr=0.01
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=6, verbose=2,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr=0.001
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=10, verbose=2,
validation_data=([val.userId, val.movieId], val.rating))
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=5, verbose=2,
validation_data=([val.userId, val.movieId], val.rating))
model.save_weights(model_path+'bias.h5')
model.load_weights(model_path+'bias.h5')
model.predict([np.array([3]), np.array([6])])
g=ratings.groupby('movieId')['rating'].count()
topMovies=g.sort_values(ascending=False)[:2000]
topMovies = np.array(topMovies.index)
get_movie_bias = Model(movie_in, mb)
movie_bias = get_movie_bias.predict(topMovies)
movie_ratings = [(b[0], movie_names[movies[i]]) for i,b in zip(topMovies,movie_bias)]
sorted(movie_ratings, key=itemgetter(0))[:15]
sorted(movie_ratings, key=itemgetter(0), reverse=True)[:15]
get_movie_emb = Model(movie_in, m)
movie_emb = np.squeeze(get_movie_emb.predict([topMovies]))
movie_emb.shape
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
movie_pca = pca.fit(movie_emb.T).components_
fac0 = movie_pca[0]
movie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac0, topMovies)]
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
fac1 = movie_pca[1]
movie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac1, topMovies)]
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
fac2 = movie_pca[2]
movie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac2, topMovies)]
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
import sys
stdout, stderr = sys.stdout, sys.stderr # save notebook stdout and stderr
reload(sys)
sys.setdefaultencoding('utf-8')
sys.stdout, sys.stderr = stdout, stderr # restore notebook stdout and stderr
start=50; end=100
X = fac0[start:end]
Y = fac2[start:end]
plt.figure(figsize=(15,15))
plt.scatter(X, Y)
for i, x, y in zip(topMovies[start:end], X, Y):
plt.text(x,y,movie_names[movies[i]], color=np.random.rand(3)*0.7, fontsize=14)
plt.show()
user_in, u = embedding_input('user_in', n_users, n_factors, 1e-4)
movie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4)
x = merge([u, m], mode='concat')
x = Flatten()(x)
x = Dropout(0.3)(x)
x = Dense(70, activation='relu')(x)
x = Dropout(0.75)(x)
x = Dense(1)(x)
nn = Model([user_in, movie_in], x)
nn.compile(Adam(0.001), loss='mse')
nn.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=8, verbose=2,
validation_data=([val.userId, val.movieId], val.rating))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Just for display purposes, let's read in the movie names too.
Step2: We update the movie and user ids so that they are contiguous integers, which we want when using embeddings.
Step3: This is the number of latent factors in each embedding.
Step4: Randomly split into training and validation.
Step5: Create subset for Excel
Step6: Dot product
Step7: The best benchmarks are a bit over 0.9, so this model doesn't seem to be working that well...
Step8: This result is quite a bit better than the best benchmarks that we could find with a quick google search - so looks like a great approach!
Step9: We can use the model to generate predictions by passing a pair of ints - a user id and a movie id. For instance, this predicts that user #3 would really enjoy movie #6.
Step10: Analyze results
Step11: First, we'll look at the movie bias term. We create a 'model' - which in keras is simply a way of associating one or more inputs with one more more outputs, using the functional API. Here, our input is the movie id (a single id), and the output is the movie bias (a single float).
Step12: Now we can look at the top and bottom rated movies. These ratings are corrected for different levels of reviewer sentiment, as well as different types of movies that different reviewers watch.
Step13: We can now do the same thing for the embeddings.
Step14: Because it's hard to interpret 50 embeddings, we use PCA to simplify them down to just 3 vectors.
Step15: Here's the 1st component. It seems to be 'critically acclaimed' or 'classic'.
Step16: The 2nd is 'hollywood blockbuster'.
Step17: The 3rd is 'violent vs happy'.
Step18: We can draw a picture to see how various movies appear on the map of these components. This picture shows the 1st and 3rd components.
Step19: Neural net
|
3,697
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet, psd_multitaper
from mne.datasets import somato
data_path = somato.data_path()
raw_fname = data_path + '/MEG/somato/sef_raw_sss.fif'
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True, stim=False)
# Construct Epochs
event_id, tmin, tmax = 1, -1., 3.
baseline = (None, 0)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=baseline, reject=dict(grad=4000e-13, eog=350e-6),
preload=True)
epochs.resample(150., npad='auto') # resample to reduce computation time
epochs.plot_psd(fmin=2., fmax=40.)
epochs.plot_psd_topomap(ch_type='grad', normalize=True)
f, ax = plt.subplots()
psds, freqs = psd_multitaper(epochs, fmin=2, fmax=40, n_jobs=1)
psds = 10. * np.log10(psds)
psds_mean = psds.mean(0).mean(0)
psds_std = psds.mean(0).std(0)
ax.plot(freqs, psds_mean, color='k')
ax.fill_between(freqs, psds_mean - psds_std, psds_mean + psds_std,
color='k', alpha=.5)
ax.set(title='Multitaper PSD (gradiometers)', xlabel='Frequency',
ylabel='Power Spectral Density (dB)')
plt.show()
# define frequencies of interest (log-spaced)
freqs = np.logspace(*np.log10([6, 35]), num=8)
n_cycles = freqs / 2. # different number of cycle per frequency
power, itc = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles, use_fft=True,
return_itc=True, decim=3, n_jobs=1)
power.plot_topo(baseline=(-0.5, 0), mode='logratio', title='Average power')
power.plot([82], baseline=(-0.5, 0), mode='logratio', title=power.ch_names[82])
fig, axis = plt.subplots(1, 2, figsize=(7, 4))
power.plot_topomap(ch_type='grad', tmin=0.5, tmax=1.5, fmin=8, fmax=12,
baseline=(-0.5, 0), mode='logratio', axes=axis[0],
title='Alpha', show=False)
power.plot_topomap(ch_type='grad', tmin=0.5, tmax=1.5, fmin=13, fmax=25,
baseline=(-0.5, 0), mode='logratio', axes=axis[1],
title='Beta', show=False)
mne.viz.tight_layout()
plt.show()
power.plot_joint(baseline=(-0.5, 0), mode='mean', tmin=-.5, tmax=2,
timefreqs=[(.5, 10), (1.3, 8)])
itc.plot_topo(title='Inter-Trial coherence', vmin=0., vmax=1., cmap='Reds')
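Inter-trial coherence measures phase consistency across trials: at each channel/frequency/time point it is the magnitude of the mean unit phase vector, |mean_k exp(i·φ_k)|, ranging from 0 (random phase across trials) to 1 (perfect phase locking). A minimal NumPy sketch of that definition follows - this illustrates the formula, not MNE's internal implementation:

```python
import numpy as np

def inter_trial_coherence(tfr):
    """ITC from a complex time-frequency array of shape (n_trials, ...)."""
    phase_vectors = tfr / np.abs(tfr)      # unit-length complex phase vectors
    return np.abs(phase_vectors.mean(axis=0))

# perfectly phase-locked trials: identical phases -> ITC of 1 everywhere
locked = np.exp(1j * np.tile(np.linspace(0, np.pi, 8), (20, 1)))
print(inter_trial_coherence(locked))  # every entry is 1.0
```

With random phases the unit vectors cancel, so the ITC shrinks towards 0 as the trial count grows - which is why the `itc.plot_topo` call above caps the colour scale at `vmax=1.`.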
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Frequency analysis
Step3: Now let's take a look at the spatial distributions of the PSD.
Step4: Alternatively, you can also create PSDs from Epochs objects with functions such as psd_multitaper
Step5: Time-frequency analysis
Step6: Inspect power
Step7: Joint Plot
Step8: Inspect ITC
|
3,698
|
<ASSISTANT_TASK:>
Python Code:
# Run this once before starting your tasks
import mcpi.minecraft as minecraft
import mcpi.block as block
import time
import thread  # Python 2 only; Python 3 renamed this module to _thread
mc = minecraft.Minecraft.create()
# Task 1 code
# add a variable with an initial value
# toggle the variable
# print the value of the variable
# Task 2 code
# Run the Task 2 function using thread.start_new_thread
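One possible completion of the placeholders above, as a sketch only: the toggle uses `not`, and the worker function `blink` is a hypothetical stand-in for whatever Minecraft calls Task 2 actually makes. Since the original uses the Python 2 `thread` module, this sketch shows the Python 3 `threading` equivalent instead.

```python
import threading
import time

# Task 1: a boolean flag with an initial value, toggled and printed
running = True          # add a variable with an initial value
running = not running   # toggle the variable
print(running)          # prints False

# Task 2: a function suitable for running on a background thread
def blink(times=3):
    """Hypothetical worker: stands in for repeated mc.setBlock(...) calls."""
    for _ in range(times):
        time.sleep(0.01)

t = threading.Thread(target=blink)  # Python 3 analogue of thread.start_new_thread
t.start()
t.join()
```

Under Python 2 with `mcpi`, the last three lines would instead be `thread.start_new_thread(blink, ())`.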
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Task 1
Step2: Task 2
Step3: We need to run the function you defined in Task 2 using the following statement.
|
3,699
|
<ASSISTANT_TASK:>
Python Code:
from mpl_toolkits.basemap import Basemap
import opsimsummary as oss
oss.__VERSION__
from opsimsummary import HealpixTree, pixelsForAng, HealpixTiles
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import healpy as hp
htree = HealpixTree(nside=1, nest=True)
# By default the nside argument to this function is the nside at which htree was instantiated
print(htree.nside)
ipix = np.array([0, 1])
htree.pixelsAtNextLevel(ipix)
# We can also be specific, and do this for a particular NSIDE
htree.pixelsAtNextLevel(ipix, nside=128)
# How many subdivisions required to go to NSIDE =256 ?
desideredNSIDE = 256
res = int(np.log2(desideredNSIDE))
# nsidenew should be the NSIDE at the resolution we want
nsidenew, pixels = htree.pixelsAtResolutionLevel(1, res, 1)
assert nsidenew == desideredNSIDE
n256 = hp.nside2npix(256)
n1 = hp.nside2npix(1)
arr1 = np.ones(n1) * hp.UNSEEN
arr1[1] = 1
arr256 = np.ones(n256) * hp.UNSEEN
arr256[pixels] = -2
hp.mollview(arr1, nest=True)
hp.mollview(arr256, nest=True)
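In NESTED ordering, subdividing a pixel is pure index arithmetic: doubling `nside` turns pixel `p` into the four children `4*p .. 4*p + 3`. The sketch below illustrates that standard NEST indexing rule with plain NumPy; whether `pixelsAtNextLevel` is implemented exactly this way internally is an assumption.

```python
import numpy as np

def children_next_level(ipix):
    """Children of NESTED pixels when nside doubles: 4p, 4p+1, 4p+2, 4p+3."""
    ipix = np.asarray(ipix)
    return (4 * ipix[:, None] + np.arange(4)).ravel()

def descendants(ipix, levels):
    """All NESTED descendants after `levels` subdivisions (4**levels each)."""
    for _ in range(levels):
        ipix = children_next_level(ipix)
    return ipix

print(children_next_level([0, 1]))  # [0 1 2 3 4 5 6 7]
print(len(descendants([1], 8)))     # 4**8 = 65536 pixels
```

Going from NSIDE=1 to NSIDE=256 is log2(256) = 8 doublings, so pixel 1 owns 4**8 = 65536 pixels at NSIDE=256 - consistent with the `pixelsAtResolutionLevel` call above.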
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Instantiate the object
Step2: Find all the healpixels at the resolution one level higher
Step3: Find all the pixels at NSIDE=256 which are owned by pixelid=1 at NSIDE=1
Step4: Visualize this
|